@@ -28,10 +28,17 @@ This microservice processes kafka events related to challenges and updates data

## Configuration

- Configuration for the processor is at `config/default.js`.
+ Configuration for the processor is at `config/default.js` and `config/production.js`.

The following parameters can be set in config files or in env variables:

+ - DISABLE_LOGGING: whether to disable logging, default is false
- LOG_LEVEL: the log level; default value: 'debug'
+ - AUTH0_URL: AUTH0 URL, used to get M2M token
+ - AUTH0_AUDIENCE: AUTH0 audience, used to get M2M token, default value is 'https://www.topcoder-dev.com'
+ - TOKEN_CACHE_TIME: AUTH0 token cache time, used to get M2M token
+ - AUTH0_PROXY_SERVER_URL: Auth0 proxy server URL, used to get TC M2M token
+ - AUTH0_CLIENT_ID: AUTH0 client id, used to get M2M token
+ - AUTH0_CLIENT_SECRET: AUTH0 client secret, used to get M2M token
- KAFKA_URL: comma separated Kafka hosts; default value: 'localhost:9092'
- KAFKA_GROUP_ID: the Kafka group id; default value: 'challenge-processor-es'
- KAFKA_CLIENT_CERT: Kafka connection certificate, optional; default value is undefined;
@@ -42,13 +49,29 @@ if not provided, then SSL connection is not used, direct insecure connection is

  if provided, it can be either path to private key file or private key content

- UPDATE_DATA_TOPIC: update data Kafka topic, default value is 'challenge.notification.update'
- CREATE_RESOURCE_TOPIC: create resource Kafka topic, default value is 'challenge.action.resource.create'
+ - UPDATE_RESOURCE_TOPIC: update resource Kafka topic, default value is 'challenge.action.resource.update'
- DELETE_RESOURCE_TOPIC: delete resource Kafka topic, default value is 'challenge.action.resource.delete'
- CREATE_SUBMISSION_TOPIC: create submission Kafka topic, default value is 'submission.notification.create'
+ - UPDATE_SUBMISSION_TOPIC: update submission Kafka topic, default value is 'submission.notification.update'
- DELETE_SUBMISSION_TOPIC: delete submission Kafka topic, default value is 'submission.notification.delete'
- - REGISTRANT_ROLE_ID: challenge registrant role id, if not provided then any role is considered as registrant
+ - REGISTRANT_RESOURCE_ROLE_ID: challenge registrant resource role id, if not provided then any role is considered as registrant
+ - SUBMISSIONS_API_URL: TC submissions API URL, default value is mock API 'http://localhost:4000/v5/submissions'
+ - RESOURCES_API_URL: TC resources API URL, default value is mock API 'http://localhost:4000/v5/resources'
+ - CONTEST_SUBMISSION_TYPE: contest submission type name, default value is 'Contest Submission'
+ - CHECKPOINT_SUBMISSION_TYPE: checkpoint submission type name, default value is 'Checkpoint Submission'
+ - REQUEST_TIMEOUT: superagent request timeout in milliseconds, default value is 20000
- esConfig: config object for Elasticsearch

Refer to `esConfig` variable in `config/default.js` for ES related configuration.
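As a rough illustration of how parameters like these are typically wired up, a `config/default.js` in this style reads each env variable and falls back to the documented default. The sketch below is a hypothetical excerpt (names are taken from the parameter list above; the real file may differ):

```javascript
// Hypothetical excerpt of a config/default.js in the usual node-config style:
// each parameter reads its env variable and falls back to the documented
// default. Not the processor's actual file.
const config = {
  DISABLE_LOGGING: process.env.DISABLE_LOGGING === 'true', // default false
  LOG_LEVEL: process.env.LOG_LEVEL || 'debug',
  KAFKA_URL: process.env.KAFKA_URL || 'localhost:9092',
  KAFKA_GROUP_ID: process.env.KAFKA_GROUP_ID || 'challenge-processor-es',
  UPDATE_DATA_TOPIC: process.env.UPDATE_DATA_TOPIC || 'challenge.notification.update',
  // numeric values coming from the environment arrive as strings
  REQUEST_TIMEOUT: process.env.REQUEST_TIMEOUT ? Number(process.env.REQUEST_TIMEOUT) : 20000
}

module.exports = config
```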
+
+ Set the following environment variables so that the app can get TC M2M token (use 'set' instead of 'export' for Windows OS):
+ ```
+ export AUTH0_CLIENT_ID=EkE9qU3Ey6hdJwOsF1X0duwskqcDuElW
+ export AUTH0_CLIENT_SECRET=Iq7REiEacFmepPh0UpKoOmc6u74WjuoJriLayeVnt311qeKNBvhRNBe9BZ8WABYk
+ export AUTH0_URL=https://topcoder-dev.auth0.com/oauth/token
+ export AUTH0_AUDIENCE=https://m2m.topcoder-dev.com/
+ ```
+
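These variables feed a standard OAuth2 client_credentials request against AUTH0_URL. The real app delegates token retrieval to a Topcoder helper library, so the sketch below only illustrates how the variables map onto that request; `buildTokenRequest` is a hypothetical name, not the library's API:

```javascript
// Hypothetical sketch: how the AUTH0_* variables map onto a standard
// OAuth2 client_credentials token request. The real app uses a Topcoder
// helper library for this; shown only to clarify the variables' roles.
function buildTokenRequest (env) {
  return {
    url: env.AUTH0_URL, // e.g. https://topcoder-dev.auth0.com/oauth/token
    body: {
      grant_type: 'client_credentials',
      client_id: env.AUTH0_CLIENT_ID,
      client_secret: env.AUTH0_CLIENT_SECRET,
      audience: env.AUTH0_AUDIENCE
    }
  }
}
```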
Also note that there is a `/health` endpoint that checks for the health of the app. This sets up an expressjs server and listens on the environment variable `PORT`. It's not part of the configuration file and needs to be passed as an environment variable.

Config for tests is at `config/test.js`; it overrides some default config.
@@ -86,10 +109,14 @@ below provides details to setup Kafka server in Mac, Windows will use bat comman

`bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic challenge.action.resource.create`

+ `bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic challenge.action.resource.update`
+
`bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic challenge.action.resource.delete`

`bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic submission.notification.create`

+ `bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic submission.notification.update`
+
`bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic submission.notification.delete`

- verify that the topics are created:
@@ -103,7 +130,7 @@ it should list out the created topics

in the console, write message, one message per line:

- `{ "topic": "challenge.notification.update", "originator": "challenge-api", "timestamp": "2019-02-17T01:00:00", "mime-type": "application/json", "payload": { "id": "173803d3-019e-4033-b1cf-d7205c7f774c", "typeId": "8e17090c-465b-4c17-b6d9-dfa16300b0ff", "track": "Code", "name": "test3", "description": "desc3", "timelineTemplateId": "8e17090c-465b-4c17-b6d9-dfa16300b0dd", "groups": ["group2", "group3"], "updated": "2019-02-17T01:00:00", "updatedBy": "admin" } }`
+ `{ "topic": "challenge.notification.update", "originator": "challenge-api", "timestamp": "2019-02-17T01:00:00", "mime-type": "application/json", "payload": { "id": "7b37a31e-484c-4d1e-aa9f-cfd6656e11d8", "typeId": "8e17090c-465b-4c17-b6d9-dfa16300b0ff", "track": "Code", "name": "test3", "description": "desc3", "timelineTemplateId": "8e17090c-465b-4c17-b6d9-dfa16300b0dd", "groups": ["group2", "group3"], "updated": "2019-02-17T01:00:00", "updatedBy": "admin" } }`

- optionally, use another terminal, go to same directory, start a consumer to view the messages:
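Messages follow the envelope shown in the sample above (`topic`, `originator`, `timestamp`, `mime-type`, `payload`). A hedged sketch of how a consumer might parse and sanity-check that envelope before processing; the field names come from the sample message, but the check itself is illustrative, not the processor's actual validation:

```javascript
// Illustrative envelope check for messages like the sample above;
// the real processor's validation (e.g. via a schema library) may differ.
function parseMessage (raw) {
  const message = JSON.parse(raw)
  for (const field of ['topic', 'originator', 'timestamp', 'mime-type', 'payload']) {
    if (!(field in message)) {
      throw new Error(`Invalid message, missing field: ${field}`)
    }
  }
  return message
}

// parse a trimmed-down version of the sample message
const sample = '{ "topic": "challenge.notification.update", "originator": "challenge-api", "timestamp": "2019-02-17T01:00:00", "mime-type": "application/json", "payload": { "id": "7b37a31e-484c-4d1e-aa9f-cfd6656e11d8", "name": "test3" } }'
const msg = parseMessage(sample)
```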
@@ -115,6 +142,7 @@ in the console, write message, one message per line:

- in the `docker-es` folder, run `docker-compose up`

### Local deployment without Docker
+ - start mock API: go to `mock` folder, run `npm i` and `npm start`; mock API is running at `http://localhost:4000`
- install dependencies `npm i`
- run code lint check `npm run lint`, running `npm run lint:fix` can fix some lint errors if any
- initialize Elasticsearch, create configured Elasticsearch index if not present: `npm run init-es`
@@ -145,7 +173,6 @@ docker-compose up

Test configuration is at `config/test.js`. You don't need to change it.

The following test parameters can be set in config file or in env variables:

- - REGISTRANT_ROLE_ID: challenge registrant role id, if not provided then any role is considered as registrant
- esConfig: config object for Elasticsearch

Integration tests use a different index `challenge-test` which is not the same as the usual index `challenge`.
@@ -157,8 +184,8 @@ export ES_INDEX=challenge-test

Or, you may temporarily modify the esConfig.ES_INDEX in config/default.js to `challenge-test` and run `npm run init-es` to create the test index.

### Prepare
- - Start Local services.
- - Initialize Elasticsearch.
+ - Mock API should be started.
+ - Initialize Elasticsearch.
- Various config parameters should be properly set.

### Running unit tests