
Commit f3610a8

Merge pull request #123 from topcoder-platform/revert-121-feature/improve-local-setup
Revert "Winner submission for Topcoder TaaS API - Improve Local Setup"
2 parents c76ed1d + b067dc7 commit f3610a8

File tree

12 files changed: +95 −454 lines changed

README.md

Lines changed: 75 additions & 177 deletions
@@ -5,11 +5,68 @@
 - nodejs https://nodejs.org/en/ (v12+)
 - PostgreSQL
 - ElasticSearch (7.x)
-- Zookeeper
-- Kafka
-- Docker (version 20.10 and above)
+- Docker
 - Docker-Compose

+## Configuration
+
+Configuration for the application is at `config/default.js`.
+
+The following parameters can be set in config files or in env variables:
+
+- `LOG_LEVEL`: the log level, default is 'debug'
+- `PORT`: the server port, default is 3000
+- `BASE_PATH`: the server API base path
+- `AUTH_SECRET`: the authorization secret used during token verification
+- `VALID_ISSUERS`: the valid issuers of tokens, a JSON array containing the valid issuers
+
+- `AUTH0_URL`: Auth0 URL, used to get TC M2M token
+- `AUTH0_AUDIENCE`: Auth0 audience, used to get TC M2M token
+- `AUTH0_AUDIENCE_UBAHN`: Auth0 audience for U-Bahn
+- `TOKEN_CACHE_TIME`: Auth0 token cache time, used to get TC M2M token
+- `AUTH0_CLIENT_ID`: Auth0 client id, used to get TC M2M token
+- `AUTH0_CLIENT_SECRET`: Auth0 client secret, used to get TC M2M token
+- `AUTH0_PROXY_SERVER_URL`: proxy Auth0 URL, used to get TC M2M token
+
+- `m2m.M2M_AUDIT_USER_ID`: default value is `00000000-0000-0000-0000-000000000000`
+- `m2m.M2M_AUDIT_HANDLE`: default value is `TopcoderService`
+
+- `DATABASE_URL`: PostgreSQL database URL
+- `DB_SCHEMA_NAME`: string - PostgreSQL database target schema
+- `PROJECT_API_URL`: the project service URL
+- `TC_API`: the Topcoder v5 URL
+- `ORG_ID`: the organization id
+- `TOPCODER_SKILL_PROVIDER_ID`: the referenced skill provider id
+
+- `esConfig.HOST`: the Elasticsearch host
+- `esConfig.ES_INDEX_JOB`: the job index
+- `esConfig.ES_INDEX_JOB_CANDIDATE`: the job candidate index
+- `esConfig.ES_INDEX_RESOURCE_BOOKING`: the resource booking index
+- `esConfig.AWS_REGION`: the Amazon region to use when using the AWS Elasticsearch service
+- `esConfig.ELASTICCLOUD.id`: the Elastic Cloud id, if your Elasticsearch instance is hosted on Elastic Cloud. DO NOT provide a value for `ES_HOST` if you are using this
+- `esConfig.ELASTICCLOUD.username`: the Elastic Cloud username for basic authentication. Provide this only if your Elasticsearch instance is hosted on Elastic Cloud
+- `esConfig.ELASTICCLOUD.password`: the Elastic Cloud password for basic authentication. Provide this only if your Elasticsearch instance is hosted on Elastic Cloud
+
+- `BUSAPI_URL`: Topcoder Bus API URL
+- `KAFKA_ERROR_TOPIC`: the error topic to which the Bus API will publish any errors
+- `KAFKA_MESSAGE_ORIGINATOR`: the originator value for the Kafka messages
+
+- `TAAS_JOB_CREATE_TOPIC`: the create job entity Kafka message topic
+- `TAAS_JOB_UPDATE_TOPIC`: the update job entity Kafka message topic
+- `TAAS_JOB_DELETE_TOPIC`: the delete job entity Kafka message topic
+- `TAAS_JOB_CANDIDATE_CREATE_TOPIC`: the create job candidate entity Kafka message topic
+- `TAAS_JOB_CANDIDATE_UPDATE_TOPIC`: the update job candidate entity Kafka message topic
+- `TAAS_JOB_CANDIDATE_DELETE_TOPIC`: the delete job candidate entity Kafka message topic
+- `TAAS_RESOURCE_BOOKING_CREATE_TOPIC`: the create resource booking entity Kafka message topic
+- `TAAS_RESOURCE_BOOKING_UPDATE_TOPIC`: the update resource booking entity Kafka message topic
+- `TAAS_RESOURCE_BOOKING_DELETE_TOPIC`: the delete resource booking entity Kafka message topic
+
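The restored configuration entries above all follow one pattern in `config/default.js`: read an environment variable, fall back to a hard-coded default. A minimal, self-contained sketch of that pattern (the `buildConfig` helper and the shortened issuer list are illustrative, not part of the codebase; in the real config `VALID_ISSUERS` stays a JSON string and is parsed where needed):

```javascript
// Sketch of the env-override pattern used by config/default.js:
// each setting reads its environment variable and falls back to a default.
function buildConfig (env) {
  return {
    LOG_LEVEL: env.LOG_LEVEL || 'debug',
    PORT: env.PORT || 3000,
    BASE_PATH: env.BASE_PATH || '/api/v5',
    // VALID_ISSUERS is supplied as a JSON string and parsed for use
    VALID_ISSUERS: JSON.parse(env.VALID_ISSUERS || '["https://api.topcoder-dev.com"]')
  }
}

// With no overrides, the documented defaults apply.
const defaults = buildConfig({})
console.log(defaults.LOG_LEVEL) // 'debug'

// Any env variable overrides its default.
const overridden = buildConfig({ PORT: '8080' })
console.log(overridden.PORT) // '8080'
```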
+## PostgreSQL Database Setup
+- Go to https://www.postgresql.org/, download and install PostgreSQL.
+- Modify `DATABASE_URL` under `config/default.js` to match your environment.
+- Run `npm run init-db` to create the tables (run `npm run init-db force` to force re-creating the tables).
+
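The `DATABASE_URL` mentioned above is a standard PostgreSQL connection string. As an illustration of its parts, Node's built-in WHATWG `URL` parser can break down the default value used in this repo (no extra imports needed):

```javascript
// Break a PostgreSQL connection string into its components using the
// WHATWG URL parser built into Node.js.
const databaseUrl = 'postgres://postgres:postgres@localhost:5432/postgres'
const parsed = new URL(databaseUrl)

console.log(parsed.protocol)          // 'postgres:'
console.log(parsed.username)          // 'postgres'  (DB user)
console.log(parsed.hostname)          // 'localhost' (DB host)
console.log(parsed.port)              // '5432'      (default PostgreSQL port)
console.log(parsed.pathname.slice(1)) // 'postgres'  (database name)
```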
 ## DB Migration
 - `npm run migrate`: run any migration files which haven't run yet.
 - `npm run migrate:undo`: revert the most recent migration.
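The two commands above behave like any ledger-based migration runner: `migrate` executes only the migration files not yet recorded as run, and `migrate:undo` reverts the most recent one. A toy sketch of those semantics (illustrative only, not the project's actual runner):

```javascript
// `migrate`: run only the migrations that have not been executed yet,
// appending them to the ledger of executed migrations.
function migrate (allMigrations, executed) {
  const pending = allMigrations.filter(name => !executed.includes(name))
  return executed.concat(pending)
}

// `migrate:undo`: revert (drop) the most recently executed migration.
function migrateUndo (executed) {
  return executed.slice(0, -1)
}

let executed = ['001-init']
executed = migrate(['001-init', '002-add-index'], executed)
console.log(executed) // [ '001-init', '002-add-index' ]
executed = migrateUndo(executed)
console.log(executed) // [ '001-init' ]
```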
@@ -23,186 +80,27 @@ The following parameters can be set in the config file or via env variables:
 - `database`: set via env `DB_NAME`; database name
 - `host`: set via env `DB_HOST`; database host name

-### Steps to run locally
-1. 📦 Install npm dependencies
-
-```bash
-npm install
-```
-
-2. ⚙ Local config
-
-1. In the root directory create a `.env` file with the following environment variables. Values for **Auth0 config** should be shared with you on the forum.<br>
-```bash
-# Auth0 config
-AUTH0_URL=
-AUTH0_AUDIENCE=
-AUTH0_AUDIENCE_UBAHN=
-AUTH0_CLIENT_ID=
-AUTH0_CLIENT_SECRET=
-AUTH0_PROXY_SERVER_URL=
-
-# Locally deployed services (via docker-compose)
-ES_HOST=http://dockerhost:9200
-DATABASE_URL=postgres://postgres:postgres@dockerhost:5432/postgres
-BUSAPI_URL=http://dockerhost:8002/v5
-```
-
-- Values from this file are automatically used by many `npm` commands.
-- ⚠️ Never commit this file or a copy of it to the repository!
-
-1. Set `dockerhost` to point to the IP address of Docker. The Docker IP address depends on your system. For example, if Docker runs on IP `127.0.0.1`, add the following line to your `/etc/hosts` file:
-```
-127.0.0.1 dockerhost
-```
-
-Alternatively, you may update the `.env` file and replace `dockerhost` with your Docker IP address.
-
-1. 🚢 Start docker-compose with the services which are required to start Taas API locally
-
-*(NOTE: Please ensure that you have installed Docker version 20.10 or above, since the docker-compose file uses a new feature introduced in Docker 20.10. Run `docker --version` to check your Docker version.)*
+## ElasticSearch Setup
+- Go to https://www.elastic.co/downloads/, download and install Elasticsearch.
+- Modify `esConfig` under `config/default.js` to match your environment.
+- Run `npm run create-index` to create the ES indexes.
+- Run `npm run delete-index` to delete the ES indexes.
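When editing `esConfig`, note the rule from the configuration section above: if `ELASTICCLOUD.id` is set, `ES_HOST` must not be. A hypothetical helper sketching that mutual exclusion (the helper is not part of the codebase, and the returned shape mirroring the official Elasticsearch JS client's `node`/`cloud`/`auth` options is an assumption):

```javascript
// Sketch of the host-vs-Elastic-Cloud rule: when ELASTICCLOUD.id is set,
// ES_HOST must not be, and the cloud credentials are used instead.
function resolveEsTarget (esConfig) {
  if (esConfig.ELASTICCLOUD && esConfig.ELASTICCLOUD.id) {
    if (esConfig.HOST) {
      throw new Error('Do not set ES_HOST when ELASTICCLOUD.id is provided')
    }
    return {
      cloud: { id: esConfig.ELASTICCLOUD.id },
      auth: {
        username: esConfig.ELASTICCLOUD.username,
        password: esConfig.ELASTICCLOUD.password
      }
    }
  }
  // Plain self-hosted Elasticsearch: just point at the host.
  return { node: esConfig.HOST || 'http://localhost:9200' }
}

console.log(resolveEsTarget({ HOST: 'http://localhost:9200', ELASTICCLOUD: {} }))
// { node: 'http://localhost:9200' }
```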

-```bash
-npm run services:up
-```
+## Local Deployment

-Wait until all containers are fully started. As a good indicator, wait until `es-processor` has started successfully by viewing its logs:
-
-```bash
-npm run services:logs -- -f es-processor
-```
-
-<details><summary>🖱️ Click to see a good logs example</summary>
+- Install dependencies `npm install`
+- Run lint `npm run lint`
+- Run lint fix `npm run lint:fix`
+- Clear and init db `npm run init-db force`
+- Clear and create es index

 ``` bash
-tc-taas-es-processor | Waiting for kafka-client to exit....
-tc-taas-es-processor | kafka-client exited!
-tc-taas-es-processor |
-tc-taas-es-processor | > taas-es-processor@1.0.0 start /opt/app
-tc-taas-es-processor | > node src/app.js
-tc-taas-es-processor |
-tc-taas-es-processor | [2021-01-21T02:44:43.442Z] app INFO : Starting kafka consumer
-tc-taas-es-processor | 2021-01-21T02:44:44.534Z INFO no-kafka-client Joined group taas-es-processor generationId 1 as no-kafka-client-70c25a43-af93-495e-a123-0c4f4ea389eb
-tc-taas-es-processor | 2021-01-21T02:44:44.534Z INFO no-kafka-client Elected as group leader
-tc-taas-es-processor | 2021-01-21T02:44:44.614Z DEBUG no-kafka-client Subscribed to taas.jobcandidate.create:0 offset 0 leader kafka:9093
-tc-taas-es-processor | 2021-01-21T02:44:44.615Z DEBUG no-kafka-client Subscribed to taas.job.create:0 offset 0 leader kafka:9093
-tc-taas-es-processor | 2021-01-21T02:44:44.615Z DEBUG no-kafka-client Subscribed to taas.resourcebooking.delete:0 offset 0 leader kafka:9093
-tc-taas-es-processor | 2021-01-21T02:44:44.616Z DEBUG no-kafka-client Subscribed to taas.jobcandidate.delete:0 offset 0 leader kafka:9093
-tc-taas-es-processor | 2021-01-21T02:44:44.616Z DEBUG no-kafka-client Subscribed to taas.jobcandidate.update:0 offset 0 leader kafka:9093
-tc-taas-es-processor | 2021-01-21T02:44:44.617Z DEBUG no-kafka-client Subscribed to taas.resourcebooking.create:0 offset 0 leader kafka:9093
-tc-taas-es-processor | 2021-01-21T02:44:44.617Z DEBUG no-kafka-client Subscribed to taas.job.delete:0 offset 0 leader kafka:9093
-tc-taas-es-processor | 2021-01-21T02:44:44.618Z DEBUG no-kafka-client Subscribed to taas.job.update:0 offset 0 leader kafka:9093
-tc-taas-es-processor | 2021-01-21T02:44:44.618Z DEBUG no-kafka-client Subscribed to taas.resourcebooking.update:0 offset 0 leader kafka:9093
-tc-taas-es-processor | [2021-01-21T02:44:44.619Z] app INFO : Initialized.......
-tc-taas-es-processor | [2021-01-21T02:44:44.623Z] app INFO : taas.job.create,taas.job.update,taas.job.delete,taas.jobcandidate.create,taas.jobcandidate.update,taas.jobcandidate.delete,taas.resourcebooking.create,taas.resourcebooking.update,taas.resourcebooking.delete
-tc-taas-es-processor | [2021-01-21T02:44:44.623Z] app INFO : Kick Start.......
-tc-taas-es-processor | ********** Topcoder Health Check DropIn listening on port 3001
-tc-taas-es-processor | Topcoder Health Check DropIn started and ready to roll
+npm run delete-index # run this if you already created the index
+npm run create-index
 ```

-</details>
-
-If you want to learn more about the docker-compose configuration
-<details><summary>🖱️ Click to see more details here</summary>
-<br>
-
-This docker-compose file starts the following services:
-| Service | Name | Port |
-|----------|:-----:|:----:|
-| PostgreSQL | db | 5432 |
-| Elasticsearch | esearch | 9200 |
-| Zookeeper | zookeeper | 2181 |
-| Kafka | kafka | 9092 |
-| [tc-bus-api](https://github.com/topcoder-platform/tc-bus-api) | bus-api | 8002 |
-| [taas-es-processor](https://github.com/topcoder-platform/taas-es-processor) | es-processor | 5000 |
-
-- As many of the Topcoder services in this docker-compose require Auth0 configuration for M2M calls, our docker-compose file passes the environment variables `AUTH0_CLIENT_ID`, `AUTH0_CLIENT_SECRET`, `AUTH0_URL`, `AUTH0_AUDIENCE`, `AUTH0_PROXY_SERVER_URL` to its containers. docker-compose takes them from the `.env` file if provided.
-
-- `docker-compose` automatically creates the Kafka topics used by `taas-apis`, listed in `./local/kafka-client/topics.txt`.
-
-- To view the logs from any container inside docker-compose use the following command, replacing `SERVICE_NAME` with the corresponding value under the **Name** column in the above table:
-
-```bash
-npm run services:logs -- -f SERVICE_NAME
-```
-
-- If you want to modify the code of any of the services which are run inside this docker-compose file, you can stop such a service inside docker-compose with the command `docker-compose -f local/docker-compose.yaml stop <SERVICE_NAME>` and run the service separately, following its README file.<br /><br />
-*NOTE: If Kafka (along with Zookeeper) is stopped and brought up again on the host machine, you will need to restart the `es-processor` service by running `docker-compose -f local/docker-compose.yaml restart es-processor` so the processor will connect with the new Zookeeper.*
-
-*NOTE: In production these dependencies / services are hosted & managed outside Taas API.*
-
-2. ♻ Init DB and ES
-
-```bash
-npm run local:init
-```
-
-This command will do 2 things:
-- create Database tables
-- create Elasticsearch indexes
-
-3. 🚀 Start Taas API
-
-```bash
-npm run dev
-```
-
-Runs the Taas API using nodemon, so it restarts whenever any file is updated.
-The API will be served on `http://localhost:3000`.
-
-## NPM Commands
-
-| Command | Description |
-| -- | -- |
-| `npm start` | Start app. |
-| `npm run dev` | Start app using `nodemon`. |
-| `npm run lint` | Check for lint errors. |
-| `npm run lint:fix` | Check for lint errors and fix errors automatically when possible. |
-| `npm run services:up` | Start services via docker-compose for local development. |
-| `npm run services:down` | Stop services via docker-compose for local development. |
-| `npm run services:logs -- -f <service_name>` | View logs of some service inside docker-compose. |
-| `npm run local:init` | Create Database and Elasticsearch indexes. |
-| `npm run init-db` | Create database. |
-| `npm run init-db force` | Force re-creating database. |
-| `npm run create-index` | Create Elasticsearch indexes. |
-| `npm run delete-index` | Delete Elasticsearch indexes. |
-| `npm run migrate` | Run DB migration. |
-| `npm run migrate:undo` | Undo DB migration executed previously. |
-| `npm run test-data` | Insert test data. |
-| `npm run test` | Run tests. |
-| `npm run cov` | Run tests with coverage. |
-
-## Kafka Commands
-
-You can use the following commands to manipulate Kafka topics and messages:
-
-(Replace `TOPIC_NAME` with the name of the desired topic)
-
-### Create Topic
-
-```bash
-docker exec tc-taas-kafka /opt/kafka/bin/kafka-topics.sh --create --zookeeper zookeeper:2181 --partitions 1 --replication-factor 1 --topic TOPIC_NAME
-```
-
-### List Topics
-
-```bash
-docker exec tc-taas-kafka /opt/kafka/bin/kafka-topics.sh --list --zookeeper zookeeper:2181
-```
-
-### Watch Topic
-
-```bash
-docker exec tc-taas-kafka /opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic TOPIC_NAME
-```
-
-### Post Message to Topic (from stdin)
-
-```bash
-docker exec -it tc-taas-kafka /opt/kafka/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic TOPIC_NAME
-```
-
-- Enter or copy/paste the message into the console after starting this command.
+- Start app `npm start`
+- App is running at `http://localhost:3000`

 ## Local Deployment with Docker

config/default.js

Lines changed: 1 addition & 42 deletions
@@ -1,104 +1,63 @@
+require('dotenv').config()
 module.exports = {
-  // the log level
   LOG_LEVEL: process.env.LOG_LEVEL || 'debug',
-  // the server port
   PORT: process.env.PORT || 3000,
-  // the server api base path
   BASE_PATH: process.env.BASE_PATH || '/api/v5',

-  // The authorization secret used during token verification.
   AUTH_SECRET: process.env.AUTH_SECRET || 'mysecret',
-  // The valid issuer of tokens, a json array contains valid issuer.
   VALID_ISSUERS: process.env.VALID_ISSUERS || '["https://api.topcoder-dev.com", "https://api.topcoder.com", "https://topcoder-dev.auth0.com/", "https://auth.topcoder-dev.com/"]',
-  // Auth0 URL, used to get TC M2M token
   AUTH0_URL: process.env.AUTH0_URL,
-  // Auth0 audience, used to get TC M2M token
   AUTH0_AUDIENCE: process.env.AUTH0_AUDIENCE,
-  // Auth0 audience for U-Bahn
   AUTH0_AUDIENCE_UBAHN: process.env.AUTH0_AUDIENCE_UBAHN,
-  // Auth0 token cache time, used to get TC M2M token
   TOKEN_CACHE_TIME: process.env.TOKEN_CACHE_TIME,
-  // Auth0 client id, used to get TC M2M token
   AUTH0_CLIENT_ID: process.env.AUTH0_CLIENT_ID,
-  // Auth0 client secret, used to get TC M2M token
   AUTH0_CLIENT_SECRET: process.env.AUTH0_CLIENT_SECRET,
-  // Proxy Auth0 URL, used to get TC M2M token
   AUTH0_PROXY_SERVER_URL: process.env.AUTH0_PROXY_SERVER_URL,

   m2m: {
-    // default user ID for m2m user
     M2M_AUDIT_USER_ID: process.env.M2M_AUDIT_USER_ID || '00000000-0000-0000-0000-000000000000',
-    // default handle name for m2m user
     M2M_AUDIT_HANDLE: process.env.M2M_AUDIT_HANDLE || 'TopcoderService'
   },

-  // the Topcoder v5 url
   TC_API: process.env.TC_API || 'https://api.topcoder-dev.com/v5',
-  // the organization id
   ORG_ID: process.env.ORG_ID || '36ed815b-3da1-49f1-a043-aaed0a4e81ad',
-  // the referenced skill provider id
   TOPCODER_SKILL_PROVIDER_ID: process.env.TOPCODER_SKILL_PROVIDER_ID || '9cc0795a-6e12-4c84-9744-15858dba1861',

-  // the TC API for v3 users
   TOPCODER_USERS_API: process.env.TOPCODER_USERS_API || 'https://api.topcoder-dev.com/v3/users',

-  // PostgreSQL database url.
   DATABASE_URL: process.env.DATABASE_URL || 'postgres://postgres:postgres@localhost:5432/postgres',
-  // string - PostgreSQL database target schema
   DB_SCHEMA_NAME: process.env.DB_SCHEMA_NAME || 'bookings',
-  // the project service url
   PROJECT_API_URL: process.env.PROJECT_API_URL || 'https://api.topcoder-dev.com',

   esConfig: {
-    // the elasticsearch host
     HOST: process.env.ES_HOST || 'http://localhost:9200',

     ELASTICCLOUD: {
-      // The elastic cloud id, if your elasticsearch instance is hosted on elastic cloud. DO NOT provide a value for ES_HOST if you are using this
       id: process.env.ELASTICCLOUD_ID,
-      // The elastic cloud username for basic authentication. Provide this only if your elasticsearch instance is hosted on elastic cloud
       username: process.env.ELASTICCLOUD_USERNAME,
-      // The elastic cloud password for basic authentication. Provide this only if your elasticsearch instance is hosted on elastic cloud
       password: process.env.ELASTICCLOUD_PASSWORD
     },

-    // The Amazon region to use when using AWS Elasticsearch service
     AWS_REGION: process.env.AWS_REGION || 'us-east-1', // AWS Region to be used if we use AWS ES

-    // the job index
     ES_INDEX_JOB: process.env.ES_INDEX_JOB || 'job',
-    // // The elastic cloud id, if your elasticsearch instance is hosted on elastic cloud. DO NOT provide a value for ES_HOST if you are using this
-    // the job candidate index
     ES_INDEX_JOB_CANDIDATE: process.env.ES_INDEX_JOB_CANDIDATE || 'job_candidate',
-    // the resource booking index
     ES_INDEX_RESOURCE_BOOKING: process.env.ES_INDEX_RESOURCE_BOOKING || 'resource_booking'
   },

-  // Topcoder Bus API URL
   BUSAPI_URL: process.env.BUSAPI_URL || 'https://api.topcoder-dev.com/v5',
-  // The error topic at which bus api will publish any errors
   KAFKA_ERROR_TOPIC: process.env.KAFKA_ERROR_TOPIC || 'common.error.reporting',
-  // The originator value for the kafka messages
   KAFKA_MESSAGE_ORIGINATOR: process.env.KAFKA_MESSAGE_ORIGINATOR || 'taas-api',
   // topics for job service
-  // the create job entity Kafka message topic
   TAAS_JOB_CREATE_TOPIC: process.env.TAAS_JOB_CREATE_TOPIC || 'taas.job.create',
-  // the update job entity Kafka message topic
   TAAS_JOB_UPDATE_TOPIC: process.env.TAAS_JOB_UPDATE_TOPIC || 'taas.job.update',
-  // the delete job entity Kafka message topic
   TAAS_JOB_DELETE_TOPIC: process.env.TAAS_JOB_DELETE_TOPIC || 'taas.job.delete',
   // topics for jobcandidate service
-  // the create job candidate entity Kafka message topic
   TAAS_JOB_CANDIDATE_CREATE_TOPIC: process.env.TAAS_JOB_CANDIDATE_CREATE_TOPIC || 'taas.jobcandidate.create',
-  // the update job candidate entity Kafka message topic
   TAAS_JOB_CANDIDATE_UPDATE_TOPIC: process.env.TAAS_JOB_CANDIDATE_UPDATE_TOPIC || 'taas.jobcandidate.update',
-  // the delete job candidate entity Kafka message topic
   TAAS_JOB_CANDIDATE_DELETE_TOPIC: process.env.TAAS_JOB_CANDIDATE_DELETE_TOPIC || 'taas.jobcandidate.delete',
   // topics for job service
-  // the create resource booking entity Kafka message topic
   TAAS_RESOURCE_BOOKING_CREATE_TOPIC: process.env.TAAS_RESOURCE_BOOKING_CREATE_TOPIC || 'taas.resourcebooking.create',
-  // the update resource booking entity Kafka message topic
   TAAS_RESOURCE_BOOKING_UPDATE_TOPIC: process.env.TAAS_RESOURCE_BOOKING_UPDATE_TOPIC || 'taas.resourcebooking.update',
-  // the delete resource booking entity Kafka message topic
   TAAS_RESOURCE_BOOKING_DELETE_TOPIC: process.env.TAAS_RESOURCE_BOOKING_DELETE_TOPIC || 'taas.resourcebooking.delete'
 }
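All nine `TAAS_*` topic defaults in the config above share one naming scheme: `taas.<entity>.<operation>`. A small illustrative helper (not part of the codebase) that reproduces those defaults:

```javascript
// The default Kafka topic names follow 'taas.<entity>.<operation>'.
function defaultTopic (entity, operation) {
  return `taas.${entity}.${operation}`
}

// The three entities and three operations seen in config/default.js.
const entities = ['job', 'jobcandidate', 'resourcebooking']
const operations = ['create', 'update', 'delete']

const topics = []
for (const entity of entities) {
  for (const operation of operations) {
    topics.push(defaultTopic(entity, operation))
  }
}

console.log(topics.length) // 9
console.log(topics[0])     // 'taas.job.create'
```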
