Commit b7dcabd

Merge pull request #207 from Sukriti-sood/port

port the developer installation

2 parents 01b1524 + 638dd4e

3 files changed: 373 additions & 2 deletions

Gemfile.lock

Lines changed: 1 addition & 1 deletion

@@ -281,4 +281,4 @@ DEPENDENCIES
   just-the-docs
 
 BUNDLED WITH
-   2.1.4
+   2.3.10

docs/getting-started/dev-setup.md

Lines changed: 182 additions & 0 deletions

@@ -0,0 +1,182 @@
---
layout: default
title: Developer Installation
nav_order: 2
parent: Getting Started
---
# Developer Installation

SirMordred is the tool used to coordinate the execution of the GrimoireLab platform, via two main configuration files, the [setup.cfg](./setup-cfg.md) and [projects.json](./projects-json.md), which are summarized in their corresponding sections.

SirMordred relies on ElasticSearch, Kibiter and MySQL/MariaDB. The versions currently used are:

- ElasticSearch 6.8.6
- Kibiter 6.8.6
- MySQL/MariaDB (5.7.24/10.0)

There are two main options to get started with SirMordred:

- [Source code and docker](#source-code-and-docker-):
  the applications (ElasticSearch, Kibiter and MariaDB) are installed using docker, while the GrimoireLab components are installed from source code.
- [Only docker](#only-docker-):
  both the applications (ElasticSearch, Kibiter and MariaDB) and the GrimoireLab components are installed using docker.
## Source code and docker

### Getting the containers

You will have to install ElasticSearch (6.8.6), Kibiter (6.8.6) and a MySQL/MariaDB database (5.7.24/10.0). You can use one of the following docker-compose files to have them running.

> Help: You need to install docker and docker-compose for this. Please refer to the documentation.
> - https://docs.docker.com/install/linux/docker-ce/ubuntu/
> - https://docs.docker.com/compose/install/

> Note:
> 1. You can omit (comment out or remove) the `mariadb` section in case you already have MariaDB or MySQL installed on your system.
> 2. It is not mandatory to use docker to install ElasticSearch, Kibiter and the MySQL/MariaDB database; they can be installed by other means too (e.g., from source code). Docker is simply the easiest way, as it avoids most installation errors.
**docker-compose (with SearchGuard)**

> **Note**: Logging in is required to access Kibiter and/or to create indexes. The `username:password` is `admin:admin`, as used in the [`setup.cfg`](https://github.com/chaoss/grimoirelab-sirmordred/blob/master/sirmordred/utils/setup.cfg) file.

```
elasticsearch:
  image: bitergia/elasticsearch:6.8.6-secured
  command: elasticsearch -Enetwork.bind_host=0.0.0.0 -Ehttp.max_content_length=2000mb
  ports:
    - 9200:9200
  environment:
    - ES_JAVA_OPTS=-Xms2g -Xmx2g
kibiter:
  restart: on-failure:5
  image: bitergia/kibiter:secured-v6.8.6-3
  environment:
    - PROJECT_NAME=Demo
    - NODE_OPTIONS=--max-old-space-size=1000
    - ELASTICSEARCH_USER=kibanaserver
    - ELASTICSEARCH_PASSWORD=kibanaserver
    - ELASTICSEARCH_URL=["https://elasticsearch:9200"]
    - LOGIN_SUBTITLE=If you have forgotten your username or password ...
  links:
    - elasticsearch
  ports:
    - 5601:5601
```
**docker-compose (without SearchGuard)**

> **Note**: With this configuration, access to Kibiter and ElasticSearch doesn't require credentials.

```
elasticsearch:
  image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.8.6
  command: elasticsearch -Enetwork.bind_host=0.0.0.0 -Ehttp.max_content_length=2000mb
  ports:
    - 9200:9200
  environment:
    - ES_JAVA_OPTS=-Xms2g -Xmx2g
    - ANONYMOUS_USER=true
kibiter:
  restart: on-failure:5
  image: bitergia/kibiter:community-v6.8.6-3
  environment:
    - PROJECT_NAME=Demo
    - NODE_OPTIONS=--max-old-space-size=1000
    - ELASTICSEARCH_URL=http://elasticsearch:9200
  links:
    - elasticsearch
  ports:
    - 5601:5601
```

Save either of the above into a docker-compose.yml file and run
```
$ docker-compose up -d
```
to get ElasticSearch, Kibiter and MariaDB running on your system.
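Before moving on, it can help to check that the services actually answer. A minimal Python sketch of such a check (this helper is not part of the GrimoireLab docs; the URL and retry counts are illustrative):

```python
import ssl
import time
import urllib.request
from urllib.error import HTTPError, URLError


def wait_for_url(url: str, attempts: int = 30, delay: float = 1.0) -> bool:
    """Poll `url` until it answers over HTTP, up to `attempts` tries."""
    # The secured images ship a self-signed certificate, so skip verification.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    for _ in range(attempts):
        try:
            urllib.request.urlopen(url, context=ctx, timeout=5)
            return True
        except HTTPError:
            # An HTTP error response (e.g. 401 from SearchGuard) still means
            # the service is up and answering.
            return True
        except (URLError, OSError):
            time.sleep(delay)
    return False


# Example: wait_for_url("https://localhost:9200")
```

With the secured compose file the call above would get a 401 until you pass `admin:admin`, which this sketch still counts as "up".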
### Cloning the repositories

In the next step, you will need to fork all the GitHub repos below and clone them to a target local folder (e.g., `sources`).

- [SirMordred](https://github.com/chaoss/grimoirelab-sirmordred)
- [ELK](https://github.com/chaoss/grimoirelab-elk)
- [Graal](https://github.com/chaoss/grimoirelab-graal)
- [Perceval](https://github.com/chaoss/grimoirelab-perceval)
- [Perceval for Mozilla](https://github.com/chaoss/grimoirelab-perceval-mozilla)
- [Perceval for OPNFV](https://github.com/chaoss/grimoirelab-perceval-opnfv)
- [Perceval for Puppet](https://github.com/chaoss/grimoirelab-perceval-puppet)
- [Perceval for Weblate](https://github.com/chaoss/grimoirelab-perceval-weblate)
- [SortingHat](https://github.com/chaoss/grimoirelab-sortinghat)
- [Sigils](https://github.com/chaoss/grimoirelab-sigils)
- [Kidash](https://github.com/chaoss/grimoirelab-kidash)
- [Toolkit](https://github.com/chaoss/grimoirelab-toolkit)
- [Cereslib](https://github.com/chaoss/grimoirelab-cereslib)
- [Manuscripts](https://github.com/chaoss/grimoirelab-manuscripts)

Each local repo should have two `remotes`: `origin`, pointing to the forked repo, and `upstream`, pointing to the original CHAOSS repo.

An example is provided below.
```
$ git remote -v
origin   https://github.com/valeriocos/perceval (fetch)
origin   https://github.com/valeriocos/perceval (push)
upstream https://github.com/chaoss/grimoirelab-perceval (fetch)
upstream https://github.com/chaoss/grimoirelab-perceval (push)
```
In order to add a remote to a Git repository, you can use the following command:
```
$ git remote add upstream https://github.com/chaoss/grimoirelab-perceval
```

#### ProTip

You can use this [script](https://gist.github.com/vchrombie/4403193198cd79e7ee0079259311f6e8) to automate this whole process.
```
$ python3 glab-dev-env-setup.py --create --token xxxx --source sources
```
### Setting up PyCharm
140+
141+
> Help:
142+
> You need to install PyCharm (**Community Edition**) for this. Please refer the documentation.
143+
> - https://www.jetbrains.com/help/pycharm/installation-guide.html
144+
>
145+
> You can follow this [tutorial](https://www.jetbrains.com/help/pycharm/quick-start-guide.html) to get familiar with PyCharm.
146+
Once PyCharm is installed create a project in the grimoirelab-sirmordred directory.
147+
PyCharm will automatically create a virtual env, where you should install the dependencies listed in each
148+
requirements.txt, **excluding** the ones concerning the grimoirelab components.
149+
150+
To install the dependencies, you can click on `File` -> `Settings` -> `Project` -> `Project Interpreter`, and then the `+` located on the top right corner (see figure below).
151+
152+
![project-interpreter-configuration](https://user-images.githubusercontent.com/25265451/78168870-3e612580-746e-11ea-9df1-7ba94b84d07b.gif)
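The filtering described above can also be sketched in a few lines of Python before pointing PyCharm at the result. The component-name filter list and the folder layout are assumptions, so adjust them to your checkout:

```python
# Hypothetical sketch: merge each repo's requirements.txt into one dependency
# list, dropping the GrimoireLab components (filter names are assumptions).
from pathlib import Path

GRIMOIRELAB_PKGS = ("grimoirelab", "perceval", "sortinghat",
                    "cereslib", "graal", "kidash")


def merge_requirements(sources_dir: str) -> list[str]:
    """Collect non-GrimoireLab dependencies from sources_dir/*/requirements.txt."""
    deps = set()
    for req in Path(sources_dir).glob("*/requirements.txt"):
        for line in req.read_text().splitlines():
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # skip blanks and comments
            if any(pkg in line.lower() for pkg in GRIMOIRELAB_PKGS):
                continue  # skip GrimoireLab components
            deps.add(line)
    return sorted(deps)


# The merged list can be written out and installed in the virtual env with:
#   pip install -r combined-requirements.txt
```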
Later, you can add the grimoirelab components via `File` -> `Settings` -> `Project` -> `Project Structure`.
The final result should look similar to the image below.

![project-structure-configuration](https://user-images.githubusercontent.com/25265451/78168879-41f4ac80-746e-11ea-9e40-dbdb1b5d32f2.gif)
### Execution

Now that ElasticSearch, Kibiter and MariaDB are running on your system and the project is configured in PyCharm, we can execute micro-mordred/sirmordred.

To execute micro-mordred, define a [setup.cfg](https://github.com/chaoss/grimoirelab-sirmordred/blob/master/sirmordred/utils/setup.cfg) and a [projects.json](https://github.com/chaoss/grimoirelab-sirmordred/blob/master/sirmordred/utils/projects.json), and
run the following commands, which collect and enrich the data coming from the git sections and upload the corresponding panels to Kibiter:
```
micro.py --raw --enrich --cfg ./setup.cfg --backends git cocom
micro.py --panels --cfg ./setup.cfg
```
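For reference, the projects.json maps project names to backends and their repositories. A minimal sketch for the git backend (the project name and repository are illustrative; see the projects.json section for the full format):

```
{
    "grimoire": {
        "git": [
            "https://github.com/chaoss/grimoirelab-toolkit"
        ]
    }
}
```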
Optionally, you can create a run configuration in PyCharm to speed up the executions (`Run` -> `Edit configuration` -> `+`).

![add-micro-configuration](https://user-images.githubusercontent.com/25265451/78168875-402ae900-746e-11ea-8bd8-4b3e68992bdf.gif)

The final result should look similar to the image below.

![result](https://user-images.githubusercontent.com/25265451/84477839-ee90ad00-acad-11ea-932f-cc7ce81e05a7.png)
## Only docker

Follow the instructions in the GrimoireLab tutorial to have [SirMordred in a container](../../sirmordred/container.md).

---

docs/getting-started/troubleshooting.md

Lines changed: 190 additions & 1 deletion

@@ -25,12 +25,14 @@ grimoirelab/docker-compose$ docker-compose up
 
 ---
 
-## Table of contents
+### Following is a list of common problems encountered while setting up GrimoireLab
 {: .no_toc .text-delta }
 
 1. TOC
 {:toc}
 
+> **NOTE**: In order to see the logs, run `docker-compose up` without the `-d` or `--detach` option while starting/(re)creating/building/attaching containers for a service.
+>
 ---
 
 ### Port already in use
@@ -65,3 +67,190 @@ by looking at the filter bar as shown in the following screenshot or the <span
 style="color: #2f4bff">time window</span>.
 
 ![filter](./assets/filters.png)
### Low Virtual Memory

* Indications:
  Cannot open `https://localhost:9200/` in the browser. It shows `Secure connection Failed`,
  `PR_END_OF_FILE_ERROR` or `SSL_ERROR_SYSCALL in connection to localhost:9200` messages.
* Diagnosis:
  Check for the following log in the output of `docker-compose up`:
  ```
  elasticsearch_1 | ERROR: [1] bootstrap checks failed
  elasticsearch_1 | [1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
  ```
* Solution:
  Increase the kernel `vm.max_map_count` parameter by executing the following command:
  `sudo sysctl -w vm.max_map_count=262144`
  Now stop the container services and re-run `docker-compose up`.
  Note that this change is valid only for the current session. To set the value permanently, update the `vm.max_map_count` setting
  in `/etc/sysctl.conf`. To verify it after rebooting, run `sysctl vm.max_map_count`.
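Persisting the change amounts to adding one line to `/etc/sysctl.conf`:

```
vm.max_map_count=262144
```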
### Processes have conflicts with SearchGuard

* Indications:
  - Cannot open `localhost:9200` in the browser; it shows `Secure connection Failed`
  - `curl -XGET localhost:9200 -k` gives
    `curl: (52) Empty reply from server`
* Diagnosis:
  Check for the following log in the output of `docker-compose up`:
  ```
  elasticsearch_1 | [2020-03-12T13:05:34,959][WARN ][c.f.s.h.SearchGuardHttpServerTransport] [Xrb6LcS] Someone (/172.18.0.1:59838) speaks http plaintext instead of ssl, will close the channel
  ```
  Check for conflicting processes by running `sudo lsof -i:58888` (here, 58888 is the port number).
* Solution:
  1. Try to close the conflicting processes:
     You can do this easily with fuser (from the `psmisc` package: `sudo apt-get install psmisc`);
     run `fuser -k 58888/tcp` (here, 58888 is the port number).
     Re-run `docker-compose up` and check if `localhost:9200` shows up.
  2. Use a [docker-compose without SearchGuard](#docker-compose-without-searchguard-):
     Use the docker-compose above, which doesn't include SearchGuard.
     Note: with this docker-compose, access to Kibiter and ElasticSearch doesn't require credentials.
     Re-run `docker-compose up` and check if `localhost:9200` shows up.
### Permission Denied

* Indications:
  Can't create indices in Kibana; nothing happens after clicking "create index".
* Diagnosis:
  Check for the following log in the output of `docker-compose up`:
  ```
  elasticsearch_1 |[INFO ][c.f.s.c.PrivilegesEvaluator] No index-level perm match for User [name=readall, roles=[readall], requestedTenant=null] [IndexType [index=.kibana, type=doc]] [Action [[indices:data/write/index]]] [RolesChecked [sg_own_index, sg_readall]]
  elasticsearch_1 | [c.f.s.c.PrivilegesEvaluator] No permissions for {sg_own_index=[IndexType [index=.kibana, type=doc]], sg_readall=[IndexType [index=.kibana, type=doc]]}
  kibiter_1 | {"type":"response","@timestamp":CURRENT_TIME,"tags":[],"pid":1,"method":"post","statusCode":403,"req":{"url":"/api/saved_objects/index-pattern?overwrite=false","method":"post","headers":{"host":"localhost:5601","user-agent":YOUR_USER_AGENT,"accept":"application/json, text/plain, /","accept-language":"en-US,en;q=0.5","accept-encoding":"gzip, deflate","referer":"http://localhost:5601/app/kibana","content-type":"application/json;charset=utf-8","kbn-version":"6.1.4-1","content-length":"59","connection":"keep-alive"},"remoteAddress":YOUR_IP,"userAgent":YOUR_IP,"referer":"http://localhost:5601/app/kibana"},"res":{"statusCode":403,"responseTime":25,"contentLength":9},"message":"POST /api/saved_objects/index-pattern?overwrite=false 403 25ms - 9.0B"}
  ```
  or any other type of 403 error.
* Solution:
  This message generally appears when you try to create an index pattern but you are not logged in to Kibana.
  Try logging in to Kibana (the login button is in the bottom-left corner).
  The credentials used for login should be username: `admin` and password: `admin`.
### Empty Index

* Indications and Diagnosis:
  Check for the following error after executing [Micro Mordred](https://github.com/chaoss/grimoirelab-sirmordred/tree/master/sirmordred/utils/micro.py)
  using `micro.py --raw --enrich --panels --cfg ./setup.cfg --backends git` (here, using git as the backend):
  ```
  [git] Problem executing study enrich_areas_of_code:git, RequestError(400, 'search_phase_execution_exception', 'No mapping
  found for [metadata__timestamp] in order to sort on')
  ```
* Solution:
  This error appears when the index is empty (here, the `git-aoc_chaoss_enriched` index is empty). An index can be empty when
  the local clone of the repository being analyzed is in sync with the upstream repo, so there are no new commits to
  ingest into grimoirelab.

  There are two methods to solve this problem:

  Method 1: Disable the [latest-items](https://github.com/chaoss/grimoirelab-sirmordred/blob/master/sirmordred/utils/setup.cfg#L78) param by setting it to false.

  Method 2: Delete the local clone of the repo (which is stored in `~/.perceval/repositories`).

  Some extra details to better understand this behavior:

  The Git backend of Perceval creates a clone of the repository (stored in `~/.perceval/repositories`) and keeps the local
  copy in sync with the upstream one. This clone is then used to ingest the commits data into grimoirelab.
  Grimoirelab periodically collects data from different data sources (in this specific case, a git repository) in an incremental way:
  a typical execution for a git repository ingests only the new commits into the platform. These
  commits are obtained by comparing the local copy with the upstream one, so if the two repos are synchronized, no
  commits are returned and hence the index will be empty. In the case where all commits need to be extracted even if there is already a
  local clone, the `latest-items` param should be disabled. Another option is to delete the local clone,
  so the platform will clone the repo again and extract all commits.
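For Method 1, the relevant fragment of `setup.cfg` would look like the sketch below (the index names are illustrative, not taken from this document):

```
[git]
raw_index = git_raw
enriched_index = git_enriched
latest-items = false
```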
### Low File Descriptors

* Indications:
  - Cannot open `localhost:9200` in the browser; it shows `Secure connection Failed`
  - `curl -XGET localhost:9200 -k` gives
    `curl: (7) Failed to connect to localhost port 9200: Connection refused`

* Diagnosis:
  Check for the following log in the output of `docker-compose up`:
  ```
  elasticsearch_1 | ERROR: [1] bootstrap checks failed
  elasticsearch_1 | [1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]
  ```
* Solution:
  1. Increase the maximum number of file descriptors (FD) enforced:

     You can do this by running the command below.
     ```
     sysctl -w fs.file-max=65536
     ```
     To set this value permanently, add the lines below to `/etc/security/limits.conf`.
     To verify after rebooting, run `sysctl fs.file-max`.
     ```
     elasticsearch soft nofile 65536
     elasticsearch hard nofile 65536
     elasticsearch memlock unlimited
     ```
  2. Override `ulimit` parameters in the ElasticSearch docker configuration:

     Add the lines below to the ElasticSearch service in
     your compose file to override the default docker configuration.
     ```
     ulimits:
       nofile:
         soft: 65536
         hard: 65536
     ```
### Rate Limit Exhausted

* Indication: You see the error message `RuntimeError: Rate limit exhausted.; 3581.0 seconds to rate reset`
* Solution: Enable the `sleep-for-rate` parameter. With it enabled, the backend sleeps until the API rate limit resets instead of failing.
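The flag goes in the `setup.cfg` section of the backend that is hitting the limit. A sketch for the GitHub backend (the token value is a placeholder):

```
[github]
api-token = xxxx
sleep-for-rate = true
```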
### No Swap Space

* Indication: While running docker-compose, a NO SWAP SPACE message is displayed.
* Solution: Edit the `/etc/default/grub` file with sudo privileges, adding the line below, and then run `sudo update-grub`.

```
GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"
```
And restart the system.
### SSL error

* Indication: `localhost:9200` refuses the connection.
* Diagnosis:
  ```
  Retrying (Retry(total=10, connected=21, read=0, redirect=5, status=None)) after connection broken by
  'SSLError(SSLError(1, '[SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:852)'),)': /
  ```
* Solution: Change 'https' to 'http' in the setup.cfg file:
  ```
  [es_collection]
  # arthur = true
  # arthur_url = http://127.0.0.1:8080
  # redis_url = redis://localhost/8
  url = http://localhost:9200

  [es_enrichment]
  url = http://localhost:9200
  ```
### Cloc installation

* Diagnosis:
  ```
  : [Errno 2] No such file or directory : 'cloc': 'cloc'
  ```
* Solution:
  Execute the following command to install `cloc` (more details are available in the [Graal](https://github.com/chaoss/grimoirelab-graal#how-to-installcreate-the-executables) repo):
  ```
  sudo apt-get install cloc
  ```

### Incomplete data

* Indication: Not all the data is retrieved when rebuilding an index; only data from a point in time forward appears.
* Diagnosis: After a rebuild of git-based indices you do not receive the full dataset as expected, but only data from the date of the re-index forward. That data is complete, but anything prior to it is missing.
* Solution: The `setup.cfg` file has an option in the Git configuration section, `latest-items = true`; set it to `latest-items = false` to pull in all data from the beginning. Once this has been processed, remember to set it back to `latest-items = true`!

---
