Importing new episode data into MongoDB (Additional information)

If you record data in Virtual Reality using RobCog, you essentially get the following files:

 RawData_ID.json <- contains all the recorded events: what happened, where everything was, who moved what where and when, etc.
 SemanticMap.owl <- contains all information about the environment, e.g. where the tables are spawned before they are manipulated.
 EventData
   |-> EventData_ID.owl <- contains the times of the events. Which event occurred when, what events are possible...
   |-> Timeline_ID.html <- same as the above but as a visualized overview

Each episode gets a randomly generated ID, so replace ID here with whatever your data's ID is. To access this data in KnowRob, we need to load it into our local MongoDB, since that is where all data is kept for KnowRob to access. Unfortunately, at the time this tutorial was written, MongoDB did not support importing very large files, and a recorded episode can become fairly large if one performs multiple pick and place tasks. We can work around this by splitting the file into multiple parts, which can be loaded into MongoDB individually. For that we need a little program called jq, which allows us to manipulate json files from the command line. If you don't have it, you can install it via your system's package manager (e.g. sudo apt install jq). Then you can do the following in the directory where your RawData_ID.json is:

$ mkdir split
$ cd split
$ cat ../RawData_ID.json | jq -c -M '.' | split -l 2000

We create a new directory called split and cd into it, read the file with cat, and use jq to produce compact single-line output (-c), disable colored output (-M) and pass the data through unchanged with the identity filter ('.'). The result is piped to split, which writes it into many files of 2000 lines each, a size MongoDB is comfortable with. After this you should see many files in your split directory, named xaa, xab, xac... you get the idea. How many files you get depends on the size of your original .json file. Now we can import the files into the database.

$ mongoimport --db DB-NAME --collection COLLECTION-NAME  --file FILE-NAME

example:

$ mongoimport --db Own-Episodes_set-clean-table --collection RawData_cUCM  --file xaa
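
If you would rather import all the parts in one go, a small shell loop will do. This is only a sketch: it assumes you are still inside the split directory and reuses the database and collection names from the example above, so adjust them to your own data.

$ for f in x*; do mongoimport --db Own-Episodes_set-clean-table --collection RawData_cUCM --file "$f"; done

As a quick sanity check, the number of documents in the collection should afterwards match the total number of lines across the split files (cat x* | wc -l), since each line holds one JSON document:

$ mongo Own-Episodes_set-clean-table --eval "db.RawData_cUCM.count()"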

I keep everything in one database and name each collection after the RawData_ID, so it stays clear which data is which; you can name them however you like. If you consider importing all the files individually fairly tedious, you can write a script for it, for example along the lines of the loop sketched above. If you write a nicer one, let us know; we didn't get around to that yet. Now we can create a mongodump:

$ mongodump --db DB-NAME --collection COLLECTION-NAME
$ mongodump --db Own-Episodes_set-clean-table --collection RawData_cUCM

Then you get a dump directory, which contains a RawData_ID.bson file and a RawData_ID.metadata.json file.

After this, you can just look at how the other episodes and their directories are structured, and create the directories for your data the same way.

Should you ever, for some reason, need to import a *.bson file directly, you can do so as well, using mongorestore:

$ mongorestore -d DB-NAME -c COLLECTION-NAME FILE-NAME.bson
$ mongorestore -d Own-Episodes_set-clean-table -c RawData_qtzg RawData_qtzg.bson