Urchin is a Python package distributed on PyPI. The following code needs to be run the first time you use Urchin in a Python environment.
Urchin’s full documentation can be found on our website.
# Installing urchin
!pip install oursin -U
Setup Urchin and open the renderer webpage
By default, Urchin opens the 3D renderer in a webpage. Make sure pop-ups are enabled, or the page won't open properly. You can also open the renderer site yourself at https://data.virtualbrainlab.org/Urchin/?ID=[ID here], replacing [ID here] with the ID that is output by the call to urchin.setup().
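As a quick sketch, the renderer URL can be assembled in Python from the ID that urchin.setup() prints (the ID used here is only an example; substitute your own):

```python
# Assemble the renderer URL from the ID printed by urchin.setup()
renderer_base = "https://data.virtualbrainlab.org/Urchin/"
session_id = "582f313b"  # example ID; replace with the one from your setup output
url = f"{renderer_base}?ID={session_id}"
print(url)  # https://data.virtualbrainlab.org/Urchin/?ID=582f313b
```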
Note that Urchin communicates with the renderer webpage over an internet connection; we don't currently support offline use (we hope to add support in the future).
# Importing necessary libraries
import oursin as urchin
urchin.setup()
(URN) connected to server
Login sent with ID: 582f313b, copy this ID into the renderer to connect.
How to create text
To create a group of texts, call the
urchin.text.create(n) function, passing the number of texts as a parameter. The create function returns a list of text objects. The whole list can be passed to the plural functions (set_texts, set_positions, etc.) to update all texts at once, while individual objects, accessed by list index, can be passed to the singular functions to set the position, color, size, etc. of each text individually, e.g.:
text_list = urchin.text.create(5) #Creating list of 5 text objects
# Sets the text of each object within the list
urchin.text.set_texts(text_list, ['top left','bottom left','top right','bottom right','center'])
# Sets the positions of the text objects within the list using a 2D coordinate system
urchin.text.set_positions(text_list, [[-1,1], [-1, -0.9], [0.85, 1], [0.85, -0.9], [0,0]])
# Sets the font sizes and colors of the text objects
urchin.text.set_font_sizes(text_list, 24)  # note that single values used in plural functions will be propagated out to the length of the list
urchin.text.set_colors(text_list, "#000000")
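For larger groups, the 2D positions can be generated programmatically rather than typed by hand. A minimal sketch, assuming the screen coordinate system spans roughly -1 to 1 on each axis (as the values above suggest); column_positions is a hypothetical helper, not part of Urchin:

```python
def column_positions(n, x=-1.0, top=1.0, bottom=-0.9):
    """Evenly spaced [x, y] positions for a column of n labels, top to bottom.

    Hypothetical helper; assumes the renderer's 2D screen coordinates
    run roughly from -1 to 1 on each axis.
    """
    if n == 1:
        return [[x, top]]
    step = (top - bottom) / (n - 1)
    return [[x, top - i * step] for i in range(n)]

# Positions for a column of three labels down the left edge;
# the result could be passed to urchin.text.set_positions(text_list, positions)
positions = column_positions(3)
```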
# Changing top left text to red
text_list[0].set_color("#FF0000")
# Changing size of bottom left text
text_list[1].set_font_size(100)
Note that text annotations can't currently be captured by screenshots. We're working on it!