<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://pwiki.pic.es/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Eriksen</id>
	<title>Public PIC Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://pwiki.pic.es/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Eriksen"/>
	<link rel="alternate" type="text/html" href="https://pwiki.pic.es/index.php?title=Special:Contributions/Eriksen"/>
	<updated>2026-04-20T13:05:52Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.35.14</generator>
	<entry>
		<id>https://pwiki.pic.es/index.php?title=JupyterHub&amp;diff=1297</id>
		<title>JupyterHub</title>
		<link rel="alternate" type="text/html" href="https://pwiki.pic.es/index.php?title=JupyterHub&amp;diff=1297"/>
		<updated>2025-12-20T14:47:32Z</updated>

		<summary type="html">&lt;p&gt;Eriksen: /* Using a singularity image as a jupyter kernel */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;tldr; Connect to https://jupyter.pic.es/ . Enjoy!&lt;br /&gt;
= Introduction =&lt;br /&gt;
&lt;br /&gt;
PIC offers a service for running Jupyter notebooks on CPU or GPU resources. This service is primarily intended for code development and prototyping rather than data processing. The usage is similar to running notebooks on your personal computer, but it offers the advantage of developing and testing your code on different hardware configurations, and it eases scaling up your code, since it is tested in the same environment in which it would run at scale.&lt;br /&gt;
&lt;br /&gt;
Since the service is strictly intended for development and small-scale testing tasks, a shutdown policy for the sessions is in place:&lt;br /&gt;
&lt;br /&gt;
# The maximum duration for a session is 48h.&lt;br /&gt;
# After an idle period of 2 hours, the session will be closed. &lt;br /&gt;
&lt;br /&gt;
In practice this means you should size the test data you work with during a session so that it can be processed in less than 48 hours.&lt;br /&gt;
&lt;br /&gt;
== How to connect to the service ==&lt;br /&gt;
&lt;br /&gt;
Go to [https://jupyter.pic.es jupyter.pic.es] to see your login screen.&lt;br /&gt;
&lt;br /&gt;
[[File:login.png|700px|Login screen]]&lt;br /&gt;
&lt;br /&gt;
Sign in with your PIC user credentials. This will take you to the following screen.&lt;br /&gt;
&lt;br /&gt;
[[File:ScreenshotJupyterSpawn.png|700px|current]]&lt;br /&gt;
&lt;br /&gt;
Here you can choose the hardware configuration for your Jupyter session. You also have to choose the experiment (project) you are working on during the Jupyter session. After choosing a configuration and pressing start, the next screen will show you the progress of the initialisation process. Keep in mind that a job containing your Jupyter session is actually sent to the HTCondor queuing system and waits for available resources before being started. This usually takes less than a minute but can take up to a few minutes depending on overall resource usage.&lt;br /&gt;
&lt;br /&gt;
[[File:screen02.png|900px]]&lt;br /&gt;
&lt;br /&gt;
In the next screen you can choose the tool that you want to use for your work: a Python notebook, a Python console or a plain bash terminal.&lt;br /&gt;
For the Python environment (either notebook or console) you have two default options:&lt;br /&gt;
* the ipykernel version of Python 3&lt;br /&gt;
* the XPython version of Python 3.9, which allows you to use the integrated debugging module.&lt;br /&gt;
&lt;br /&gt;
== Virtual desktop ==&lt;br /&gt;
Further down you see an icon with a &amp;quot;D&amp;quot; (desktop), which starts a VNC session that allows the use of programs with graphical user interfaces.&lt;br /&gt;
&lt;br /&gt;
Also, you can now find the icon of Visual Studio Code, an integrated development environment.&lt;br /&gt;
&lt;br /&gt;
[[File:ScreenshotJupyterlab20231103.png|700px]]&lt;br /&gt;
&lt;br /&gt;
Your Python environments should appear under the Notebook and Console headers. In a later section we will show you how to create a new environment and how to remove an existing one.&lt;br /&gt;
&lt;br /&gt;
== Terminate your session and logout ==&lt;br /&gt;
&lt;br /&gt;
It is important that you terminate your session before you log out. In order to do so, go to the top page menu &amp;quot;'''File''' -&amp;gt; '''Hub Control Panel'''&amp;quot; and you will see the following screen.&lt;br /&gt;
&lt;br /&gt;
[[File:screen04.png|600px]]&lt;br /&gt;
&lt;br /&gt;
Here, click on the '''Stop My Server''' button. After that you can log out by clicking the '''Logout''' button in the right upper corner.&lt;br /&gt;
&lt;br /&gt;
=  Python virtual environments =&lt;br /&gt;
&lt;br /&gt;
This section covers the use of Python virtual environments with Jupyter.&lt;br /&gt;
&lt;br /&gt;
== Prebuilt environments == &lt;br /&gt;
&lt;br /&gt;
PIC's jupyterhub service comes with a collection of prebuilt environments located at '''/data/jupyter/software/envs'''.&lt;br /&gt;
&lt;br /&gt;
The master environment located at '''/data/jupyter/software/envs/master''' is the one used to start the JupyterLab service and the default for new notebooks.&lt;br /&gt;
&lt;br /&gt;
This is a non-exhaustive list of the included packages:&lt;br /&gt;
  - astropy=6.1.0&lt;br /&gt;
  - bokeh=3.4.1&lt;br /&gt;
  - dash=2.17.0&lt;br /&gt;
  - dask=2024.5.1&lt;br /&gt;
  - findspark=2.0.1&lt;br /&gt;
  - matplotlib=3.8.4&lt;br /&gt;
  - numpy=1.26.4&lt;br /&gt;
  - pandas=2.2.2&lt;br /&gt;
  - pillow=10.3.0&lt;br /&gt;
  - plotly=5.22.0&lt;br /&gt;
  - pyhive=0.7.0&lt;br /&gt;
  - python=3.12&lt;br /&gt;
  - pywavelets=1.4.1&lt;br /&gt;
  - scikit-image=0.22.0&lt;br /&gt;
  - scikit-learn=1.5.0&lt;br /&gt;
  - scipy=1.13.1&lt;br /&gt;
  - seaborn=0.13.2&lt;br /&gt;
  - statsmodels=0.14.2&lt;br /&gt;
&lt;br /&gt;
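To check which of these packages are actually visible from the kernel you are running (and in which versions), you can query the environment directly from Python. This is a generic stdlib sketch, not a PIC-specific tool:&lt;br /&gt;

```python
# List the installed versions of a few packages in the current
# environment; returns None for packages that are not installed.
import importlib.metadata as md

def installed_versions(packages):
    versions = {}
    for name in packages:
        try:
            versions[name] = md.version(name)
        except md.PackageNotFoundError:
            versions[name] = None
    return versions

print(installed_versions(["numpy", "pandas", "astropy"]))
```

&lt;br /&gt;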
== Initialize conda (we highly recommend the use of mamba/micromamba) ==&lt;br /&gt;
&lt;br /&gt;
Before using conda/mamba in your bash session, you have to initialize it.&lt;br /&gt;
* For access to an available conda/mamba installation, please get in contact with your project liaison at PIC, who will give you the actual value for the '''/path/to/conda/mamba''' placeholder.&lt;br /&gt;
* If you want to use your own conda/mamba/micromamba installation, there are two recommended options:&lt;br /&gt;
** '''miniforge''': a distribution with conda and mamba executables in a minimal base environment, instructions [https://github.com/conda-forge/miniforge here]&lt;br /&gt;
** '''micromamba''': a self-contained executable (micromamba) with no base environment, instructions [https://mamba.readthedocs.io/en/latest/installation/micromamba-installation.html here]&lt;br /&gt;
&lt;br /&gt;
Log onto Jupyter and start a session. On the homepage of your Jupyter session, click on the terminal button on the session dashboard on the right to open a bash terminal.&lt;br /&gt;
&lt;br /&gt;
In order to use conda/mamba/micromamba you need to initialize the shell. This initialization can be persistent, which makes some changes to your '''~/.bashrc''' file, or you can repeat it every time you want to use it.&lt;br /&gt;
&lt;br /&gt;
Run the following command (the path below is an example; use the one provided by your project liaison):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
eval &amp;quot;$(/data/astro/software/miniforge3/bin/conda shell.bash hook)&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, if you are using miniforge and you want to persist the initialization:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/data/astro/software/miniforge3/bin/mamba init&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This changes the .bashrc file in your home directory so that the base environment is activated on login.&lt;br /&gt;
To prevent the base environment from being activated every time you log on to a node, run:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ conda config --set auto_activate_base false&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Link an existing environment to Jupyter ==&lt;br /&gt;
&lt;br /&gt;
You can find instructions on how to create your own environments, e.g. [[#Create_virtual_environments_with_venv_or_conda | here]].&lt;br /&gt;
&lt;br /&gt;
Log into Jupyter, start a session. From the session dashboard choose the bash terminal.&lt;br /&gt;
&lt;br /&gt;
Inside the terminal, activate your environment.&lt;br /&gt;
&lt;br /&gt;
For '''conda/mamba''' environments:&lt;br /&gt;
* if you created the environment without a prefix:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba activate environment&lt;br /&gt;
(...) [neissner@td110 ~]$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The parentheses (...) in front of your bash prompt show the name of your environment.&lt;br /&gt;
* if you created the environment with a prefix:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba activate /path/to/environment&lt;br /&gt;
(...) [neissner@td110 ~]$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The parentheses (...) in front of your bash prompt show the absolute path of your environment.&lt;br /&gt;
&lt;br /&gt;
For '''venv''' environments:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ source /path/to/environment/bin/activate&lt;br /&gt;
(...) [neissner@td110 ~]$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Link the environment to a Jupyter kernel. For both '''conda/mamba''' and '''venv''':&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(...) [neissner@td110 ~]$ python -m  ipykernel install --user --name=whatever_kernel_name&lt;br /&gt;
Installed kernelspec whatever_kernel_name in &lt;br /&gt;
                         /nfs/pic.es/user/n/neissner/.local/share/jupyter/kernels/whatever_kernel_name&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you don't have the '''ipykernel''' module installed in your environment, you may receive an error message like the one below when running the previous command.&lt;br /&gt;
&amp;lt;pre&amp;gt;No module named ipykernel&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If this is the case, install it by running '''pip install ipykernel'''.&lt;br /&gt;
&lt;br /&gt;
Deactivate your environment. &lt;br /&gt;
&lt;br /&gt;
For conda/mamba:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(...) [neissner@td110 ~]$ mamba deactivate&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For venv:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(...) [neissner@td110 ~]$ deactivate&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you can exit the terminal. After refreshing the Jupyter page, whatever_kernel_name will appear in the dashboard. In this example, '''test''' has been used for whatever_kernel_name.&lt;br /&gt;
&lt;br /&gt;
[[File:screen05.png|700px]]&lt;br /&gt;
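The kernels linked this way are just directories under ~/.local/share/jupyter/kernels, so you can also list them with a few lines of Python. A minimal sketch, assuming the standard per-user kernelspec location:&lt;br /&gt;

```python
# List user-installed Jupyter kernels by scanning the kernelspec
# directory: each kernel is a subdirectory containing a kernel.json.
from pathlib import Path

def list_user_kernels(base=None):
    base = Path(base) if base else Path.home() / ".local/share/jupyter/kernels"
    if not base.is_dir():
        return []
    return sorted(p.name for p in base.iterdir() if (p / "kernel.json").is_file())

print(list_user_kernels())
```

&lt;br /&gt;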
&lt;br /&gt;
== Unlink an environment from Jupyter ==&lt;br /&gt;
Log onto Jupyter, start a session and from the session dashboard choose the bash terminal. To remove your environment/kernel from Jupyter run:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ jupyter kernelspec uninstall whatever_kernel_name&lt;br /&gt;
Kernel specs to remove:&lt;br /&gt;
  whatever_kernel_name     /nfs/pic.es/user/n/neissner/.local/share/jupyter/kernels/whatever_kernel_name&lt;br /&gt;
Remove 1 kernel specs [y/N]: y&lt;br /&gt;
[RemoveKernelSpec] Removed /nfs/pic.es/user/n/neissner/.local/share/jupyter/kernels/whatever_kernel_name&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Keep in mind that, although not available in Jupyter anymore, the environment still exists. Whenever you need it, you can link it again.&lt;br /&gt;
&lt;br /&gt;
== Create virtual environments with venv or conda ==&lt;br /&gt;
&lt;br /&gt;
Before creating a new environment, please get in contact with your project liaison at PIC, as there may already be a suitable environment for your needs in place.&lt;br /&gt;
&lt;br /&gt;
If none of the existing environments suits your needs, you can create a new environment.&lt;br /&gt;
First, create a directory in a suitable place to store the environment. For single-user environments, place them in your home directory under ~/env. For environments that will be shared with other project users, contact your project liaison and ask them for a path in a shared storage volume that is visible to all of them.&lt;br /&gt;
&lt;br /&gt;
Once you have the location (e.g. /path/to/env/folder), create the environment with the following commands:&lt;br /&gt;
&lt;br /&gt;
For '''venv''' environments '''(recommended)'''&lt;br /&gt;
&lt;br /&gt;
To install your_env at /path/to/env/your_env:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ cd /path/to/env&lt;br /&gt;
[neissner@td110 ~]$ python3 -m venv your_env&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Now you should be able to activate your environment and install additional modules&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ cd /path/to/env&lt;br /&gt;
[neissner@td110 ~]$ source your_env/bin/activate&lt;br /&gt;
(...)[neissner@td110 ~]$ pip install additional_module1 additional_module2 ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For '''conda/mamba''' environments:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba create --prefix /path/to/env/your_env module1 module2 ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The list of modules (module1, module2, ...) is optional. For instance, for a python3 environment with scipy you would specify: ''python=3 scipy''&lt;br /&gt;
&lt;br /&gt;
Now you should be able to activate your environment and install additional modules&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba activate /path/to/env/your_env&lt;br /&gt;
(...)[neissner@td110 ~]$ mamba install additional_module1 additional_module2 ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You can use pip install inside a mamba environment; however, resolving dependencies might require installing additional packages manually.&lt;br /&gt;
&lt;br /&gt;
== Conda / mamba configuration ==&lt;br /&gt;
&lt;br /&gt;
The behaviour of conda/mamba can be configured through the &amp;quot;$HOME/.condarc&amp;quot; file, described [https://docs.conda.io/projects/conda/en/latest/configuration.html here]. Some interesting parameters:&lt;br /&gt;
&lt;br /&gt;
* envs_dirs: The list of directories to search for named environments. E.g.: different locations where you created environments&lt;br /&gt;
&lt;br /&gt;
  envs_dirs:&lt;br /&gt;
    - /data/pic/scratch/torradeflot/envs&lt;br /&gt;
    - /data/astro/scratch/torradeflot/envs&lt;br /&gt;
    - /data/aai/scratch/torradeflot/envs&lt;br /&gt;
&lt;br /&gt;
* pkgs_dirs: The folders where conda packages are stored&lt;br /&gt;
&lt;br /&gt;
  pkgs_dirs:&lt;br /&gt;
  - /data/aai/scratch_ssd/torradeflot/pkgs&lt;br /&gt;
  - /data/aai/scratch/torradeflot/pkgs&lt;br /&gt;
  - /data/pic/scratch/torradeflot/pkgs&lt;br /&gt;
&lt;br /&gt;
If `pkgs_dirs` and `envs_dirs` are on the same storage, conda will use hard links, thus optimizing disk space.&lt;br /&gt;
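The hard-link optimization can be illustrated with a small, self-contained Python example. It only demonstrates how hard links share the same underlying file, which is what conda exploits when both directories live on the same filesystem:&lt;br /&gt;

```python
# Demonstrate hard links: two directory entries pointing at the same
# inode, so the file's content is stored only once on disk.
import os
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    pkg = Path(tmp) / "pkgs_file"
    env = Path(tmp) / "envs_file"
    pkg.write_text("package payload")
    os.link(pkg, env)                            # hard link, same filesystem
    print("same inode:", os.stat(pkg).st_ino == os.stat(env).st_ino)  # True
    print("link count:", os.stat(pkg).st_nlink)                       # 2
```

&lt;br /&gt;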
&lt;br /&gt;
= Software of particular interest =&lt;br /&gt;
== SageMath ==&lt;br /&gt;
&lt;br /&gt;
[https://www.sagemath.org/ SageMath] is particularly interesting for Cosmology because it allows symbolic calculations, e.g. deriving the equations of motion for the scale factor starting from a customised space-time metric.&lt;br /&gt;
&lt;br /&gt;
=== Standard cosmology examples ===&lt;br /&gt;
&lt;br /&gt;
* The Friedmann equations for the FLRW solution of the Einstein equations.&lt;br /&gt;
You can find the corresponding Notebook in any PIC terminal at '''/data/astro/software/notebooks/FLRW_cosmology.ipynb'''&lt;br /&gt;
&lt;br /&gt;
* The notebook you can find at '''/data/astro/software/notebooks/FLRW_cosmology_solutions.ipynb''' uses known analytical solutions of the FLRW cosmology and produces this image for the evolution of the scale factor:&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot_Sage05.png|300px]]&lt;br /&gt;
&lt;br /&gt;
* The notebook you can find at '''/data/astro/software/notebooks/Interior_Schwarzschild.ipynb''' shows the formalism for the interior Schwarzschild metric and displays the solutions for density and pressure of a static celestial object whose radius is sufficiently larger than its Schwarzschild radius. The pressure for an object with constant density is shown in the image:&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot_Sage06.png|300px]]&lt;br /&gt;
&lt;br /&gt;
=== Enabling SageMath environment in Jupyter ===&lt;br /&gt;
&lt;br /&gt;
If you have never initialized mamba, run:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ /data/astro/software/centos7/conda/mambaforge_4.14.0/bin/mamba init&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ conda config --set auto_activate_base false&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After that, you can enable SageMath for use in a Jupyter notebook session:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ mamba activate /data/astro/software/envs/sage&lt;br /&gt;
(/data/astro/software/envs/sage) [&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ python -m  ipykernel install --user --name=sage&lt;br /&gt;
....&lt;br /&gt;
(/data/astro/software/envs/sage) [&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ mamba deactivate&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This creates a file in your home directory, '''~/.local/share/jupyter/kernels/sage/kernel.json''',&lt;br /&gt;
which has to be modified to look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
 &amp;quot;argv&amp;quot;: [&lt;br /&gt;
  &amp;quot;/data/astro/software/envs/sage/bin/sage&amp;quot;,&lt;br /&gt;
  &amp;quot;--python&amp;quot;,&lt;br /&gt;
  &amp;quot;-m&amp;quot;,&lt;br /&gt;
  &amp;quot;sage.repl.ipython_kernel&amp;quot;,&lt;br /&gt;
  &amp;quot;-f&amp;quot;,&lt;br /&gt;
  &amp;quot;{connection_file}&amp;quot;&lt;br /&gt;
 ],&lt;br /&gt;
 &amp;quot;display_name&amp;quot;: &amp;quot;sage&amp;quot;,&lt;br /&gt;
 &amp;quot;language&amp;quot;: &amp;quot;sage&amp;quot;,&lt;br /&gt;
 &amp;quot;metadata&amp;quot;: {&lt;br /&gt;
  &amp;quot;debugger&amp;quot;: true&lt;br /&gt;
 }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Next time you go to your Jupyter dashboard you will find the sage environment listed there.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Dask =&lt;br /&gt;
Dask supports parallel computations in Python. The PIC Jupyterlab has an extension for launching&lt;br /&gt;
your own Dask clusters. For more information, see [[Dask|Dask documentation]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Using a Singularity image as a Jupyter kernel =&lt;br /&gt;
&lt;br /&gt;
In some projects, the software stack is provided as a Singularity image. In such cases, it can be convenient to use this image directly as a Jupyter kernel, allowing notebooks on jupyter.pic.es to run within the same controlled software environment.&lt;br /&gt;
&lt;br /&gt;
To be used as a Jupyter kernel, the Singularity image must satisfy certain requirements. These depend on the programming language used inside the notebook.&lt;br /&gt;
&lt;br /&gt;
== Python jupyter kernel in a singularity image ==&lt;br /&gt;
&lt;br /&gt;
The Singularity image needs to have '''python''' and the '''ipykernel''' module installed.&lt;br /&gt;
&lt;br /&gt;
* Create the folder that will host the kernel definition&lt;br /&gt;
&lt;br /&gt;
  mkdir -p $HOME/.local/share/jupyter/kernels/singularity&lt;br /&gt;
&lt;br /&gt;
* Create the '''kernel.json''' file inside it with the following content:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 {&lt;br /&gt;
   &amp;quot;argv&amp;quot;: [&lt;br /&gt;
     &amp;quot;singularity&amp;quot;,&lt;br /&gt;
     &amp;quot;exec&amp;quot;,&lt;br /&gt;
     &amp;quot;--cleanenv&amp;quot;,&lt;br /&gt;
     &amp;quot;/path/to/the/singularity/image.sif&amp;quot;,&lt;br /&gt;
     &amp;quot;python&amp;quot;,&lt;br /&gt;
     &amp;quot;-m&amp;quot;,&lt;br /&gt;
     &amp;quot;ipykernel&amp;quot;,&lt;br /&gt;
     &amp;quot;-f&amp;quot;,&lt;br /&gt;
     &amp;quot;{connection_file}&amp;quot;&lt;br /&gt;
   ],&lt;br /&gt;
   &amp;quot;language&amp;quot;: &amp;quot;python&amp;quot;,&lt;br /&gt;
   &amp;quot;display_name&amp;quot;: &amp;quot;singularity-kernel&amp;quot;&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
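If you prefer, the folder and the kernel.json above can also be created from Python. This sketch writes the same spec; the image path and kernel name are placeholders you must adapt:&lt;br /&gt;

```python
# Write a Jupyter kernelspec that launches ipykernel inside a
# Singularity image. The image path is a placeholder.
import json
from pathlib import Path

def write_singularity_kernel(image, name="singularity", base=None):
    base = Path(base) if base else Path.home() / ".local/share/jupyter/kernels"
    kdir = base / name
    kdir.mkdir(parents=True, exist_ok=True)
    spec = {
        "argv": ["singularity", "exec", "--cleanenv", image,
                 "python", "-m", "ipykernel", "-f", "{connection_file}"],
        "language": "python",
        "display_name": f"{name}-kernel",
    }
    (kdir / "kernel.json").write_text(json.dumps(spec, indent=2))
    return kdir / "kernel.json"
```

&lt;br /&gt;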
&lt;br /&gt;
Refresh or restart the JupyterLab interface and the singularity kernel should appear in the launcher tab.&lt;br /&gt;
&lt;br /&gt;
= GPUs =&lt;br /&gt;
&lt;br /&gt;
The way to identify the GPUs that are assigned to your job:&lt;br /&gt;
* Check the environment variable CUDA_VISIBLE_DEVICES. In a terminal run &amp;quot;echo $CUDA_VISIBLE_DEVICES&amp;quot;. The variable contains a comma-separated list of GPU ids, which already tells you how many GPUs are assigned to your job. If the variable does not exist, there are no GPUs assigned to the job.&lt;br /&gt;
&lt;br /&gt;
* List the GPUs with nvidia-smi. In a terminal run &amp;quot;nvidia-smi -L&amp;quot; and look for the GPUs you have been assigned. Remember their indexes (integers from 0 to 7).&lt;br /&gt;
&lt;br /&gt;
* If you have a single assigned GPU, the two steps above can be combined into the following command:&lt;br /&gt;
&lt;br /&gt;
nvidia-smi -L | grep $CUDA_VISIBLE_DEVICES&lt;br /&gt;
&lt;br /&gt;
[[File:check_gpu_id_highlighted.png]]&lt;br /&gt;
&lt;br /&gt;
* In the GPU dashboard, the GPUs are identified by their index.&lt;br /&gt;
&lt;br /&gt;
[[File:check_gpu_resources_highlighted.png]]&lt;br /&gt;
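The CUDA_VISIBLE_DEVICES check described above can be wrapped in a small helper. A sketch, not part of the PIC tooling (note that depending on the setup the variable may contain GPU UUIDs rather than plain indexes):&lt;br /&gt;

```python
# Parse CUDA_VISIBLE_DEVICES into a list of assigned GPU ids.
# An unset or empty variable means no GPUs are assigned to the job.
import os

def assigned_gpus(environ=None):
    env = environ if environ is not None else os.environ
    raw = env.get("CUDA_VISIBLE_DEVICES", "")
    return [gpu.strip() for gpu in raw.split(",") if gpu.strip()]

print(f"{len(assigned_gpus())} GPU(s) assigned:", assigned_gpus())
```

&lt;br /&gt;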
&lt;br /&gt;
= Jupyterlab user guide =&lt;br /&gt;
&lt;br /&gt;
You can find the official documentation of the currently installed version of JupyterLab (4.2) [https://jupyterlab.readthedocs.io/en/4.2.x/ here], where you will find instructions on how to:&lt;br /&gt;
* [https://jupyterlab.readthedocs.io/en/4.2.x/user/commands.html Access the command palette]&lt;br /&gt;
* [https://jupyterlab.readthedocs.io/en/4.2.x/user/toc.html Build a Table Of Contents]&lt;br /&gt;
* [https://jupyterlab.readthedocs.io/en/4.2.x/user/debugger.html Debug your code]&lt;br /&gt;
&lt;br /&gt;
A set of unofficial JupyterLab extensions is installed to provide additional functionality.&lt;br /&gt;
&lt;br /&gt;
== Jupytext ==&lt;br /&gt;
Pair your notebooks with text files to enhance version tracking.&lt;br /&gt;
https://jupytext.readthedocs.io&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
&lt;br /&gt;
Suppose you had a notebook (.ipynb file) containing only the cell below, tracked in a git repository:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%matplotlib inline&lt;br /&gt;
import numpy as np&lt;br /&gt;
import matplotlib.pyplot as plt&lt;br /&gt;
plt.imshow(np.random.random([10, 10]))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Different executions of the cell produce different images, and those images are embedded in a pseudo-binary format inside the notebook file. In this case, a '''git diff''' of the .ipynb file would produce a huge output (because the image changed), even if there was no change in the code. It is therefore convenient to sync the notebook with a text file (e.g. a .py script) using the jupytext extension and track that file with git. The outputs, including images, as well as some additional metadata, are not added to the synced text file, so for different executions of the same notebook the diff will always be empty.&lt;br /&gt;
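The idea can be shown with a toy example: the text representation keeps only the cell sources, so embedded outputs (like the random image) never reach the file tracked by git. This is only an illustration of the principle, not jupytext's actual implementation:&lt;br /&gt;

```python
# Toy illustration: extract only the code-cell sources from a
# notebook-like structure, dropping outputs and execution counts.
notebook = {
    "cells": [{
        "cell_type": "code",
        "source": ["import numpy as np\n", "np.random.random([10, 10])\n"],
        "outputs": [{"data": {"image/png": "iVBORw0KGgo..."}}],  # fake base64 blob
        "execution_count": 3,
    }],
    "nbformat": 4,
}

def to_script(nb):
    return "".join(
        "".join(cell["source"])
        for cell in nb["cells"] if cell["cell_type"] == "code"
    )

print(to_script(notebook))   # only the code, no image data
```

&lt;br /&gt;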
&lt;br /&gt;
== Git ==&lt;br /&gt;
Sidebar GUI for git repository management:&lt;br /&gt;
https://github.com/jupyterlab/jupyterlab-git&lt;br /&gt;
&lt;br /&gt;
== Variable inspector ==&lt;br /&gt;
Variable Inspector provides an interactive interface for inspecting the current state of variables in a JupyterLab session. It allows users to view variable names, types, shapes, and values in a structured table, facilitating exploratory analysis and debugging workflows similar to variable inspection tools available in environments such as MATLAB.&lt;br /&gt;
&lt;br /&gt;
https://github.com/jupyterlab-contrib/jupyterlab-variableInspector&lt;br /&gt;
&lt;br /&gt;
== Jupyter server proxy ==&lt;br /&gt;
&lt;br /&gt;
This extension is installed in PIC's Jupyter environment and makes network/web services running on the same host as the JupyterLab server accessible from outside through the &amp;quot;https://jupyter.pic.es/user/{username}/proxy/{port}&amp;quot; URL.&lt;br /&gt;
&lt;br /&gt;
Full documentation here: https://jupyter-server-proxy.readthedocs.io/en/latest/index.html&lt;br /&gt;
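For example, a service listening on some port inside your session would be reachable from outside at https://jupyter.pic.es/user/{username}/proxy/{port}. The snippet below starts a throwaway local HTTP server as a stand-in for such a service; the server itself is just an illustration:&lt;br /&gt;

```python
# Start a small local HTTP service and fetch a page from it; a service
# like this, bound to a port inside the Jupyter session, is what
# jupyter-server-proxy exposes under /user/{username}/proxy/{port}.
import http.server
import threading
import urllib.request

server = http.server.HTTPServer(("127.0.0.1", 0),
                                http.server.SimpleHTTPRequestHandler)
port = server.server_address[1]          # OS-assigned free port
threading.Thread(target=server.serve_forever, daemon=True).start()

with urllib.request.urlopen(f"http://127.0.0.1:{port}/") as resp:
    print("local service answered with status", resp.status)

server.shutdown()
```

&lt;br /&gt;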
&lt;br /&gt;
= Code samples =&lt;br /&gt;
&lt;br /&gt;
A repository with sample code can be found here: https://gitlab.pic.es/services/code-samples/&lt;br /&gt;
&lt;br /&gt;
= Running notebooks through HTCondor =&lt;br /&gt;
After developing a notebook, you might want to run it with different configurations. The following documentation explains&lt;br /&gt;
[[notebook_htcondor|how to run a notebook through HTCondor]].&lt;br /&gt;
&lt;br /&gt;
= Troubleshooting =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Logs ==&lt;br /&gt;
&lt;br /&gt;
The log files for the JupyterLab server are stored in &amp;quot;~/.jupyter&amp;quot;. The log files are created once the JupyterLab server job has finished.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Clean workspaces ==&lt;br /&gt;
&lt;br /&gt;
Jupyterlab stores the workspace status in the &amp;quot;~/.jupyter/lab/workspaces&amp;quot; folder. If you want to start with a fresh (empty) workspace, delete all the content of this folder before launching the notebook.&lt;br /&gt;
&lt;br /&gt;
    cd ~/.jupyter/lab/workspaces&lt;br /&gt;
    rm *&lt;br /&gt;
&lt;br /&gt;
= Known errors =&lt;br /&gt;
&lt;br /&gt;
== Error 500: Internal Server Error ==&lt;br /&gt;
&lt;br /&gt;
This is a generic error: it means that the JupyterLab server failed. This can happen for different reasons:&lt;br /&gt;
&lt;br /&gt;
* Your HOME folder is full. Log in to &amp;quot;ui.pic.es&amp;quot; and run &amp;quot;quota&amp;quot; to check usage against your quota. If it is full, you'll have to free up space.&lt;br /&gt;
&lt;br /&gt;
== 504 Gateway timeout ==&lt;br /&gt;
&lt;br /&gt;
The notebook job is running in HTCondor but the user cannot access the notebook server, and ultimately a 504 error is received.&lt;br /&gt;
&lt;br /&gt;
This is probably because there is some error when starting the JupyterLab server. First of all, shut down the notebook server and [[#Logs|check the logs]] to identify the problem. If you don't see the source of the error, try to [[#Clean_workspaces|clean the workspaces]] and launch a notebook again.&lt;br /&gt;
&lt;br /&gt;
== Install ROOT ==&lt;br /&gt;
&lt;br /&gt;
Environment creation and kernel installation&lt;br /&gt;
&lt;br /&gt;
    $ micromamba env create -p /data/pic/scratch/torradeflot/envs/mcdata root ipykernel&lt;br /&gt;
    $ micromamba activate mcdata&lt;br /&gt;
    $ python -m ipykernel install --user --name mcdata&lt;br /&gt;
&lt;br /&gt;
After doing this ROOT can be imported from a python shell, but it does not work from a notebook.&lt;br /&gt;
&lt;br /&gt;
* ROOT uses JIT compilation with Cling: https://root.cern/cling/&lt;br /&gt;
* conda provides its own set of compilation tools: gcc, gxx, gfortran&lt;br /&gt;
* Some environment variables are necessary to ensure that these two pieces work well together:&lt;br /&gt;
** PATH: to be able to find the compiler tools&lt;br /&gt;
** CONDA_BUILD_SYSROOT: needed to configure the compiler call&lt;br /&gt;
&lt;br /&gt;
These environment variables are not propagated to the notebook, because they are set by .bashrc, conda activate, etc., so we need to set them explicitly in the notebook:&lt;br /&gt;
&lt;br /&gt;
     import os&lt;br /&gt;
     import sys&lt;br /&gt;
     from pathlib import Path&lt;br /&gt;
     bin_dir = Path(sys.executable).parent&lt;br /&gt;
     os.environ['PATH'] = f&amp;quot;{bin_dir}:{os.environ['PATH']}&amp;quot;&lt;br /&gt;
     os.environ['CONDA_BUILD_SYSROOT'] = str(bin_dir.parent / 'x86_64-conda-linux-gnu/sysroot')&lt;br /&gt;
&lt;br /&gt;
== Loading libraries is very slow ==&lt;br /&gt;
Conda environments stored on hard-drive-backed storage can be extremely slow to load. If you encounter this problem, please ask&lt;br /&gt;
your support contact for a &amp;quot;SSD scratch&amp;quot; location to store environments. Currently this service is being tested and&lt;br /&gt;
deployed as needed.&lt;br /&gt;
&lt;br /&gt;
== Spawn failed: The 'ip' trait of a PICCondorSpawner instance expected a unicode string, not the NoneType None ==&lt;br /&gt;
&lt;br /&gt;
Jupyterhub could not get the host name from HTCondor's stdout, because it didn't match the expected regular expression.&lt;br /&gt;
&lt;br /&gt;
This error happens randomly from time to time; it does not imply any major problem in any of the services.&lt;br /&gt;
&lt;br /&gt;
Try to request a new notebook server.&lt;br /&gt;
&lt;br /&gt;
== 403 : Forbidden. XSRF cookie does not match POST argument ==&lt;br /&gt;
&lt;br /&gt;
The value of the &amp;quot;_xsrf&amp;quot; cookie sent by the browser does not match the expected value. This can happen for many reasons: temporary high load on the server, race conditions, temporary network instability, many open tabs in the browser, etc.&lt;br /&gt;
&lt;br /&gt;
In general it can be solved by closing all tabs pointing to &amp;quot;jupyter.pic.es&amp;quot;, clearing the cookies and connecting again to &amp;quot;jupyter.pic.es&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
== Proper usage of X509 based proxies ==&lt;br /&gt;
&lt;br /&gt;
We recently found that the usage of X509 proxies within a Jupyter session might cause problems, because the environment changes certain standard locations such as '''/tmp'''.&lt;br /&gt;
&lt;br /&gt;
For correct functioning, please create the proxy in the following way (example for Virgo):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ /bin/voms-proxy-init --voms virgo:/virgo/ligo --out ./x509up_u$(id -u)&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ export X509_USER_PROXY=./x509up_u$(id -u)&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ ls /cvmfs/ligo.osgstorage.org&lt;br /&gt;
ls: cannot access /cvmfs/ligo.osgstorage.org: Permission denied&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here the proxy cannot be located properly, so we have to put the complete path into the variable:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ pwd&lt;br /&gt;
/nfs/pic.es/user/&amp;lt;letter&amp;gt;/&amp;lt;user&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ export X509_USER_PROXY=/nfs/pic.es/user/&amp;lt;letter&amp;gt;/&amp;lt;user&amp;gt;/x509up_u$(id -u)&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ ls /cvmfs/ligo.osgstorage.org&lt;br /&gt;
frames  powerflux  pycbc  test_access&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Eriksen</name></author>
	</entry>
	<entry>
		<id>https://pwiki.pic.es/index.php?title=JupyterHub&amp;diff=1296</id>
		<title>JupyterHub</title>
		<link rel="alternate" type="text/html" href="https://pwiki.pic.es/index.php?title=JupyterHub&amp;diff=1296"/>
		<updated>2025-12-20T14:39:30Z</updated>

		<summary type="html">&lt;p&gt;Eriksen: /* Variable inspector */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;tldr; Connect to https://jupyter.pic.es/ . Enjoy!&lt;br /&gt;
= Introduction =&lt;br /&gt;
&lt;br /&gt;
PIC offers a service for running Jupyter notebooks on CPU or GPU resources. This service is primarily intended for code development and prototyping rather than data processing. The usage is similar to running notebooks on your personal computer, but it offers the advantage of developing and testing your code on different hardware configurations, and it eases scaling up your code, since it is tested in the same environment in which it would run at scale.&lt;br /&gt;
&lt;br /&gt;
Since the service is strictly intended for development and small-scale testing tasks, a shutdown policy for the sessions has been put in place:&lt;br /&gt;
&lt;br /&gt;
# The maximum duration for a session is 48h.&lt;br /&gt;
# After an idle period of 2 hours, the session will be closed. &lt;br /&gt;
&lt;br /&gt;
In practice, that means you should size the test data volume you work with during a session so that it can be processed in less than 48 hours.&lt;br /&gt;
&lt;br /&gt;
== How to connect to the service ==&lt;br /&gt;
&lt;br /&gt;
Go to [https://jupyter.pic.es jupyter.pic.es] to reach the login screen.&lt;br /&gt;
&lt;br /&gt;
[[File:login.png|700px|Login screen]]&lt;br /&gt;
&lt;br /&gt;
Sign in with your PIC user credentials. This will take you to the following screen.&lt;br /&gt;
&lt;br /&gt;
[[File:ScreenshotJupyterSpawn.png|700px|current]]&lt;br /&gt;
&lt;br /&gt;
Here you can choose the hardware configuration for your Jupyter session. You also have to choose the experiment (project) you are working on during the session. After choosing a configuration and pressing Start, the next screen shows the progress of the initialisation process. Keep in mind that a job containing your Jupyter session is actually sent to the HTCondor queuing system and waits for available resources before being started. This usually takes less than a minute but can take up to a few minutes depending on our resource usage.&lt;br /&gt;
&lt;br /&gt;
[[File:screen02.png|900px]]&lt;br /&gt;
&lt;br /&gt;
In the next screen you can choose the tool that you want to use for your work: a Python notebook, a Python console or a plain bash terminal.&lt;br /&gt;
For the Python environment (either notebook or console) you have two default options:&lt;br /&gt;
* the ipykernel version of Python 3&lt;br /&gt;
* the XPython version of Python 3.9, which allows you to use the integrated debugging module.&lt;br /&gt;
&lt;br /&gt;
== Virtual desktop ==&lt;br /&gt;
You will also see an icon labelled &amp;quot;D&amp;quot; (desktop), which starts a VNC session that allows the use of programs with graphical user interfaces.&lt;br /&gt;
&lt;br /&gt;
In addition, you will find the icon of Visual Studio, an integrated development environment.&lt;br /&gt;
&lt;br /&gt;
[[File:ScreenshotJupyterlab20231103.png|700px]]&lt;br /&gt;
&lt;br /&gt;
Your Python environments should appear under the Notebook and Console headers. In a later section we will show you how to create a new environment and how to remove an existing one.&lt;br /&gt;
&lt;br /&gt;
== Terminate your session and logout ==&lt;br /&gt;
&lt;br /&gt;
It is important that you terminate your session before you log out. In order to do so, go to the top page menu &amp;quot;'''File''' -&amp;gt; '''Hub Control Panel'''&amp;quot; and you will see the following screen.&lt;br /&gt;
&lt;br /&gt;
[[File:screen04.png|600px]]&lt;br /&gt;
&lt;br /&gt;
Here, click on the '''Stop My Server''' button. After that you can log out by clicking the '''Logout''' button in the right upper corner.&lt;br /&gt;
&lt;br /&gt;
=  Python virtual environments =&lt;br /&gt;
&lt;br /&gt;
This section covers the use of Python virtual environments with Jupyter.&lt;br /&gt;
&lt;br /&gt;
== Prebuilt environments == &lt;br /&gt;
&lt;br /&gt;
PIC's jupyterhub service comes with a collection of prebuilt environments located at '''/data/jupyter/software/envs'''.&lt;br /&gt;
&lt;br /&gt;
The master environment located at '''/data/jupyter/software/envs/master''' is the one used to start the jupyterlab service and the default for new notebooks.&lt;br /&gt;
&lt;br /&gt;
This is a non-exhaustive list of the included packages:&lt;br /&gt;
  - astropy=6.1.0&lt;br /&gt;
  - bokeh=3.4.1&lt;br /&gt;
  - dash=2.17.0&lt;br /&gt;
  - dask=2024.5.1&lt;br /&gt;
  - findspark=2.0.1&lt;br /&gt;
  - matplotlib=3.8.4&lt;br /&gt;
  - numpy=1.26.4&lt;br /&gt;
  - pandas=2.2.2&lt;br /&gt;
  - pillow=10.3.0&lt;br /&gt;
  - plotly=5.22.0&lt;br /&gt;
  - pyhive=0.7.0&lt;br /&gt;
  - python=3.12&lt;br /&gt;
  - pywavelets=1.4.1&lt;br /&gt;
  - scikit-image=0.22.0&lt;br /&gt;
  - scikit-learn=1.5.0&lt;br /&gt;
  - scipy=1.13.1&lt;br /&gt;
  - seaborn=0.13.2&lt;br /&gt;
  - statsmodels=0.14.2&lt;br /&gt;
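&lt;br /&gt;
As a quick check of whether an environment provides what you need, you can query package versions from inside a notebook. A minimal stdlib-only sketch ('''installed_version''' is a hypothetical helper, not part of the service):&lt;br /&gt;
&lt;br /&gt;
```python
import importlib.metadata as md

def installed_version(pkg):
    """Return the installed version string of `pkg`, or None if it is not installed."""
    try:
        return md.version(pkg)
    except md.PackageNotFoundError:
        return None
```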
&lt;br /&gt;
== Initialize conda (we highly recommend the use of mamba/micromamba) ==&lt;br /&gt;
&lt;br /&gt;
Before using conda/mamba in your bash session, you have to initialize it.&lt;br /&gt;
* For access to an available conda/mamba installation, please get in contact with your project liaison at PIC, who will give you the actual value for the '''/path/to/conda/mamba''' placeholder.&lt;br /&gt;
* If you want to use your own conda/mamba/micromamba installation, there are two recommended options:&lt;br /&gt;
** '''miniforge''': a distribution with conda and mamba executables in a minimal base environment, instructions [https://github.com/conda-forge/miniforge here]&lt;br /&gt;
** '''micromamba''': a self-contained executable (micromamba) with no base environment, instructions [https://mamba.readthedocs.io/en/latest/installation/micromamba-installation.html here]&lt;br /&gt;
&lt;br /&gt;
Log onto Jupyter and start a session. On the homepage of your Jupyter session, click on the terminal button on the session dashboard on the right to open a bash terminal.&lt;br /&gt;
&lt;br /&gt;
In order to use conda/mamba/micromamba you need to initialize the shell. This initialization can be persistent, which makes some changes to your '''~/.bashrc''' file, or you can do it every time you want to use it.&lt;br /&gt;
&lt;br /&gt;
Run the following command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
eval &amp;quot;$(/data/astro/software/miniforge3/bin/conda shell.bash hook)&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, if you are using miniforge and you want to persist the initialization:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/data/astro/software/miniforge3/bin/mamba init&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This changes the .bashrc file in your home directory so that the base environment is activated on login.&lt;br /&gt;
To avoid activating the base environment every time you log on to a node, run:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ conda config --set auto_activate_base false&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Link an existing environment to Jupyter ==&lt;br /&gt;
&lt;br /&gt;
You can find instructions on how to create your own environments, e.g. [[#Create_virtual_environments_with_venv_or_conda | here]].&lt;br /&gt;
&lt;br /&gt;
Log into Jupyter and start a session. From the session dashboard, choose the bash terminal.&lt;br /&gt;
&lt;br /&gt;
Inside the terminal, activate your environment.&lt;br /&gt;
&lt;br /&gt;
For '''conda/mamba''' environments:&lt;br /&gt;
* if you created the environment without a prefix:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba activate environment&lt;br /&gt;
(...) [neissner@td110 ~]$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The parentheses (...) in front of your bash prompt show the name of your environment. &lt;br /&gt;
* if you created the environment with a prefix:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba activate /path/to/environment&lt;br /&gt;
(...) [neissner@td110 ~]$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The parentheses (...) in front of your bash prompt show the absolute path of your environment. &lt;br /&gt;
&lt;br /&gt;
For '''venv''' environments:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ source /path/to/environment/bin/activate&lt;br /&gt;
(...) [neissner@td110 ~]$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Link the environment to a Jupyter kernel. For both '''conda/mamba''' and '''venv''':&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(...) [neissner@td110 ~]$ python -m  ipykernel install --user --name=whatever_kernel_name&lt;br /&gt;
Installed kernelspec whatever_kernel_name in &lt;br /&gt;
                         /nfs/pic.es/user/n/neissner/.local/share/jupyter/kernels/whatever_kernel_name&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you don't have the '''ipykernel''' module installed in your environment you may receive an error message like the one below when trying to run the previous command.&lt;br /&gt;
&amp;lt;pre&amp;gt;No module named ipykernel&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If this is the case, you need to install it by running: '''pip install ipykernel'''&lt;br /&gt;
&lt;br /&gt;
Deactivate your environment. &lt;br /&gt;
&lt;br /&gt;
For conda:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(...) [neissner@td110 ~]$ mamba deactivate&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For venv:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(...) [neissner@td110 ~]$ deactivate&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you can exit the terminal. After refreshing the Jupyter page, whatever_kernel_name appears in the dashboard. In this example, '''test''' has been used for whatever_kernel_name.&lt;br /&gt;
&lt;br /&gt;
[[File:screen05.png|700px]]&lt;br /&gt;
&lt;br /&gt;
== Unlink an environment from Jupyter ==&lt;br /&gt;
Log onto Jupyter, start a session and from the session dashboard choose the bash terminal. To remove your environment/kernel from Jupyter run:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ jupyter kernelspec uninstall whatever_kernel_name&lt;br /&gt;
Kernel specs to remove:&lt;br /&gt;
  whatever_kernel_name     /nfs/pic.es/user/n/neissner/.local/share/jupyter/kernels/whatever_kernel_name&lt;br /&gt;
Remove 1 kernel specs [y/N]: y&lt;br /&gt;
[RemoveKernelSpec] Removed /nfs/pic.es/user/n/neissner/.local/share/jupyter/kernels/whatever_kernel_name&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Keep in mind that, although not available in Jupyter anymore, the environment still exists. Whenever you need it, you can link it again.&lt;br /&gt;
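&lt;br /&gt;
If you just want to check which kernels are currently linked, without opening the Jupyter dashboard, you can list the user kernels folder directly. A stdlib-only sketch ('''user_kernels''' is a hypothetical helper name):&lt;br /&gt;
&lt;br /&gt;
```python
from pathlib import Path

def user_kernels(kernels_dir=None):
    """List the kernelspec names installed for the current user:
    each kernel is a directory under the user kernels path."""
    base = Path(kernels_dir) if kernels_dir else Path.home() / ".local/share/jupyter/kernels"
    if not base.exists():
        return []
    return sorted(p.name for p in base.iterdir() if p.is_dir())
```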
&lt;br /&gt;
== Create virtual environments with venv or conda ==&lt;br /&gt;
&lt;br /&gt;
Before creating a new environment, please get in contact with your project liaison at PIC, as there may already be a suitable environment for your needs in place.&lt;br /&gt;
&lt;br /&gt;
If none of the existing environments suits your needs, you can create a new environment.&lt;br /&gt;
First, create a directory in a suitable place to store the environment. For single-user environments, place them in your home under ~/env. For environments that will be shared with other project users, contact your project liaison and ask them for a path in a shared storage volume that is visible to all of them.&lt;br /&gt;
&lt;br /&gt;
Once you have the location (i.e. /path/to/env/folder), create the environment with the following commands:&lt;br /&gt;
&lt;br /&gt;
For '''venv''' environments '''(recommended)''':&lt;br /&gt;
&lt;br /&gt;
To create your_env at /path/to/env/your_env:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ cd /path/to/env&lt;br /&gt;
[neissner@td110 ~]$ python3 -m venv your_env&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Now you should be able to activate your environment and install additional modules&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ cd /path/to/env&lt;br /&gt;
[neissner@td110 ~]$ source your_env/bin/activate&lt;br /&gt;
(...)[neissner@td110 ~]$ pip install additional_module1 additional_module2 ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For '''conda/mamba''' environments&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba create --prefix /path/to/env/your_env module1 module2 ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The list of modules (module1, module2, ...) is optional. For instance, for a python3 environment with scipy you would specify: ''python=3 scipy''&lt;br /&gt;
&lt;br /&gt;
Now you should be able to activate your environment and install additional modules&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba activate /path/to/env/your_env&lt;br /&gt;
(...)[neissner@td110 ~]$ mamba install additional_module1 additional_module2 ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You can use pip install inside a mamba environment; however, resolving dependencies might require installing additional packages manually.&lt;br /&gt;
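&lt;br /&gt;
As a side note, the venv creation shown above can also be driven from Python's standard library, which can be handy in setup scripts. A sketch ('''create_env''' is a hypothetical helper name):&lt;br /&gt;
&lt;br /&gt;
```python
import venv
from pathlib import Path

def create_env(path, with_pip=False):
    """Create a virtual environment at `path`. Pass with_pip=True to also
    bootstrap pip, as `python3 -m venv` does by default."""
    venv.EnvBuilder(with_pip=with_pip).create(path)
    # the environment's interpreter ends up under bin/
    return Path(path) / "bin" / "python"
```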
&lt;br /&gt;
== Conda / mamba configuration ==&lt;br /&gt;
&lt;br /&gt;
The behaviour of conda/mamba can be configured through the &amp;quot;$HOME/.condarc&amp;quot; file, described [https://docs.conda.io/projects/conda/en/latest/configuration.html here]. Some interesting parameters:&lt;br /&gt;
&lt;br /&gt;
* envs_dirs: The list of directories to search for named environments. E.g.: different locations where you created environments&lt;br /&gt;
&lt;br /&gt;
  envs_dirs:&lt;br /&gt;
    - /data/pic/scratch/torradeflot/envs&lt;br /&gt;
    - /data/astro/scratch/torradeflot/envs&lt;br /&gt;
    - /data/aai/scratch/torradeflot/envs&lt;br /&gt;
&lt;br /&gt;
* pkgs_dirs: The list of directories where conda packages are stored&lt;br /&gt;
&lt;br /&gt;
  pkgs_dirs:&lt;br /&gt;
    - /data/aai/scratch_ssd/torradeflot/pkgs&lt;br /&gt;
    - /data/aai/scratch/torradeflot/pkgs&lt;br /&gt;
    - /data/pic/scratch/torradeflot/pkgs&lt;br /&gt;
&lt;br /&gt;
If `pkgs_dirs` and `envs_dirs` are on the same storage, conda will use hard links, thus optimizing disk space.&lt;br /&gt;
&lt;br /&gt;
= Software of particular interest =&lt;br /&gt;
== SageMath ==&lt;br /&gt;
&lt;br /&gt;
[https://www.sagemath.org/ SageMath] is particularly interesting for Cosmology because it allows symbolic calculations, e.g. deriving the equations of motion for the scale factor starting from a customised space-time metric. &lt;br /&gt;
&lt;br /&gt;
=== Standard cosmology examples ===&lt;br /&gt;
&lt;br /&gt;
* The Friedmann equations for the FLRW solution of the Einstein equations.&lt;br /&gt;
You can find the corresponding notebook in any PIC terminal at '''/data/astro/software/notebooks/FLRW_cosmology.ipynb'''&lt;br /&gt;
&lt;br /&gt;
* The notebook you can find at '''/data/astro/software/notebooks/FLRW_cosmology_solutions.ipynb''' uses known analytical solutions of the FLRW cosmology and produces this image for the evolution of the scale factor:&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot_Sage05.png|300px]]&lt;br /&gt;
&lt;br /&gt;
* The notebook you can find at '''/data/astro/software/notebooks/Interior_Schwarzschild.ipynb''' shows the formalism for the interior Schwarzschild metric and displays the solutions for density and pressure of a static celestial object that is sufficiently larger than its Schwarzschild radius. The pressure for an object with constant density is shown in the image:&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot_Sage06.png|300px]]&lt;br /&gt;
&lt;br /&gt;
=== Enabling SageMath environment in Jupyter ===&lt;br /&gt;
&lt;br /&gt;
If you have never initialized mamba, run:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ /data/astro/software/centos7/conda/mambaforge_4.14.0/bin/mamba init&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ conda config --set auto_activate_base false&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After that, you can enable SageMath for use in a Jupyter notebook session: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ mamba activate /data/astro/software/envs/sage&lt;br /&gt;
(/data/astro/software/envs/sage) [&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ python -m  ipykernel install --user --name=sage&lt;br /&gt;
....&lt;br /&gt;
(/data/astro/software/envs/sage) [&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ mamba deactivate&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This creates a file in your home, '''~/.local/share/jupyter/kernels/sage/kernel.json''',&lt;br /&gt;
which has to be modified to look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
 &amp;quot;argv&amp;quot;: [&lt;br /&gt;
  &amp;quot;/data/astro/software/envs/sage/bin/sage&amp;quot;,&lt;br /&gt;
  &amp;quot;--python&amp;quot;,&lt;br /&gt;
  &amp;quot;-m&amp;quot;,&lt;br /&gt;
  &amp;quot;sage.repl.ipython_kernel&amp;quot;,&lt;br /&gt;
  &amp;quot;-f&amp;quot;,&lt;br /&gt;
  &amp;quot;{connection_file}&amp;quot;&lt;br /&gt;
 ],&lt;br /&gt;
 &amp;quot;display_name&amp;quot;: &amp;quot;sage&amp;quot;,&lt;br /&gt;
 &amp;quot;language&amp;quot;: &amp;quot;sage&amp;quot;,&lt;br /&gt;
 &amp;quot;metadata&amp;quot;: {&lt;br /&gt;
  &amp;quot;debugger&amp;quot;: true&lt;br /&gt;
 }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Next time you go to your Jupyter dashboard you will find the sage environment listed there.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Dask =&lt;br /&gt;
Dask supports parallel computations in Python. The PIC Jupyterlab has an extension for launching&lt;br /&gt;
your own Dask clusters. For more information, see [[Dask|Dask documentation]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Using a singularity image as a jupyter kernel =&lt;br /&gt;
&lt;br /&gt;
Sometimes the software stack of a project is provided in the form of a Singularity image. It is then convenient to use this image as a kernel for the notebooks in jupyter.pic.es.&lt;br /&gt;
&lt;br /&gt;
The Singularity image to be used as a kernel needs to fulfill some requirements, which differ depending on the programming language.&lt;br /&gt;
&lt;br /&gt;
== Python jupyter kernel in a singularity image ==&lt;br /&gt;
&lt;br /&gt;
The Singularity image needs to have '''python''' and the '''ipykernel''' module installed. &lt;br /&gt;
&lt;br /&gt;
* Create the folder that will host the kernel definition&lt;br /&gt;
&lt;br /&gt;
  mkdir -p $HOME/.local/share/jupyter/kernels/singularity&lt;br /&gt;
&lt;br /&gt;
* Create the '''kernel.json''' file inside it with the following content:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 {&lt;br /&gt;
   &amp;quot;argv&amp;quot;: [&lt;br /&gt;
     &amp;quot;singularity&amp;quot;,&lt;br /&gt;
     &amp;quot;exec&amp;quot;,&lt;br /&gt;
     &amp;quot;--cleanenv&amp;quot;,&lt;br /&gt;
     &amp;quot;/path/to/the/singularity/image.sif&amp;quot;,&lt;br /&gt;
     &amp;quot;python&amp;quot;,&lt;br /&gt;
     &amp;quot;-m&amp;quot;,&lt;br /&gt;
     &amp;quot;ipykernel&amp;quot;,&lt;br /&gt;
     &amp;quot;-f&amp;quot;,&lt;br /&gt;
     &amp;quot;{connection_file}&amp;quot;&lt;br /&gt;
   ],&lt;br /&gt;
   &amp;quot;language&amp;quot;: &amp;quot;python&amp;quot;,&lt;br /&gt;
   &amp;quot;display_name&amp;quot;: &amp;quot;singularity-kernel&amp;quot;&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Refresh or start the jupyterlab interface, and the Singularity kernel should appear in the launcher tab.&lt;br /&gt;
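&lt;br /&gt;
The two steps above (creating the kernel folder and writing '''kernel.json''') can also be scripted. The following stdlib-only sketch writes the same kernel definition shown above ('''write_singularity_kernel''' is a hypothetical helper name):&lt;br /&gt;
&lt;br /&gt;
```python
import json
from pathlib import Path

def write_singularity_kernel(image, name="singularity", kernels_dir=None):
    """Write a kernel.json that launches ipykernel inside a Singularity image."""
    base = Path(kernels_dir) if kernels_dir else Path.home() / ".local/share/jupyter/kernels"
    kdir = base / name
    kdir.mkdir(parents=True, exist_ok=True)
    spec = {
        "argv": ["singularity", "exec", "--cleanenv", str(image),
                 "python", "-m", "ipykernel", "-f", "{connection_file}"],
        "language": "python",
        "display_name": f"{name}-kernel",
    }
    path = kdir / "kernel.json"
    path.write_text(json.dumps(spec, indent=2))
    return path
```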
&lt;br /&gt;
= GPUs =&lt;br /&gt;
&lt;br /&gt;
The way to identify the GPUs that are assigned to your job is:&lt;br /&gt;
* check the environment variable CUDA_VISIBLE_DEVICES. In a terminal, run &amp;quot;echo $CUDA_VISIBLE_DEVICES&amp;quot;. The variable contains a comma-separated list of GPU ids, which tells you how many GPUs are assigned to your job. If the variable does not exist, no GPUs are assigned to the job.&lt;br /&gt;
&lt;br /&gt;
* list the GPUs with nvidia-smi. In a terminal, run &amp;quot;nvidia-smi -L&amp;quot; and look for the GPUs you have been assigned. Remember their indices (integers from 0 to 7).&lt;br /&gt;
&lt;br /&gt;
* if you have a single assigned GPU, the two steps above can be combined into one command:&lt;br /&gt;
&lt;br /&gt;
  nvidia-smi -L | grep $CUDA_VISIBLE_DEVICES&lt;br /&gt;
&lt;br /&gt;
[[File:check_gpu_id_highlighted.png]]&lt;br /&gt;
&lt;br /&gt;
* in the GPU dashboard, the GPUs are identified by their index&lt;br /&gt;
&lt;br /&gt;
[[File:check_gpu_resources_highlighted.png]]&lt;br /&gt;
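&lt;br /&gt;
From inside a notebook, the CUDA_VISIBLE_DEVICES check can also be done in Python. A stdlib-only sketch ('''assigned_gpus''' is a hypothetical helper name):&lt;br /&gt;
&lt;br /&gt;
```python
import os

def assigned_gpus():
    """Return the list of GPU ids assigned to this job, parsed from
    CUDA_VISIBLE_DEVICES; an empty list means no GPUs are assigned."""
    ids = os.environ.get("CUDA_VISIBLE_DEVICES", "")
    return [part.strip() for part in ids.split(",") if part.strip()]
```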
&lt;br /&gt;
= Jupyterlab user guide =&lt;br /&gt;
&lt;br /&gt;
You can find the official documentation of the currently installed version of jupyterlab [https://jupyterlab.readthedocs.io/en/4.2.x/ here]. There you will find instructions on how to:&lt;br /&gt;
* [https://jupyterlab.readthedocs.io/en/4.2.x/user/commands.html Access the command palette]&lt;br /&gt;
* [https://jupyterlab.readthedocs.io/en/4.2.x/user/toc.html Build a Table Of Contents]&lt;br /&gt;
* [https://jupyterlab.readthedocs.io/en/4.2.x/user/debugger.html Debug your code]&lt;br /&gt;
&lt;br /&gt;
A set of non-official jupyterlab extensions is installed to provide additional functionality.&lt;br /&gt;
&lt;br /&gt;
== Jupytext ==&lt;br /&gt;
Pair your notebooks with text files to enhance version tracking.&lt;br /&gt;
https://jupytext.readthedocs.io&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
&lt;br /&gt;
Suppose you had a notebook (.ipynb file) tracked in a git repository, containing only the cell below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%matplotlib inline&lt;br /&gt;
import numpy as np&lt;br /&gt;
import matplotlib.pyplot as plt&lt;br /&gt;
plt.imshow(np.random.random([10, 10]))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Different executions of the cell would produce different images, and the images are embedded in a pseudo-binary format inside the notebook file. In this case, doing a '''git diff''' of the .ipynb file would produce a huge output (because the image changed), even if there wasn't any change in the code. It is thus convenient to sync the notebook with a text file (e.g. a .py script) using the jupytext extension and track this one with git. The outputs, including images, as well as some additional metadata, won't be added to the synced text file. So in the case of different executions of the same notebook, the diff will always be empty.&lt;br /&gt;
&lt;br /&gt;
== Git ==&lt;br /&gt;
Sidebar GUI for git repository management:&lt;br /&gt;
https://github.com/jupyterlab/jupyterlab-git&lt;br /&gt;
&lt;br /&gt;
== Variable inspector ==&lt;br /&gt;
Variable Inspector provides an interactive interface for inspecting the current state of variables in a JupyterLab session. It allows users to view variable names, types, shapes, and values in a structured table, facilitating exploratory analysis and debugging workflows similar to variable inspection tools available in environments such as MATLAB.&lt;br /&gt;
&lt;br /&gt;
https://github.com/jupyterlab-contrib/jupyterlab-variableInspector&lt;br /&gt;
&lt;br /&gt;
== Jupyter server proxy ==&lt;br /&gt;
&lt;br /&gt;
This extension is installed in PIC's Jupyter environment. It allows network/web services running on the same host as the jupyterlab server to be accessed from outside, through the &amp;quot;https://jupyter.pic.es/user/{username}/proxy/{port}&amp;quot; URL.&lt;br /&gt;
&lt;br /&gt;
Full documentation here: https://jupyter-server-proxy.readthedocs.io/en/latest/index.html&lt;br /&gt;
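&lt;br /&gt;
For example, a web application listening on port 8050 inside your session would be reachable at a URL built like this (a sketch; '''proxy_url''' is a hypothetical helper, and the username is your PIC login):&lt;br /&gt;
&lt;br /&gt;
```python
def proxy_url(username, port):
    """Build the jupyter-server-proxy URL for a service listening on `port`
    inside the jupyterlab session of `username`."""
    return f"https://jupyter.pic.es/user/{username}/proxy/{port}"
```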
&lt;br /&gt;
= Code samples =&lt;br /&gt;
&lt;br /&gt;
A repository with sample code can be found here: https://gitlab.pic.es/services/code-samples/&lt;br /&gt;
&lt;br /&gt;
= Running notebooks through HTCondor =&lt;br /&gt;
After developing a notebook, you might want to run it with different configurations. The following documentation explains&lt;br /&gt;
&lt;br /&gt;
[[notebook_htcondor|how to run a notebook through HTCondor]].&lt;br /&gt;
&lt;br /&gt;
= Troubleshooting =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Logs ==&lt;br /&gt;
&lt;br /&gt;
The log files for the jupyterlab server are stored in &amp;quot;~/.jupyter&amp;quot;. They are created once the jupyterlab server job has finished.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Clean workspaces ==&lt;br /&gt;
&lt;br /&gt;
Jupyterlab stores the workspace status in the &amp;quot;~/.jupyter/lab/workspaces&amp;quot; folder. If you want to start with a fresh (empty) workspace, delete all the content of this folder before launching the notebook.&lt;br /&gt;
&lt;br /&gt;
    cd ~/.jupyter/lab/workspaces&lt;br /&gt;
    rm *&lt;br /&gt;
&lt;br /&gt;
= Known errors =&lt;br /&gt;
&lt;br /&gt;
== Error 500: Internal Server Error ==&lt;br /&gt;
&lt;br /&gt;
This is a generic error: it means that the jupyterlab server failed. Possible reasons include:&lt;br /&gt;
&lt;br /&gt;
* Your HOME folder is full. Log in to &amp;quot;ui.pic.es&amp;quot; and run &amp;quot;quota&amp;quot; to check usage against your quota. If it is full, you will have to free up space.&lt;br /&gt;
&lt;br /&gt;
== 504 Gateway timeout ==&lt;br /&gt;
&lt;br /&gt;
The notebook job is running in HTCondor but the user cannot access the notebook server, and ultimately a 504 error is received.&lt;br /&gt;
&lt;br /&gt;
This is probably because there is some error when starting the jupyterlab server. First, shut down the notebook server and [[#Logs|check the logs]] to better identify the problem. If you do not see the source of the error, try to [[#Clean_workspaces|clean the workspaces]] and launch a notebook again.&lt;br /&gt;
&lt;br /&gt;
== Install ROOT ==&lt;br /&gt;
&lt;br /&gt;
Environment creation and kernel installation:&lt;br /&gt;
&lt;br /&gt;
    $ micromamba create -p /data/pic/scratch/torradeflot/envs/mcdata root ipykernel&lt;br /&gt;
    $ micromamba activate /data/pic/scratch/torradeflot/envs/mcdata&lt;br /&gt;
    $ python -m ipykernel install --user --name mcdata&lt;br /&gt;
&lt;br /&gt;
After doing this, ROOT can be imported from a python shell, but it does not work from a notebook.&lt;br /&gt;
&lt;br /&gt;
* ROOT uses JIT compilation with Cling: https://root.cern/cling/&lt;br /&gt;
* conda provides its own set of compilation tools: gcc, gxx, gfortran&lt;br /&gt;
* Some environment variables are necessary to ensure that these two pieces work well together:&lt;br /&gt;
** PATH: to be able to find the compiler tools&lt;br /&gt;
** CONDA_BUILD_SYSROOT: needed to configure the compiler call&lt;br /&gt;
&lt;br /&gt;
These environment variables are not propagated to the notebook, because they are set by .bashrc, conda activate, etc., so we need to set them explicitly in the notebook:&lt;br /&gt;
&lt;br /&gt;
     import os&lt;br /&gt;
     import sys&lt;br /&gt;
     from pathlib import Path&lt;br /&gt;
     bin_dir = Path(sys.executable).parent&lt;br /&gt;
     os.environ['PATH'] = f&amp;quot;{bin_dir}:{os.environ['PATH']}&amp;quot;&lt;br /&gt;
     os.environ['CONDA_BUILD_SYSROOT'] = str(bin_dir.parent / 'x86_64-conda-linux-gnu/sysroot')&lt;br /&gt;
&lt;br /&gt;
== Loading libraries is very slow ==&lt;br /&gt;
Conda environments on storage backed by hard drives can be extremely slow to load. If you encounter this problem, please ask &lt;br /&gt;
your support contact for a &amp;quot;SSD scratch&amp;quot; location to store environments. Currently this service is being tested and&lt;br /&gt;
deployed as needed.&lt;br /&gt;
&lt;br /&gt;
== Spawn failed: The 'ip' trait of a PICCondorSpawner instance expected a unicode string, not the NoneType None ==&lt;br /&gt;
&lt;br /&gt;
Jupyterhub could not get the host name from HTCondor's stdout, because it didn't match the expected regular expression.&lt;br /&gt;
&lt;br /&gt;
This error happens randomly from time to time; it does not imply any major problem in any of the services.&lt;br /&gt;
&lt;br /&gt;
Try to request a new notebook server.&lt;br /&gt;
&lt;br /&gt;
== 403 : Forbidden. XSRF cookie does not match POST argument ==&lt;br /&gt;
&lt;br /&gt;
The value of the &amp;quot;_xsrf&amp;quot; cookie sent by the browser does not match the expected value. This could be for many reasons: temporary high load on the server, race conditions, temporary network instability, many open tabs in the browser, etc.&lt;br /&gt;
&lt;br /&gt;
In general, it can be solved by closing all tabs pointing to &amp;quot;jupyter.pic.es&amp;quot;, clearing the cookies and connecting again to &amp;quot;jupyter.pic.es&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
== Proper usage of X509 based proxies ==&lt;br /&gt;
&lt;br /&gt;
We found recently that using proxies within a Jupyter session might cause problems, because the environment changes certain standard locations such as '''/tmp'''.&lt;br /&gt;
&lt;br /&gt;
For correct functioning, please create the proxy the following way (example for Virgo):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ /bin/voms-proxy-init --voms virgo:/virgo/ligo --out ./x509up_u$(id -u)&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ export X509_USER_PROXY=./x509up_u$(id -u)&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ ls /cvmfs/ligo.osgstorage.org&lt;br /&gt;
ls: cannot access /cvmfs/ligo.osgstorage.org: Permission denied&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here the proxy cannot be located properly. Therefore we have to put the absolute path into the variable:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ pwd&lt;br /&gt;
/nfs/pic.es/user/&amp;lt;letter&amp;gt;/&amp;lt;user&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ export X509_USER_PROXY=/nfs/pic.es/user/&amp;lt;letter&amp;gt;/&amp;lt;user&amp;gt;/x509up_u$(id -u)&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ ls /cvmfs/ligo.osgstorage.org&lt;br /&gt;
frames  powerflux  pycbc  test_access&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Eriksen</name></author>
	</entry>
	<entry>
		<id>https://pwiki.pic.es/index.php?title=JupyterHub&amp;diff=1295</id>
		<title>JupyterHub</title>
		<link rel="alternate" type="text/html" href="https://pwiki.pic.es/index.php?title=JupyterHub&amp;diff=1295"/>
		<updated>2025-12-20T14:36:34Z</updated>

		<summary type="html">&lt;p&gt;Eriksen: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;tldr; Connect to https://jupyter.pic.es/ . Enjoy!&lt;br /&gt;
= Introduction =&lt;br /&gt;
&lt;br /&gt;
PIC offers a service for running Jupyter notebooks on CPU or GPU resources. This service is primarily intended for code development or prototyping rather than data processing. The usage is similar to running notebooks on your personal computer, but offers the advantage of developing and testing your code on different hardware configurations, as well as facilitating the scalability of the code, since it is tested in the same environment in which it would run at scale. &lt;br /&gt;
&lt;br /&gt;
Since the service is strictly intended for development and small-scale testing tasks, a shutdown policy for the sessions has been put in place:&lt;br /&gt;
&lt;br /&gt;
# The maximum duration for a session is 48h.&lt;br /&gt;
# After an idle period of 2 hours, the session will be closed. &lt;br /&gt;
&lt;br /&gt;
In practice, that means you should size the test data volume you work with during a session so that it can be processed in less than 48 hours.&lt;br /&gt;
&lt;br /&gt;
== How to connect to the service ==&lt;br /&gt;
&lt;br /&gt;
Go to [https://jupyter.pic.es jupyter.pic.es] to reach the login screen.&lt;br /&gt;
&lt;br /&gt;
[[File:login.png|700px|Login screen]]&lt;br /&gt;
&lt;br /&gt;
Sign in with your PIC user credentials. This will take you to the following screen.&lt;br /&gt;
&lt;br /&gt;
[[File:ScreenshotJupyterSpawn.png|700px|current]]&lt;br /&gt;
&lt;br /&gt;
Here you can choose the hardware configuration for your Jupyter session. You also have to choose the experiment (project) you are working on during the session. After choosing a configuration and pressing Start, the next screen shows the progress of the initialisation process. Keep in mind that a job containing your Jupyter session is actually sent to the HTCondor queuing system and waits for available resources before being started. This usually takes less than a minute but can take up to a few minutes depending on our resource usage.&lt;br /&gt;
&lt;br /&gt;
[[File:screen02.png|900px]]&lt;br /&gt;
&lt;br /&gt;
In the next screen you can choose the tool that you want to use for your work: a Python notebook, a Python console or a plain bash terminal.&lt;br /&gt;
For the Python environment (either notebook or console) you have two default options:&lt;br /&gt;
* the ipykernel version of Python 3&lt;br /&gt;
* the XPython version of Python 3.9, which allows you to use the integrated debugging module.&lt;br /&gt;
&lt;br /&gt;
== Virtual desktop ==&lt;br /&gt;
You will also see an icon with a &amp;quot;D&amp;quot; (desktop), which starts a VNC session allowing the use of programs with graphical user interfaces.&lt;br /&gt;
&lt;br /&gt;
More recently, you can also find the icon of Visual Studio, an integrated development environment.&lt;br /&gt;
&lt;br /&gt;
[[File:ScreenshotJupyterlab20231103.png|700px]]&lt;br /&gt;
&lt;br /&gt;
Your Python environments should appear under the Notebook and Console headers. A later section shows how to create a new environment and how to remove an existing one.&lt;br /&gt;
&lt;br /&gt;
== Terminate your session and logout ==&lt;br /&gt;
&lt;br /&gt;
It is important that you terminate your session before you log out. In order to do so, go to the top page menu &amp;quot;'''File''' -&amp;gt; '''Hub Control Panel'''&amp;quot; and you will see the following screen.&lt;br /&gt;
&lt;br /&gt;
[[File:screen04.png|600px]]&lt;br /&gt;
&lt;br /&gt;
Here, click on the '''Stop My Server''' button. After that you can log out by clicking the '''Logout''' button in the right upper corner.&lt;br /&gt;
&lt;br /&gt;
=  Python virtual environments =&lt;br /&gt;
&lt;br /&gt;
This section covers the use of Python virtual environments with Jupyter.&lt;br /&gt;
&lt;br /&gt;
== Prebuilt environments == &lt;br /&gt;
&lt;br /&gt;
PIC's JupyterHub service comes with a collection of prebuilt environments located at '''/data/jupyter/software/envs'''.&lt;br /&gt;
&lt;br /&gt;
The master environment located at '''/data/jupyter/software/envs/master''' is the one used to start the jupyterlab service and the default for new notebooks.&lt;br /&gt;
&lt;br /&gt;
This is a non-exhaustive list of the included packages:&lt;br /&gt;
  - astropy=6.1.0&lt;br /&gt;
  - bokeh=3.4.1&lt;br /&gt;
  - dash=2.17.0&lt;br /&gt;
  - dask=2024.5.1&lt;br /&gt;
  - findspark=2.0.1&lt;br /&gt;
  - matplotlib=3.8.4&lt;br /&gt;
  - numpy=1.26.4&lt;br /&gt;
  - pandas=2.2.2&lt;br /&gt;
  - pillow=10.3.0&lt;br /&gt;
  - plotly=5.22.0&lt;br /&gt;
  - pyhive=0.7.0&lt;br /&gt;
  - python=3.12&lt;br /&gt;
  - pywavelets=1.4.1&lt;br /&gt;
  - scikit-image=0.22.0&lt;br /&gt;
  - scikit-learn=1.5.0&lt;br /&gt;
  - scipy=1.13.1&lt;br /&gt;
  - seaborn=0.13.2&lt;br /&gt;
  - statsmodels=0.14.2&lt;br /&gt;
&lt;br /&gt;
== Initialize conda (we highly recommend the use of mamba/micromamba) ==&lt;br /&gt;
&lt;br /&gt;
Before using conda/mamba in your bash session, you have to initialize it.&lt;br /&gt;
* For access to an available conda/mamba installation, please get in contact with your project liaison at PIC, who will give you the actual value for the '''/path/to/conda/mamba''' placeholder.&lt;br /&gt;
* If you want to use your own conda/mamba/micromamba installation, there are two recommended options:&lt;br /&gt;
** '''miniforge''': a distribution with conda and mamba executables in a minimal base environment, instructions [https://github.com/conda-forge/miniforge here]&lt;br /&gt;
** '''micromamba''': a self-contained executable (micromamba) with no base environment, instructions [https://mamba.readthedocs.io/en/latest/installation/micromamba-installation.html here]&lt;br /&gt;
&lt;br /&gt;
Log onto Jupyter and start a session. On the homepage of your Jupyter session, click on the terminal button on the session dashboard on the right to open a bash terminal.&lt;br /&gt;
&lt;br /&gt;
In order to use conda/mamba/micromamba you need to initialize the shell. This initialization can be made persistent, which makes some changes to your '''~/.bashrc''' file, or you can repeat it every time you want to use it.&lt;br /&gt;
&lt;br /&gt;
Run the following command (the path below is an example installation; use the one for your project):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
eval &amp;quot;$(/data/astro/software/miniforge3/bin/conda shell.bash hook)&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, if you are using miniforge and you want to persist the initialization:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/data/astro/software/miniforge3/bin/mamba init&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This changes the .bashrc file in your home directory so that the base environment is activated on login.&lt;br /&gt;
To prevent the base environment from being activated every time you log on to a node, run:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ conda config --set auto_activate_base false&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Link an existing environment to Jupyter ==&lt;br /&gt;
&lt;br /&gt;
You can find instructions on how to create your own environments, e.g. [[#Create_virtual_environments_with_venv_or_conda | here]].&lt;br /&gt;
&lt;br /&gt;
Log into Jupyter, start a session. From the session dashboard choose the bash terminal.&lt;br /&gt;
&lt;br /&gt;
Inside the terminal, activate your environment.&lt;br /&gt;
&lt;br /&gt;
For '''conda/mamba''' environments:&lt;br /&gt;
* if you created the environment without a prefix:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba activate environment&lt;br /&gt;
(...) [neissner@td110 ~]$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The parentheses (...) in front of your bash prompt show the name of your environment. &lt;br /&gt;
* if you created the environment with a prefix:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba activate /path/to/environment&lt;br /&gt;
(...) [neissner@td110 ~]$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The parentheses (...) in front of your bash prompt show the absolute path of your environment. &lt;br /&gt;
&lt;br /&gt;
For '''venv''' environments:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ source /path/to/environment/bin/activate&lt;br /&gt;
(...) [neissner@td110 ~]$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Link the environment to a Jupyter kernel. For both, '''conda/mamba''' and '''venv''':&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(...) [neissner@td110 ~]$ python -m  ipykernel install --user --name=whatever_kernel_name&lt;br /&gt;
Installed kernelspec whatever_kernel_name in &lt;br /&gt;
                         /nfs/pic.es/user/n/neissner/.local/share/jupyter/kernels/whatever_kernel_name&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you don't have the '''ipykernel''' module installed in your environment you may receive an error message like the one below when trying to run the previous command.&lt;br /&gt;
&amp;lt;pre&amp;gt;No module named ipykernel&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If this is the case, you need to install it by running: '''pip install ipykernel'''&lt;br /&gt;
&lt;br /&gt;
Deactivate your environment. &lt;br /&gt;
&lt;br /&gt;
For conda:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(...) [neissner@td110 ~]$ mamba deactivate&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For venv:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(...) [neissner@td110 ~]$ deactivate&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you can exit the terminal. After refreshing the Jupyter page, whatever_kernel_name appears in the dashboard. In this example, '''test''' has been used for whatever_kernel_name.&lt;br /&gt;
&lt;br /&gt;
[[File:screen05.png|700px]]&lt;br /&gt;
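&lt;br /&gt;
Under the hood, the '''ipykernel install''' command simply writes a small '''kernel.json''' spec into your kernels folder. As an illustration, here is a minimal Python sketch of the file it creates (the interpreter path and kernel name are hypothetical placeholders):&lt;br /&gt;

```python
import json
import tempfile
from pathlib import Path

# Hypothetical values; substitute your environment's python and your kernel name.
env_python = "/path/to/env/your_env/bin/python"
kernel_name = "whatever_kernel_name"

spec = {
    # The command Jupyter runs to start a kernel for a notebook.
    "argv": [env_python, "-m", "ipykernel_launcher", "-f", "{connection_file}"],
    "display_name": kernel_name,
    "language": "python",
}

# Normally this lands in ~/.local/share/jupyter/kernels/NAME/kernel.json;
# here we write it to a temporary folder just to show the layout.
kernel_dir = Path(tempfile.mkdtemp()) / kernel_name
kernel_dir.mkdir()
(kernel_dir / "kernel.json").write_text(json.dumps(spec, indent=1))
print((kernel_dir / "kernel.json").read_text())
```

Removing that spec folder again is exactly what '''jupyter kernelspec uninstall''' does in the next section.&lt;br /&gt;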
&lt;br /&gt;
== Unlink an environment from Jupyter ==&lt;br /&gt;
Log onto Jupyter, start a session and from the session dashboard choose the bash terminal. To remove your environment/kernel from Jupyter run:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ jupyter kernelspec uninstall whatever_kernel_name&lt;br /&gt;
Kernel specs to remove:&lt;br /&gt;
  whatever_kernel_name     /nfs/pic.es/user/n/neissner/.local/share/jupyter/kernels/whatever_kernel_name&lt;br /&gt;
Remove 1 kernel specs [y/N]: y&lt;br /&gt;
[RemoveKernelSpec] Removed /nfs/pic.es/user/n/neissner/.local/share/jupyter/kernels/whatever_kernel_name&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Keep in mind that, although not available in Jupyter anymore, the environment still exists. Whenever you need it, you can link it again.&lt;br /&gt;
&lt;br /&gt;
== Create virtual environments with venv or conda ==&lt;br /&gt;
&lt;br /&gt;
Before creating a new environment, please get in contact with your project liaison at PIC, as a suitable environment for your needs may already be in place.&lt;br /&gt;
&lt;br /&gt;
If none of the existing environments suits your needs, you can create a new environment.&lt;br /&gt;
First, create a directory in a suitable place to store the environment. For single-user environments, place them in your home directory under ~/env. For environments that will be shared with other project users, contact your project liaison and ask for a path in a shared storage volume visible to all of them.&lt;br /&gt;
&lt;br /&gt;
Once you have the location (i.e. /path/to/env/folder), create the environment with the following commands:&lt;br /&gt;
&lt;br /&gt;
For '''venv''' environments '''(recommended)'''&lt;br /&gt;
&lt;br /&gt;
For example, if your_env is to be installed at /path/to/env/your_env:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ cd /path/to/env&lt;br /&gt;
[neissner@td110 ~]$ python3 -m venv your_env&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Now you should be able to activate your environment and install additional modules&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ cd /path/to/env&lt;br /&gt;
[neissner@td110 ~]$ source your_env/bin/activate&lt;br /&gt;
(...)[neissner@td110 ~]$ pip install additional_module1 additional_module2 ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
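&lt;br /&gt;
The same venv creation can also be scripted from Python with the standard '''venv''' module; a minimal sketch (the target path is a temporary placeholder, and pip bootstrapping is skipped to keep it fast):&lt;br /&gt;

```python
import tempfile
import venv
from pathlib import Path

# Hypothetical location; in practice use /path/to/env/your_env.
target = Path(tempfile.mkdtemp()) / "your_env"

# with_pip=False skips bootstrapping pip; drop it if you want pip installed.
venv.EnvBuilder(with_pip=False).create(target)

# The environment now contains its own interpreter and activation script.
print(sorted(p.name for p in (target / "bin").iterdir()))
```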
&lt;br /&gt;
For '''conda/mamba''' environments&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba create --prefix /path/to/env/your_env module1 module2 ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The list of modules (module1, module2, ...) is optional. For instance, for a python3 environment with scipy you would specify: ''python=3 scipy'' &lt;br /&gt;
&lt;br /&gt;
Now you should be able to activate your environment and install additional modules&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba activate /path/to/env/your_env&lt;br /&gt;
(...)[neissner@td110 ~]$ mamba install additional_module1 additional_module2 ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You can use pip install inside a mamba environment; however, resolving dependencies might require installing additional packages manually.&lt;br /&gt;
&lt;br /&gt;
== Conda / mamba configuration ==&lt;br /&gt;
&lt;br /&gt;
The behaviour of conda/mamba can be configured through the &amp;quot;$HOME/.condarc&amp;quot; file, described [https://docs.conda.io/projects/conda/en/latest/configuration.html here]. Some interesting parameters:&lt;br /&gt;
&lt;br /&gt;
* envs_dirs: The list of directories to search for named environments. E.g.: different locations where you created environments&lt;br /&gt;
&lt;br /&gt;
  envs_dirs:&lt;br /&gt;
    - /data/pic/scratch/torradeflot/envs&lt;br /&gt;
    - /data/astro/scratch/torradeflot/envs&lt;br /&gt;
    - /data/aai/scratch/torradeflot/envs&lt;br /&gt;
&lt;br /&gt;
* pkgs_dirs: Folder where to store conda packages&lt;br /&gt;
&lt;br /&gt;
  pkgs_dirs:&lt;br /&gt;
  - /data/aai/scratch_ssd/torradeflot/pkgs&lt;br /&gt;
  - /data/aai/scratch/torradeflot/pkgs&lt;br /&gt;
  - /data/pic/scratch/torradeflot/pkgs&lt;br /&gt;
&lt;br /&gt;
If `pkgs_dirs` and `envs_dirs` are on the same storage, conda will use hard links, thus saving disk space.&lt;br /&gt;
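&lt;br /&gt;
The hard-link optimization can be illustrated with a small Python sketch: two directory entries pointing at the same inode store the file's data only once, which is what conda does for package files shared between `pkgs_dirs` and `envs_dirs` (the paths below are placeholders in a temporary folder):&lt;br /&gt;

```python
import os
import tempfile
from pathlib import Path

base = Path(tempfile.mkdtemp())
pkg = base / "pkgs_dir" / "package_file"
env = base / "envs_dir" / "package_file"
pkg.parent.mkdir()
env.parent.mkdir()

pkg.write_bytes(b"x" * 1024)  # the cached package payload
os.link(pkg, env)             # a hard link, possible because both dirs share a filesystem

# Both names point at the same inode: st_nlink is 2 and no space is duplicated.
print(os.stat(pkg).st_ino == os.stat(env).st_ino, os.stat(pkg).st_nlink)
```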
&lt;br /&gt;
= Software of particular interest =&lt;br /&gt;
== SageMath ==&lt;br /&gt;
&lt;br /&gt;
[https://www.sagemath.org/ SageMath] is particularly interesting for Cosmology because it allows symbolic calculations, e.g. deriving the equations of motion for the scale factor starting from a customised space-time metric. &lt;br /&gt;
&lt;br /&gt;
=== Standard cosmology examples ===&lt;br /&gt;
&lt;br /&gt;
* The Friedmann equations for the FLRW solution of the Einstein equations.&lt;br /&gt;
You can find the corresponding Notebook in any PIC terminal at '''/data/astro/software/notebooks/FLRW_cosmology.ipynb'''&lt;br /&gt;
&lt;br /&gt;
* The notebook you can find at '''/data/astro/software/notebooks/FLRW_cosmology_solutions.ipynb''' uses known analytical solutions of the FLRW cosmology and produces this image for the evolution of the scale factor:&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot_Sage05.png|300px]]&lt;br /&gt;
&lt;br /&gt;
* The notebook you can find at '''/data/astro/software/notebooks/Interior_Schwarzschild.ipynb''' shows the formalism for the interior Schwarzschild metric and displays the solutions for density and pressure of a static celestial object that is sufficiently larger than its Schwarzschild radius. The pressure for an object with constant density is shown in the image:&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot_Sage06.png|300px]]&lt;br /&gt;
&lt;br /&gt;
=== Enabling SageMath environment in Jupyter ===&lt;br /&gt;
&lt;br /&gt;
If you have never initialized mamba, run:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ /data/astro/software/centos7/conda/mambaforge_4.14.0/bin/mamba init&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ conda config --set auto_activate_base false&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After that you can enable SageMath for its use in a Jupyter notebook session: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ mamba activate /data/astro/software/envs/sage&lt;br /&gt;
(/data/astro/software/envs/sage) [&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ python -m  ipykernel install --user --name=sage&lt;br /&gt;
....&lt;br /&gt;
(/data/astro/software/envs/sage) [&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ mamba deactivate&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This creates a file in your home directory, '''~/.local/share/jupyter/kernels/sage/kernel.json''',&lt;br /&gt;
which has to be modified to look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
 &amp;quot;argv&amp;quot;: [&lt;br /&gt;
  &amp;quot;/data/astro/software/envs/sage/bin/sage&amp;quot;,&lt;br /&gt;
  &amp;quot;--python&amp;quot;,&lt;br /&gt;
  &amp;quot;-m&amp;quot;,&lt;br /&gt;
  &amp;quot;sage.repl.ipython_kernel&amp;quot;,&lt;br /&gt;
  &amp;quot;-f&amp;quot;,&lt;br /&gt;
  &amp;quot;{connection_file}&amp;quot;&lt;br /&gt;
 ],&lt;br /&gt;
 &amp;quot;display_name&amp;quot;: &amp;quot;sage&amp;quot;,&lt;br /&gt;
 &amp;quot;language&amp;quot;: &amp;quot;sage&amp;quot;,&lt;br /&gt;
 &amp;quot;metadata&amp;quot;: {&lt;br /&gt;
  &amp;quot;debugger&amp;quot;: true&lt;br /&gt;
 }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Next time you go to your Jupyter dashboard you will find the sage environment listed there.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Dask =&lt;br /&gt;
Dask supports parallel computations in Python. The PIC Jupyterlab has an extension for launching&lt;br /&gt;
your own Dask clusters. For more information, see [[Dask|Dask documentation]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Using a singularity image as a jupyter kernel =&lt;br /&gt;
&lt;br /&gt;
Sometimes the software stack of a project is provided in the shape of a singularity image. It is then convenient to use this image as a kernel for the notebooks in jupyter.pic.es.&lt;br /&gt;
&lt;br /&gt;
The singularity image to be used as a kernel needs to fulfill some requirements, which differ depending on the programming language.&lt;br /&gt;
&lt;br /&gt;
== Python jupyter kernel in a singularity image ==&lt;br /&gt;
&lt;br /&gt;
The singularity image needs to have '''python''' and the '''ipykernel''' module installed. &lt;br /&gt;
&lt;br /&gt;
* Create the folder that will host the kernel definition&lt;br /&gt;
&lt;br /&gt;
  mkdir -p $HOME/.local/share/jupyter/kernels/singularity&lt;br /&gt;
&lt;br /&gt;
* Create the '''kernel.json''' file inside it with the following content:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 {&lt;br /&gt;
   &amp;quot;argv&amp;quot;: [&lt;br /&gt;
     &amp;quot;singularity&amp;quot;,&lt;br /&gt;
     &amp;quot;exec&amp;quot;,&lt;br /&gt;
     &amp;quot;--cleanenv&amp;quot;,&lt;br /&gt;
     &amp;quot;/path/to/the/singularity/image.sif&amp;quot;,&lt;br /&gt;
     &amp;quot;python&amp;quot;,&lt;br /&gt;
     &amp;quot;-m&amp;quot;,&lt;br /&gt;
     &amp;quot;ipykernel&amp;quot;,&lt;br /&gt;
     &amp;quot;-f&amp;quot;,&lt;br /&gt;
     &amp;quot;{connection_file}&amp;quot;&lt;br /&gt;
   ],&lt;br /&gt;
   &amp;quot;language&amp;quot;: &amp;quot;python&amp;quot;,&lt;br /&gt;
   &amp;quot;display_name&amp;quot;: &amp;quot;singularity-kernel&amp;quot;&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Refresh or start the jupyterlab interface and the singularity kernel should appear in the launcher tab.&lt;br /&gt;
&lt;br /&gt;
= GPUs =&lt;br /&gt;
&lt;br /&gt;
To identify the GPUs that are assigned to your job:&lt;br /&gt;
* Check the environment variable CUDA_VISIBLE_DEVICES: in a terminal, run &amp;quot;echo $CUDA_VISIBLE_DEVICES&amp;quot;. The variable contains a comma-separated list of GPU ids, which tells you how many GPUs are assigned to your job. If the variable does not exist, there are no GPUs assigned to the job.&lt;br /&gt;
&lt;br /&gt;
* List the GPUs with nvidia-smi: in a terminal, run &amp;quot;nvidia-smi -L&amp;quot; and look for the GPUs you have been assigned. Remember their indexes (integers from 0 to 7).&lt;br /&gt;
&lt;br /&gt;
* If only a single GPU is assigned, the two steps above can be combined with the following command:&lt;br /&gt;
&lt;br /&gt;
nvidia-smi -L | grep  $CUDA_VISIBLE_DEVICES&lt;br /&gt;
&lt;br /&gt;
[[File:check_gpu_id_highlighted.png]]&lt;br /&gt;
&lt;br /&gt;
* In the GPU dashboard, the GPUs are identified by their index.&lt;br /&gt;
&lt;br /&gt;
[[File:check_gpu_resources_highlighted.png]]&lt;br /&gt;
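&lt;br /&gt;
The CUDA_VISIBLE_DEVICES check above can also be done from Python; a small sketch using a hypothetical two-GPU assignment as HTCondor might set it:&lt;br /&gt;

```python
import os

def assigned_gpus(env=os.environ):
    """Return the list of GPU ids assigned to this job, or an empty list if none."""
    raw = env.get("CUDA_VISIBLE_DEVICES", "")
    return [gpu.strip() for gpu in raw.split(",") if gpu.strip()]

# Hypothetical assignment for a job with two GPUs, ids 3 and 5.
example_env = {"CUDA_VISIBLE_DEVICES": "3,5"}
print(assigned_gpus(example_env))  # ['3', '5']
print(assigned_gpus({}))           # [] -- no GPUs assigned
```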
&lt;br /&gt;
= Jupyterlab user guide =&lt;br /&gt;
&lt;br /&gt;
You can find the official documentation of the currently installed version of jupyterlab [https://jupyterlab.readthedocs.io/en/4.2.x/ here], where you will find instructions on how to &lt;br /&gt;
* [https://jupyterlab.readthedocs.io/en/4.2.x/user/commands.html Access the command palette]&lt;br /&gt;
* [https://jupyterlab.readthedocs.io/en/4.2.x/user/toc.html Build a Table Of Contents]&lt;br /&gt;
* [https://jupyterlab.readthedocs.io/en/4.2.x/user/debugger.html Debug your code]&lt;br /&gt;
&lt;br /&gt;
A set of non-official jupyterlab extensions is installed to provide additional functionality:&lt;br /&gt;
&lt;br /&gt;
== Jupytext ==&lt;br /&gt;
Pair your notebooks with text files to enhance version tracking.&lt;br /&gt;
https://jupytext.readthedocs.io&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
&lt;br /&gt;
Suppose you had a notebook (.ipynb file), tracked in a git repository, containing only the cell below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%matplotlib inline&lt;br /&gt;
import numpy as np&lt;br /&gt;
import matplotlib.pyplot as plt&lt;br /&gt;
plt.imshow(np.random.random([10, 10]))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Different executions of the cell would produce different images, and the images are embedded in a pseudo-binary format inside the notebook file. In this case, doing a '''git diff''' of the .ipynb file would produce a huge output (because the image changed), even if there wasn't any change in the code. It is thus convenient to sync the notebook with a text file (e.g. a .py script) using the jupytext extension and track this one with git. The outputs, including images, as well as some additional metadata, won't be added to the synced text file. So in the case of different executions of the same notebook, the diff will always be empty.&lt;br /&gt;
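&lt;br /&gt;
For illustration, jupytext's percent format marks each cell with a '''# %%''' line and keeps outputs and metadata out of the file entirely. A paired script for a similar cell (simplified here to a hypothetical stdlib-only example) would look like:&lt;br /&gt;

```python
# %% [markdown]
# Cells are delimited by percent markers; outputs never appear in this file,
# so re-running the notebook leaves the paired script unchanged.

# %%
import random

values = [random.random() for _ in range(10)]
print(len(values))
```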
&lt;br /&gt;
== Git ==&lt;br /&gt;
Sidebar GUI for git repository management:&lt;br /&gt;
https://github.com/jupyterlab/jupyterlab-git&lt;br /&gt;
&lt;br /&gt;
== Variable inspector ==&lt;br /&gt;
Variable inspection à la Matlab&lt;br /&gt;
https://github.com/jupyterlab-contrib/jupyterlab-variableInspector&lt;br /&gt;
&lt;br /&gt;
== Jupyter server proxy ==&lt;br /&gt;
&lt;br /&gt;
This extension is installed in PIC's jupyter environment. It lets you access network/web services running on the same host as the jupyterlab server from outside, through the &amp;quot;https://jupyter.pic.es/user/{username}/proxy/{port}&amp;quot; URL.&lt;br /&gt;
&lt;br /&gt;
Full documentation here: https://jupyter-server-proxy.readthedocs.io/en/latest/index.html&lt;br /&gt;
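&lt;br /&gt;
As an illustration of the kind of service the proxy can expose, the sketch below starts a tiny web server on a free local port; inside a real session it would then be reachable from outside at the proxy URL above (the handler and its payload are hypothetical):&lt;br /&gt;

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        # Any plain HTTP service works; dashboards and dev servers are typical.
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"hello from the notebook host")

    def log_message(self, *args):
        pass  # keep the notebook output clean

# Port 0 lets the OS pick a free port; note the chosen port for the proxy URL.
server = HTTPServer(("127.0.0.1", 0), Hello)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

body = urllib.request.urlopen(f"http://127.0.0.1:{port}/").read()
print(port, body)
server.shutdown()
```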
&lt;br /&gt;
= Code samples =&lt;br /&gt;
&lt;br /&gt;
A repository with sample code can be found here: https://gitlab.pic.es/services/code-samples/&lt;br /&gt;
&lt;br /&gt;
= Running notebooks through HTCondor =&lt;br /&gt;
After developing a notebook, you might want to run it with different configurations. The&lt;br /&gt;
following documentation explains  &lt;br /&gt;
&lt;br /&gt;
[[notebook_htcondor|how to run a notebook through HTCondor.]]&lt;br /&gt;
&lt;br /&gt;
= Troubleshooting =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Logs ==&lt;br /&gt;
&lt;br /&gt;
The log files for the jupyterlab server are stored in &amp;quot;~/.jupyter&amp;quot;. They are created once the jupyterlab server job has finished.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Clean workspaces ==&lt;br /&gt;
&lt;br /&gt;
Jupyterlab stores the workspace status in the &amp;quot;~/.jupyter/lab/workspaces&amp;quot; folder. If you want to start with a fresh (empty) workspace, delete all the content of this folder before launching the notebook.&lt;br /&gt;
&lt;br /&gt;
    cd ~/.jupyter/lab/workspaces&lt;br /&gt;
    rm *&lt;br /&gt;
&lt;br /&gt;
= Known errors =&lt;br /&gt;
&lt;br /&gt;
== Error 500: Internal Server Error ==&lt;br /&gt;
&lt;br /&gt;
This is a generic error meaning that the jupyterlab server failed. This can happen for different reasons:&lt;br /&gt;
&lt;br /&gt;
* Your HOME folder is full. Log in to &amp;quot;ui.pic.es&amp;quot; and run &amp;quot;quota&amp;quot; to check the usage vs quota. If it is full you'll have to free up space.&lt;br /&gt;
&lt;br /&gt;
== 504 Gateway timeout ==&lt;br /&gt;
&lt;br /&gt;
The notebook job is running in HTCondor but the user cannot access the notebook server. Ultimately a 504 error is received.&lt;br /&gt;
&lt;br /&gt;
This is probably because there is some error when starting the jupyterlab server. First of all, shut down the notebook server and [[#Logs|check the logs]] to better identify the problem. If you don't see the source of the error, try to [[#Clean_workspaces|clean the workspaces]] and launch a notebook again.&lt;br /&gt;
&lt;br /&gt;
== Install ROOT ==&lt;br /&gt;
&lt;br /&gt;
Environment creation and kernel installation&lt;br /&gt;
&lt;br /&gt;
    $ micromamba create -p /data/pic/scratch/torradeflot/envs/mcdata root ipykernel&lt;br /&gt;
    $ micromamba activate mcdata&lt;br /&gt;
    $ python -m ipykernel install --user --name mcdata&lt;br /&gt;
&lt;br /&gt;
After doing this ROOT can be imported from a python shell, but it does not work from a notebook.&lt;br /&gt;
&lt;br /&gt;
* ROOT uses JIT compilation with Cling: https://root.cern/cling/&lt;br /&gt;
* conda provides its own set of compilation tools: gcc, gxx, gfortran&lt;br /&gt;
* Some environment variables are necessary to ensure that these two pieces work well together:&lt;br /&gt;
** PATH: to be able to find the compiler tools&lt;br /&gt;
** CONDA_BUILD_SYSROOT: needed to configure the compiler call&lt;br /&gt;
&lt;br /&gt;
These environment variables are not propagated to the notebook, because they are set by .bashrc, conda activate, etc., so we need to set them explicitly in the notebook:&lt;br /&gt;
&lt;br /&gt;
     import os&lt;br /&gt;
     import sys&lt;br /&gt;
     from pathlib import Path&lt;br /&gt;
     bin_dir = Path(sys.executable).parent&lt;br /&gt;
     os.environ['PATH'] = f'{bin_dir}:{os.environ[&amp;quot;PATH&amp;quot;]}'&lt;br /&gt;
     os.environ['CONDA_BUILD_SYSROOT'] = str(bin_dir.parent / 'x86_64-conda-linux-gnu/sysroot')&lt;br /&gt;
&lt;br /&gt;
== Loading libraries is very slow ==&lt;br /&gt;
Conda environments stored on hard-drive-backed storage can be extremely slow to load. If you encounter this problem, please ask &lt;br /&gt;
your support contact for a &amp;quot;SSD scratch&amp;quot; location to store environments. Currently this service is being tested and&lt;br /&gt;
deployed as needed.&lt;br /&gt;
&lt;br /&gt;
== Spawn failed: The 'ip' trait of a PICCondorSpawner instance expected a unicode string, not the NoneType None ==&lt;br /&gt;
&lt;br /&gt;
Jupyterhub could not get the host name from HTCondor's stdout, because it didn't match the expected regular expression.&lt;br /&gt;
&lt;br /&gt;
This error happens randomly from time to time; it does not imply any major problem in any of the services.&lt;br /&gt;
&lt;br /&gt;
Try to request a new notebook server.&lt;br /&gt;
&lt;br /&gt;
== 403 : Forbidden. XSRF cookie does not match POST argument ==&lt;br /&gt;
&lt;br /&gt;
The value of the &amp;quot;_xsrf&amp;quot; cookie sent by the browser does not match the expected value. This could be for many reasons: temporary high load on the server, race conditions, temporary network instability, many open tabs in the browser, etc.&lt;br /&gt;
&lt;br /&gt;
In general it can be solved by closing all tabs pointing to &amp;quot;jupyter.pic.es&amp;quot;, clearing the cookies and connecting again to &amp;quot;jupyter.pic.es&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
== Proper usage of X509 based proxies ==&lt;br /&gt;
&lt;br /&gt;
We recently found that the usage of proxies within a Jupyter session might cause problems, because the environment changes certain standard locations such as '''/tmp'''.&lt;br /&gt;
&lt;br /&gt;
For correct functioning, please create the proxy in the following way (example for Virgo):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ /bin/voms-proxy-init --voms virgo:/virgo/ligo --out ./x509up_u$(id -u)&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ export X509_USER_PROXY=./x509up_u$(id -u)&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ ls /cvmfs/ligo.osgstorage.org&lt;br /&gt;
ls: cannot access /cvmfs/ligo.osgstorage.org: Permission denied&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here the proxy cannot be located properly. Therefore we have to put the complete path into the variable:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ pwd&lt;br /&gt;
/nfs/pic.es/user/&amp;lt;letter&amp;gt;/&amp;lt;user&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ export X509_USER_PROXY=/nfs/pic.es/user/&amp;lt;letter&amp;gt;/&amp;lt;user&amp;gt;/x509up_u$(id -u)&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ ls /cvmfs/ligo.osgstorage.org&lt;br /&gt;
frames  powerflux  pycbc  test_access&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Eriksen</name></author>
	</entry>
	<entry>
		<id>https://pwiki.pic.es/index.php?title=JupyterHub&amp;diff=1294</id>
		<title>JupyterHub</title>
		<link rel="alternate" type="text/html" href="https://pwiki.pic.es/index.php?title=JupyterHub&amp;diff=1294"/>
		<updated>2025-12-20T14:35:56Z</updated>

		<summary type="html">&lt;p&gt;Eriksen: /* Known errors */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;tldr; Connect to https://jupyter.pic.es/ . Enjoy!&lt;br /&gt;
= Introduction =&lt;br /&gt;
&lt;br /&gt;
PIC offers a service for running Jupyter notebooks on CPU or GPU resources. This service is primarily thought for code developing or prototyping rather than data processing. The usage is similar to running notebooks on your personal computer but offers the advantage of developing and testing your code on different hardware configurations, as well as facilitating the scalability of the code since it is being tested in the same environment in which it would run on a mass scale. &lt;br /&gt;
&lt;br /&gt;
Since the service is strictly thought for development and small scale testing tasks, a shutdown policy for the sessions has been put in place:&lt;br /&gt;
&lt;br /&gt;
# The maximum duration for a session is 48h.&lt;br /&gt;
# After an idle period of 2 hours, the session will be closed. &lt;br /&gt;
&lt;br /&gt;
In practice that  means that you should estimate the test data volume that you work with during a session to be able to be processed in less than 48 hours.&lt;br /&gt;
&lt;br /&gt;
== How to connect to the service ==&lt;br /&gt;
&lt;br /&gt;
Got to [https://jupyter.pic.es jupyter.pic.es] to see your login screen.&lt;br /&gt;
&lt;br /&gt;
[[File:login.png|700px|Login screen]]&lt;br /&gt;
&lt;br /&gt;
Sign in with your PIC user credentials. This will prompt you to the following screen.&lt;br /&gt;
&lt;br /&gt;
[[File:ScreenshotJupyterSpawn.png|700px|current]]&lt;br /&gt;
&lt;br /&gt;
Here you can choose the hardware configuration for your Jupyter session. Also, you have to choose the experiment (project) you are working on during the Jupyter session. After choosing a configuration and pressing start the next screen will show you the progress of the initialisation process. Keep in mind that a job containing your Jupyter session is actually sent to the HTCondor queuing system and waiting for available resources before being started. This usually takes less than a minute but can take up to a few depending on our resource usage.&lt;br /&gt;
&lt;br /&gt;
[[File:screen02.png|900px]]&lt;br /&gt;
&lt;br /&gt;
In the next screen you can choose the tool that you want to use for your work: a Python notebook, a Python console or a plain bash terminal.&lt;br /&gt;
For the Python environment (either notebook or environment) you have two default options:&lt;br /&gt;
* the ipykernel version of Python 3&lt;br /&gt;
* the XPython version of Python 3.9, this one allows you to use the integrated debugging module.&lt;br /&gt;
&lt;br /&gt;
== Virtual desktop ==&lt;br /&gt;
Further you see an icon with a &amp;quot;D&amp;quot; - desktop, this one starts a VNC session that allows the use of programs with graphical user interfaces.&lt;br /&gt;
&lt;br /&gt;
Also, recently you can find the icon of Visual Studio, an integrated development environment.&lt;br /&gt;
&lt;br /&gt;
[[File:ScreenshotJupyterlab20231103.png|700px]]&lt;br /&gt;
&lt;br /&gt;
Your python environments should appear under Notebook and Console headers. In a later section we will show you how to create a new environment and to remove an existing one.&lt;br /&gt;
&lt;br /&gt;
== Terminate your session and logout ==&lt;br /&gt;
&lt;br /&gt;
It is important that you terminate your session before you log out. In order to do so, go to the top page menu &amp;quot;'''File''' -&amp;gt; '''Hub Control Panel'''&amp;quot; and you will see the following screen.&lt;br /&gt;
&lt;br /&gt;
[[File:screen04.png|600px]]&lt;br /&gt;
&lt;br /&gt;
Here, click on the '''Stop My Server''' button. After that you can log out by clicking the '''Logout''' button in the right upper corner.&lt;br /&gt;
&lt;br /&gt;
=  Python virtual environments =&lt;br /&gt;
&lt;br /&gt;
This section covers the use of Python virtual environments with Jupyter.&lt;br /&gt;
&lt;br /&gt;
== Prebuilt environments == &lt;br /&gt;
&lt;br /&gt;
PIC's JupyterHub service comes with a collection of prebuilt environments located at '''/data/jupyter/software/envs'''.&lt;br /&gt;
&lt;br /&gt;
The master environment, located at '''/data/jupyter/software/envs/master''', is the one used to start the JupyterLab service and the default for new notebooks.&lt;br /&gt;
&lt;br /&gt;
This is a non-exhaustive list of the included packages:&lt;br /&gt;
  - astropy=6.1.0&lt;br /&gt;
  - bokeh=3.4.1&lt;br /&gt;
  - dash=2.17.0&lt;br /&gt;
  - dask=2024.5.1&lt;br /&gt;
  - findspark=2.0.1&lt;br /&gt;
  - matplotlib=3.8.4&lt;br /&gt;
  - numpy=1.26.4&lt;br /&gt;
  - pandas=2.2.2&lt;br /&gt;
  - pillow=10.3.0&lt;br /&gt;
  - plotly=5.22.0&lt;br /&gt;
  - pyhive=0.7.0&lt;br /&gt;
  - python=3.12&lt;br /&gt;
  - pywavelets=1.4.1&lt;br /&gt;
  - scikit-image=0.22.0&lt;br /&gt;
  - scikit-learn=1.5.0&lt;br /&gt;
  - scipy=1.13.1&lt;br /&gt;
  - seaborn=0.13.2&lt;br /&gt;
  - statsmodels=0.14.2&lt;br /&gt;
&lt;br /&gt;
== Initialize conda (we highly recommend the use of mamba/micromamba) ==&lt;br /&gt;
&lt;br /&gt;
Before using conda/mamba in your bash session, you have to initialize it.&lt;br /&gt;
* For access to an available conda/mamba installation, please get in contact with your project liaison at PIC, who will give you the actual value for the '''/path/to/conda/mamba''' placeholder.&lt;br /&gt;
* If you want to use your own conda/mamba/micromamba installation, there are two recommended options:&lt;br /&gt;
** '''miniforge''': a distribution with conda and mamba executables in a minimal base environment, instructions [https://github.com/conda-forge/miniforge here]&lt;br /&gt;
** '''micromamba''': a self-contained executable (micromamba) with no base environment, instructions [https://mamba.readthedocs.io/en/latest/installation/micromamba-installation.html here]&lt;br /&gt;
&lt;br /&gt;
Log onto Jupyter and start a session. On the homepage of your Jupyter session, click on the terminal button on the session dashboard on the right to open a bash terminal.&lt;br /&gt;
&lt;br /&gt;
In order to use conda/mamba/micromamba you need to initialize the shell. The initialization can be made persistent, which makes some changes to your '''~/.bashrc''' file, or you can do it each time you want to use it.&lt;br /&gt;
&lt;br /&gt;
For a one-off initialization of the current shell, run (adapting the path to your installation):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
eval &amp;quot;$(/data/astro/software/miniforge3/bin/conda shell.bash hook)&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, if you are using miniforge and want to make the initialization persistent:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/data/astro/software/miniforge3/bin/mamba init&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This changes the .bashrc file in your home directory so that the base environment is activated on login.&lt;br /&gt;
To avoid activating the base environment every time you log on to a node, run:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ conda config --set auto_activate_base false&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Link an existing environment to Jupyter ==&lt;br /&gt;
&lt;br /&gt;
You can find instructions on how to create your own environments, e.g. [[#Create_virtual_environments_with_venv_or_conda | here]].&lt;br /&gt;
&lt;br /&gt;
Log into Jupyter, start a session. From the session dashboard choose the bash terminal.&lt;br /&gt;
&lt;br /&gt;
Inside the terminal, activate your environment.&lt;br /&gt;
&lt;br /&gt;
For '''conda/mamba''' environments:&lt;br /&gt;
* if you created the environment without a prefix:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba activate environment&lt;br /&gt;
(...) [neissner@td110 ~]$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The parentheses (...) in front of your bash prompt show the name of your environment.&lt;br /&gt;
* if you created the environment with a prefix:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba activate /path/to/environment&lt;br /&gt;
(...) [neissner@td110 ~]$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The parentheses (...) in front of your bash prompt show the absolute path of your environment.&lt;br /&gt;
&lt;br /&gt;
For '''venv''' environments:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ source /path/to/environment/bin/activate&lt;br /&gt;
(...) [neissner@td110 ~]$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Link the environment to a Jupyter kernel. For both '''conda/mamba''' and '''venv''' environments:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(...) [neissner@td110 ~]$ python -m ipykernel install --user --name=whatever_kernel_name&lt;br /&gt;
Installed kernelspec whatever_kernel_name in &lt;br /&gt;
                         /nfs/pic.es/user/n/neissner/.local/share/jupyter/kernels/whatever_kernel_name&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you don't have the '''ipykernel''' module installed in your environment you may receive an error message like the one below when trying to run the previous command.&lt;br /&gt;
&amp;lt;pre&amp;gt;No module named ipykernel&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If this is the case, you need to install it by running: '''pip install ipykernel'''&lt;br /&gt;
&lt;br /&gt;
Deactivate your environment. &lt;br /&gt;
&lt;br /&gt;
For conda:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(...) [neissner@td110 ~]$ mamba deactivate&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For venv:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(...) [neissner@td110 ~]$ deactivate&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you can exit the terminal. After refreshing the Jupyter page, your kernel '''whatever_kernel_name''' appears in the dashboard. In this example, '''test''' has been used as the kernel name.&lt;br /&gt;
&lt;br /&gt;
[[File:screen05.png|700px]]&lt;br /&gt;
&lt;br /&gt;
== Unlink an environment from Jupyter ==&lt;br /&gt;
Log onto Jupyter, start a session and from the session dashboard choose the bash terminal. To remove your environment/kernel from Jupyter run:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ jupyter kernelspec uninstall whatever_kernel_name&lt;br /&gt;
Kernel specs to remove:&lt;br /&gt;
  whatever_kernel_name     /nfs/pic.es/user/n/neissner/.local/share/jupyter/kernels/whatever_kernel_name&lt;br /&gt;
Remove 1 kernel specs [y/N]: y&lt;br /&gt;
[RemoveKernelSpec] Removed /nfs/pic.es/user/n/neissner/.local/share/jupyter/kernels/whatever_kernel_name&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Keep in mind that, although not available in Jupyter anymore, the environment still exists. Whenever you need it, you can link it again.&lt;br /&gt;
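&lt;br /&gt;
You can list the kernels currently linked for your user with '''jupyter kernelspec list'''. The equivalent logic, as a small Python sketch over the user's kernels folder (the '''user_kernels''' helper name is illustrative):&lt;br /&gt;

```python
from pathlib import Path

def user_kernels(kernels_dir=None):
    """Return the names of kernels linked for the current user.

    By default it scans ~/.local/share/jupyter/kernels, which is where
    'python -m ipykernel install --user' places kernelspecs.
    """
    kdir = Path(kernels_dir) if kernels_dir else Path.home() / ".local/share/jupyter/kernels"
    if not kdir.is_dir():
        return []
    # A kernel is a subfolder containing a kernel.json definition
    return sorted(p.name for p in kdir.iterdir() if (p / "kernel.json").is_file())
```

Unlinking a kernel only removes this folder, which is why the environment itself survives and can be linked again.&lt;br /&gt;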
&lt;br /&gt;
== Create virtual environments with venv or conda ==&lt;br /&gt;
&lt;br /&gt;
Before creating a new environment, please get in contact with your project liaison at PIC, as there may already be a suitable environment in place for your needs.&lt;br /&gt;
&lt;br /&gt;
If none of the existing environments suits your needs, you can create a new environment.&lt;br /&gt;
First, create a directory in a suitable place to store the environment. For single-user environments, place them in your home under ~/env. For environments that will be shared with other project users, contact your project liaison and ask for a path in a shared storage volume that is visible to all of them.&lt;br /&gt;
&lt;br /&gt;
Once you have the location (e.g. /path/to/env), create the environment with the following commands:&lt;br /&gt;
&lt;br /&gt;
For '''venv''' environments '''(recommended)'''&lt;br /&gt;
&lt;br /&gt;
To create your_env at /path/to/env/your_env:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ cd /path/to/env&lt;br /&gt;
[neissner@td110 ~]$ python3 -m venv your_env&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Now you should be able to activate your environment and install additional modules:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ cd /path/to/env&lt;br /&gt;
[neissner@td110 ~]$ source your_env/bin/activate&lt;br /&gt;
(...)[neissner@td110 ~]$ pip install additional_module1 additional_module2 ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For '''conda/mamba''' environments&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba create --prefix /path/to/env/your_env module1 module2 ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The list of modules (module1, module2, ...) is optional. For instance, for a python3 environment with scipy you would specify: ''python=3 scipy''&lt;br /&gt;
&lt;br /&gt;
Now you should be able to activate your environment and install additional modules:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba activate /path/to/env/your_env&lt;br /&gt;
(...)[neissner@td110 ~]$ mamba install additional_module1 additional_module2 ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You can use pip install inside a mamba environment; however, resolving dependencies may require installing additional packages manually.&lt;br /&gt;
&lt;br /&gt;
== Conda / mamba configuration ==&lt;br /&gt;
&lt;br /&gt;
The behaviour of conda/mamba can be configured through the &amp;quot;$HOME/.condarc&amp;quot; file, described [https://docs.conda.io/projects/conda/en/latest/configuration.html here]. Some interesting parameters:&lt;br /&gt;
&lt;br /&gt;
* envs_dirs: the list of directories to search for named environments, e.g. the different locations where you created environments:&lt;br /&gt;
&lt;br /&gt;
  envs_dirs:&lt;br /&gt;
    - /data/pic/scratch/torradeflot/envs&lt;br /&gt;
    - /data/astro/scratch/torradeflot/envs&lt;br /&gt;
    - /data/aai/scratch/torradeflot/envs&lt;br /&gt;
&lt;br /&gt;
* pkgs_dirs: the directories where conda stores downloaded packages:&lt;br /&gt;
&lt;br /&gt;
  pkgs_dirs:&lt;br /&gt;
    - /data/aai/scratch_ssd/torradeflot/pkgs&lt;br /&gt;
    - /data/aai/scratch/torradeflot/pkgs&lt;br /&gt;
    - /data/pic/scratch/torradeflot/pkgs&lt;br /&gt;
&lt;br /&gt;
If `pkgs_dirs` and `envs_dirs` are on the same storage, conda will use hard links, thus optimizing disk space.&lt;br /&gt;
&lt;br /&gt;
= Proper usage of X509 based proxies =&lt;br /&gt;
&lt;br /&gt;
We recently found that using X509 proxies within a Jupyter session can cause problems, because the environment changes certain standard locations such as '''/tmp'''.&lt;br /&gt;
&lt;br /&gt;
For correct functioning, please create the proxy the following way (example for Virgo):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ /bin/voms-proxy-init --voms virgo:/virgo/ligo --out ./x509up_u$(id -u)&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ export X509_USER_PROXY=./x509up_u$(id -u)&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ ls /cvmfs/ligo.osgstorage.org&lt;br /&gt;
ls: cannot access /cvmfs/ligo.osgstorage.org: Permission denied&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here the proxy cannot be located properly. Therefore we have to put the complete path into the variable:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ pwd&lt;br /&gt;
/nfs/pic.es/user/&amp;lt;letter&amp;gt;/&amp;lt;user&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ export X509_USER_PROXY=/nfs/pic.es/user/&amp;lt;letter&amp;gt;/&amp;lt;user&amp;gt;/x509up_u$(id -u)&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ ls /cvmfs/ligo.osgstorage.org&lt;br /&gt;
frames  powerflux  pycbc  test_access&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Software of particular interest =&lt;br /&gt;
== SageMath ==&lt;br /&gt;
&lt;br /&gt;
[https://www.sagemath.org/ SageMath] is particularly interesting for Cosmology because it allows symbolic calculations, e.g. deriving the equations of motion for the scale factor starting from a customised space-time metric.&lt;br /&gt;
&lt;br /&gt;
=== Standard cosmology examples ===&lt;br /&gt;
&lt;br /&gt;
* The Friedmann equations for the FLRW solution of the Einstein equations.&lt;br /&gt;
You can find the corresponding Notebook in any PIC terminal at '''/data/astro/software/notebooks/FLRW_cosmology.ipynb'''&lt;br /&gt;
&lt;br /&gt;
* The notebook you can find at '''/data/astro/software/notebooks/FLRW_cosmology_solutions.ipynb''' uses known analytical solutions of the FLRW cosmology and produces this image for the evolution of the scale factor:&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot_Sage05.png|300px]]&lt;br /&gt;
&lt;br /&gt;
* The notebook you can find at '''/data/astro/software/notebooks/Interior_Schwarzschild.ipynb''' shows the formalism for the interior Schwarzschild metric and displays the solutions for density and pressure of a static celestial object that is sufficiently larger than its Schwarzschild radius. The pressure for an object with constant density is shown in the image:&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot_Sage06.png|300px]]&lt;br /&gt;
&lt;br /&gt;
=== Enabling SageMath environment in Jupyter ===&lt;br /&gt;
&lt;br /&gt;
If you have never initialized mamba, run:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ /data/astro/software/centos7/conda/mambaforge_4.14.0/bin/mamba init&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ conda config --set auto_activate_base false&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After that you can enable SageMath for its use in a Jupyter notebook session: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ mamba activate /data/astro/software/envs/sage&lt;br /&gt;
(/data/astro/software/envs/sage) [&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ python -m ipykernel install --user --name=sage&lt;br /&gt;
....&lt;br /&gt;
(/data/astro/software/envs/sage) [&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ mamba deactivate&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This creates a file in your home, '''~/.local/share/jupyter/kernels/sage/kernel.json''',&lt;br /&gt;
which has to be modified to look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
 &amp;quot;argv&amp;quot;: [&lt;br /&gt;
  &amp;quot;/data/astro/software/envs/sage/bin/sage&amp;quot;,&lt;br /&gt;
  &amp;quot;--python&amp;quot;,&lt;br /&gt;
  &amp;quot;-m&amp;quot;,&lt;br /&gt;
  &amp;quot;sage.repl.ipython_kernel&amp;quot;,&lt;br /&gt;
  &amp;quot;-f&amp;quot;,&lt;br /&gt;
  &amp;quot;{connection_file}&amp;quot;&lt;br /&gt;
 ],&lt;br /&gt;
 &amp;quot;display_name&amp;quot;: &amp;quot;sage&amp;quot;,&lt;br /&gt;
 &amp;quot;language&amp;quot;: &amp;quot;sage&amp;quot;,&lt;br /&gt;
 &amp;quot;metadata&amp;quot;: {&lt;br /&gt;
  &amp;quot;debugger&amp;quot;: true&lt;br /&gt;
 }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Next time you go to your Jupyter dashboard you will find the sage environment listed there.&lt;br /&gt;
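&lt;br /&gt;
Editing kernel.json by hand is error-prone. A minimal Python sketch that applies the same modification programmatically (the '''patch_sage_kernelspec''' helper name is illustrative; the default sage path mirrors the example above):&lt;br /&gt;

```python
import json
from pathlib import Path

def patch_sage_kernelspec(path, sage_bin="/data/astro/software/envs/sage/bin/sage"):
    """Rewrite a kernel.json so the kernel starts through 'sage --python'."""
    kernel_file = Path(path)
    spec = json.loads(kernel_file.read_text())
    # Launch the kernel through sage so the sage preparser and libraries load
    spec["argv"] = [
        sage_bin,
        "--python",
        "-m",
        "sage.repl.ipython_kernel",
        "-f",
        "{connection_file}",
    ]
    spec["display_name"] = "sage"
    spec["language"] = "sage"
    spec.setdefault("metadata", {})["debugger"] = True
    kernel_file.write_text(json.dumps(spec, indent=1))

# Example (adjust to your home):
# patch_sage_kernelspec("~/.local/share/jupyter/kernels/sage/kernel.json")
```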
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Dask =&lt;br /&gt;
Dask supports parallel computations in Python. The PIC Jupyterlab has an extension for launching&lt;br /&gt;
your own Dask clusters. For more information, see [[Dask|Dask documentation]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Using a singularity image as a jupyter kernel =&lt;br /&gt;
&lt;br /&gt;
Sometimes the software stack of a project is provided in the shape of a singularity image; it is then convenient to use this image as a kernel for the notebooks on jupyter.pic.es.&lt;br /&gt;
&lt;br /&gt;
The singularity image to be used as a kernel needs to fulfill some requirements, which differ depending on the programming language.&lt;br /&gt;
&lt;br /&gt;
== Python jupyter kernel in a singularity image ==&lt;br /&gt;
&lt;br /&gt;
The singularity image needs to have '''python''' and the '''ipykernel''' module installed.&lt;br /&gt;
&lt;br /&gt;
* Create the folder that will host the kernel definition&lt;br /&gt;
&lt;br /&gt;
  mkdir -p $HOME/.local/share/jupyter/kernels/singularity&lt;br /&gt;
&lt;br /&gt;
* Create the '''kernel.json''' file inside it with the following content:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 {&lt;br /&gt;
   &amp;quot;argv&amp;quot;: [&lt;br /&gt;
     &amp;quot;singularity&amp;quot;,&lt;br /&gt;
     &amp;quot;exec&amp;quot;,&lt;br /&gt;
     &amp;quot;--cleanenv&amp;quot;,&lt;br /&gt;
     &amp;quot;/path/to/the/singularity/image.sif&amp;quot;,&lt;br /&gt;
     &amp;quot;python&amp;quot;,&lt;br /&gt;
     &amp;quot;-m&amp;quot;,&lt;br /&gt;
     &amp;quot;ipykernel&amp;quot;,&lt;br /&gt;
     &amp;quot;-f&amp;quot;,&lt;br /&gt;
     &amp;quot;{connection_file}&amp;quot;&lt;br /&gt;
   ],&lt;br /&gt;
   &amp;quot;language&amp;quot;: &amp;quot;python&amp;quot;,&lt;br /&gt;
   &amp;quot;display_name&amp;quot;: &amp;quot;singularity-kernel&amp;quot;&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Refresh or start the JupyterLab interface and the singularity kernel should appear in the launcher tab.&lt;br /&gt;
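&lt;br /&gt;
The two steps above can be scripted. A Python sketch that creates the folder and writes the kernel.json shown above (the '''write_singularity_kernelspec''' helper name is illustrative; adjust the image path to your case):&lt;br /&gt;

```python
import json
from pathlib import Path

def write_singularity_kernelspec(image, name="singularity", kernels_dir=None):
    """Create ~/.local/share/jupyter/kernels/&lt;name&gt;/kernel.json for a .sif image."""
    kernels_dir = Path(kernels_dir or Path.home() / ".local/share/jupyter/kernels")
    kdir = kernels_dir / name
    kdir.mkdir(parents=True, exist_ok=True)
    spec = {
        # Run the ipykernel installed inside the image, with a clean environment
        "argv": [
            "singularity", "exec", "--cleanenv", str(image),
            "python", "-m", "ipykernel", "-f", "{connection_file}",
        ],
        "language": "python",
        "display_name": f"{name}-kernel",
    }
    kernel_file = kdir / "kernel.json"
    kernel_file.write_text(json.dumps(spec, indent=2))
    return kernel_file
```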
&lt;br /&gt;
= GPUs =&lt;br /&gt;
&lt;br /&gt;
The way to identify the GPUs that are assigned to your job is:&lt;br /&gt;
* check the environment variable CUDA_VISIBLE_DEVICES: in a terminal, run &amp;quot;echo $CUDA_VISIBLE_DEVICES&amp;quot;. The variable contains a comma-separated list of GPU ids, which already tells you how many GPUs are assigned to your job. If the variable does not exist, no GPUs are assigned to the job&lt;br /&gt;
&lt;br /&gt;
* list the GPUs with nvidia-smi: in a terminal, run &amp;quot;nvidia-smi -L&amp;quot; and look for the GPUs you have been assigned. Remember their indexes (integers from 0 to 7)&lt;br /&gt;
&lt;br /&gt;
* If you have a single assigned GPU, the two steps above can be combined into the following command:&lt;br /&gt;
&lt;br /&gt;
nvidia-smi -L | grep $CUDA_VISIBLE_DEVICES&lt;br /&gt;
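&lt;br /&gt;
With more than one assigned GPU, the grep above does not work directly, since CUDA_VISIBLE_DEVICES holds a comma-separated list. A small Python sketch that parses the variable (the '''assigned_gpus''' helper name is illustrative):&lt;br /&gt;

```python
import os

def assigned_gpus():
    """Return the list of GPU ids assigned to this job, [] if none."""
    value = os.environ.get("CUDA_VISIBLE_DEVICES", "")
    # e.g. "2,5" becomes ["2", "5"]; an unset variable means no GPUs
    return [gpu.strip() for gpu in value.split(",") if gpu.strip()]
```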
&lt;br /&gt;
[[File:check_gpu_id_highlighted.png]]&lt;br /&gt;
&lt;br /&gt;
* in the GPU dashboard, the GPUs are identified by their index&lt;br /&gt;
&lt;br /&gt;
[[File:check_gpu_resources_highlighted.png]]&lt;br /&gt;
&lt;br /&gt;
= Jupyterlab user guide =&lt;br /&gt;
&lt;br /&gt;
You can find the official documentation of the installed version of JupyterLab [https://jupyterlab.readthedocs.io/en/4.2.x/ here], where you will find instructions on how to&lt;br /&gt;
* [https://jupyterlab.readthedocs.io/en/4.2.x/user/commands.html Access the command palette]&lt;br /&gt;
* [https://jupyterlab.readthedocs.io/en/4.2.x/user/toc.html Build a Table Of Contents]&lt;br /&gt;
* [https://jupyterlab.readthedocs.io/en/4.2.x/user/debugger.html Debug your code]&lt;br /&gt;
&lt;br /&gt;
A set of non-official JupyterLab extensions is installed to provide additional functionality.&lt;br /&gt;
&lt;br /&gt;
== Jupytext ==&lt;br /&gt;
Pair your notebooks with text files to enhance version tracking.&lt;br /&gt;
https://jupytext.readthedocs.io&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
&lt;br /&gt;
If you had a notebook (.ipynb file) tracked in a git repository, containing only the cell below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%matplotlib inline&lt;br /&gt;
import numpy as np&lt;br /&gt;
import matplotlib.pyplot as plt&lt;br /&gt;
plt.imshow(np.random.random([10, 10]))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Different executions of the cell produce different images, and the images are embedded in a pseudo-binary format inside the notebook file. In this case, doing a '''git diff''' of the .ipynb file produces a huge output (because the image changed), even if there was no change in the code. It is thus convenient to sync the notebook with a text file (e.g. a .py script) using the jupytext extension and track that file with git. The outputs, including images, as well as some additional metadata, are not added to the synced text file, so for different executions of the same notebook the diff will always be empty.&lt;br /&gt;
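&lt;br /&gt;
The point can be illustrated with the standard library alone (this is not jupytext itself, just a sketch of why stripping outputs makes diffs stable):&lt;br /&gt;

```python
import json

def strip_outputs(notebook_json):
    """Return notebook JSON with code-cell outputs and execution counts removed."""
    nb = json.loads(notebook_json)
    for cell in nb.get("cells", []):
        if cell.get("cell_type") == "code":
            # Outputs hold the embedded (often base64) images; drop them
            cell["outputs"] = []
            cell["execution_count"] = None
    return json.dumps(nb, sort_keys=True)
```

Two runs of the same notebook differ only in their outputs, so after stripping them the two versions compare equal.&lt;br /&gt;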
&lt;br /&gt;
== Git ==&lt;br /&gt;
Sidebar GUI for git repository management:&lt;br /&gt;
https://github.com/jupyterlab/jupyterlab-git&lt;br /&gt;
&lt;br /&gt;
== Variable inspector ==&lt;br /&gt;
Variable inspection à la Matlab&lt;br /&gt;
https://github.com/jupyterlab-contrib/jupyterlab-variableInspector&lt;br /&gt;
&lt;br /&gt;
== Jupyter server proxy ==&lt;br /&gt;
&lt;br /&gt;
This extension is installed in PIC's Jupyter environment. It allows access from outside to network/web services running on the same host as the JupyterLab server, through the &amp;quot;https://jupyter.pic.es/user/{username}/proxy/{port}&amp;quot; URL.&lt;br /&gt;
&lt;br /&gt;
Full documentation here: https://jupyter-server-proxy.readthedocs.io/en/latest/index.html&lt;br /&gt;
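&lt;br /&gt;
For example, a web service listening on port 8080 inside your session becomes reachable at https://jupyter.pic.es/user/&amp;lt;username&amp;gt;/proxy/8080/. A trivial Python sketch of the URL scheme (the '''proxy_url''' helper name is illustrative):&lt;br /&gt;

```python
def proxy_url(username, port, base="https://jupyter.pic.es"):
    """Build the jupyter-server-proxy URL for a service listening on `port`."""
    return f"{base}/user/{username}/proxy/{port}/"
```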
&lt;br /&gt;
= Code samples =&lt;br /&gt;
&lt;br /&gt;
A repository with sample code can be found here: https://gitlab.pic.es/services/code-samples/&lt;br /&gt;
&lt;br /&gt;
= Running notebooks through HTCondor =&lt;br /&gt;
After developing a notebook, you might want to run it with different configurations. The&lt;br /&gt;
following documentation explains  &lt;br /&gt;
&lt;br /&gt;
[[notebook_htcondor|how to run a notebook through HTCondor.]]&lt;br /&gt;
&lt;br /&gt;
= Troubleshooting =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Logs ==&lt;br /&gt;
&lt;br /&gt;
The log files for the JupyterLab server are stored in &amp;quot;~/.jupyter&amp;quot;. The log files are created once the JupyterLab server job has finished.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Clean workspaces ==&lt;br /&gt;
&lt;br /&gt;
Jupyterlab stores the workspace status in the &amp;quot;~/.jupyter/lab/workspaces&amp;quot; folder. If you want to start with a fresh (empty) workspace, delete all the content of this folder before launching the notebook.&lt;br /&gt;
&lt;br /&gt;
    cd ~/.jupyter/lab/workspaces&lt;br /&gt;
    rm *&lt;br /&gt;
&lt;br /&gt;
= Known errors =&lt;br /&gt;
&lt;br /&gt;
== Error 500: Internal Server Error ==&lt;br /&gt;
&lt;br /&gt;
This is a generic error: it means that the JupyterLab server failed. This could be for different reasons:&lt;br /&gt;
&lt;br /&gt;
* Your HOME folder is full. Log in to &amp;quot;ui.pic.es&amp;quot; and run &amp;quot;quota&amp;quot; to check your usage against your quota. If it is full, you will have to free up space.&lt;br /&gt;
&lt;br /&gt;
== 504 Gateway timeout ==&lt;br /&gt;
&lt;br /&gt;
The notebook job is running in HTCondor but the user cannot access the notebook server. Ultimately, a 504 error is received.&lt;br /&gt;
&lt;br /&gt;
This is probably because there is some error when starting the JupyterLab server. First of all, shut down the notebook server and [[#Logs|check the logs]] to better identify the problem. If you do not see the source of the error, try to [[#Clean_workspaces|clean the workspaces]] and launch a notebook again.&lt;br /&gt;
&lt;br /&gt;
== Install ROOT ==&lt;br /&gt;
&lt;br /&gt;
Environment creation and kernel installation&lt;br /&gt;
&lt;br /&gt;
    $ micromamba env create -p /data/pic/scratch/torradeflot/envs/mcdata root ipykernel&lt;br /&gt;
    $ micromamba activate /data/pic/scratch/torradeflot/envs/mcdata&lt;br /&gt;
    $ python -m ipykernel install --user --name mcdata&lt;br /&gt;
&lt;br /&gt;
After doing this ROOT can be imported from a python shell, but it does not work from a notebook.&lt;br /&gt;
&lt;br /&gt;
* ROOT uses JIT compilation with Cling: https://root.cern/cling/&lt;br /&gt;
* conda provides its own set of compilation tools: gcc, gxx, gfortran&lt;br /&gt;
* Some environment variables are necessary to ensure that these two pieces work well together:&lt;br /&gt;
** PATH: to be able to find the compiler tools&lt;br /&gt;
** CONDA_BUILD_SYSROOT: needed to configure the compiler call&lt;br /&gt;
&lt;br /&gt;
These environment variables are not propagated to the notebook, because they are set by .bashrc, conda activate, etc., so we need to set them explicitly in the notebook:&lt;br /&gt;
&lt;br /&gt;
     import os&lt;br /&gt;
     import sys&lt;br /&gt;
     from pathlib import Path&lt;br /&gt;
     bin_dir = Path(sys.executable).parent&lt;br /&gt;
     os.environ['PATH'] = f&amp;quot;{bin_dir}:{os.environ['PATH']}&amp;quot;&lt;br /&gt;
     os.environ['CONDA_BUILD_SYSROOT'] = str(bin_dir.parent / 'x86_64-conda-linux-gnu/sysroot')&lt;br /&gt;
&lt;br /&gt;
== Loading libraries is very slow ==&lt;br /&gt;
Conda environments stored on hard-drive-based storage can be extremely slow to load. If you encounter this problem, please ask&lt;br /&gt;
your support contact for an &amp;quot;SSD scratch&amp;quot; location to store environments. Currently this service is being tested and&lt;br /&gt;
deployed as needed.&lt;br /&gt;
&lt;br /&gt;
== Spawn failed: The 'ip' trait of a PICCondorSpawner instance expected a unicode string, not the NoneType None ==&lt;br /&gt;
&lt;br /&gt;
JupyterHub could not get the host name from HTCondor's stdout, because it did not match the expected regular expression.&lt;br /&gt;
&lt;br /&gt;
This error happens randomly from time to time; it does not imply any major problem in any of the services.&lt;br /&gt;
&lt;br /&gt;
Try to request a new notebook server.&lt;br /&gt;
&lt;br /&gt;
== 403 : Forbidden. XSRF cookie does not match POST argument ==&lt;br /&gt;
&lt;br /&gt;
The value of the &amp;quot;_xsrf&amp;quot; cookie sent by the browser does not match the expected value. This could be for many reasons: temporary high load on the server, race conditions, temporary network instability, many open tabs in the browser, etc.&lt;br /&gt;
&lt;br /&gt;
In general it can be solved by closing all tabs pointing to &amp;quot;jupyter.pic.es&amp;quot;, clearing the cookies and connecting again to &amp;quot;jupyter.pic.es&amp;quot;.&lt;br /&gt;
&lt;/div&gt;</summary>
		<author><name>Eriksen</name></author>
	</entry>
	<entry>
		<id>https://pwiki.pic.es/index.php?title=CosmoHub&amp;diff=1293</id>
		<title>CosmoHub</title>
		<link rel="alternate" type="text/html" href="https://pwiki.pic.es/index.php?title=CosmoHub&amp;diff=1293"/>
		<updated>2025-12-20T14:06:23Z</updated>

		<summary type="html">&lt;p&gt;Eriksen: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;CosmoHub is a web application based on Hadoop that allows interactive exploration, analysis, and distribution of large cosmological datasets hosted at PIC.&lt;br /&gt;
&lt;br /&gt;
The platform provides web-based access to data products from major cosmology experiments and enables users to perform queries, previews, and data exports without requiring local copies of the full datasets.&lt;br /&gt;
&lt;br /&gt;
To access the service, go to:&lt;br /&gt;
https://cosmohub.pic.es&lt;br /&gt;
&lt;br /&gt;
Please note that, at present, the PIC account and the CosmoHub account are independent. Users must register separately for CosmoHub, even if they already have a PIC account. Work is ongoing to integrate both systems into a single PIC-based login in the future.&lt;/div&gt;</summary>
		<author><name>Eriksen</name></author>
	</entry>
	<entry>
		<id>https://pwiki.pic.es/index.php?title=PIC_account&amp;diff=1292</id>
		<title>PIC account</title>
		<link rel="alternate" type="text/html" href="https://pwiki.pic.es/index.php?title=PIC_account&amp;diff=1292"/>
		<updated>2025-12-20T14:03:10Z</updated>

		<summary type="html">&lt;p&gt;Eriksen: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;span id=&amp;quot;pic-quick-user-guide&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
= PIC — Quick user guide =&lt;br /&gt;
&lt;br /&gt;
This short guide explains how to register, confirm your account, request access to a group, and what to do after access is granted.&lt;br /&gt;
&lt;br /&gt;
Follow these steps if you are a PIC user and need access to resources managed through the account Experiments page.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;create-your-pic-account&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
== 1) Create your PIC account ==&lt;br /&gt;
&lt;br /&gt;
# Open the PIC registration page (the site registration link provided by PIC).&lt;br /&gt;
# Fill the registration form with your name, a PIC-acceptable username (we recommend first letter of the name + surname truncated at 8 chars), and your email.&lt;br /&gt;
# Choose a password and submit the form.&lt;br /&gt;
# You will receive an email with a confirmation link — click the link to activate your account.&lt;br /&gt;
&lt;br /&gt;
Notes:&lt;br /&gt;
&lt;br /&gt;
* If you don’t receive the email, check your spam folder and then contact the administrators.&lt;br /&gt;
* Password rules are enforced by the realm; if the site rejects your password, follow the guidance on the form.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;log-in-and-open-account-experiments&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
== 2) Log in and open Account → Experiments ==&lt;br /&gt;
&lt;br /&gt;
# Log in to PIC with your new account.&lt;br /&gt;
# Go to &amp;lt;code&amp;gt;Account&amp;lt;/code&amp;gt; and open the &amp;lt;code&amp;gt;Experiments&amp;lt;/code&amp;gt; page (if you used the registration link you should already be there, else navigate to it).&lt;br /&gt;
&lt;br /&gt;
What you can do here:&lt;br /&gt;
&lt;br /&gt;
* Link external IDPs (if offered) for automatic group membership.&lt;br /&gt;
* Request access to groups that are managed manually (non-IDP groups).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;request-access-to-a-group-manual-approval-flow&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
== 3) Request access to a group (manual approval flow) ==&lt;br /&gt;
&lt;br /&gt;
# On the Experiments page, find the group you need and click &amp;lt;code&amp;gt;Request Access&amp;lt;/code&amp;gt;.&lt;br /&gt;
# Either:&lt;br /&gt;
#* Select a configured sponsor for the group (if available), or&lt;br /&gt;
#* Enter sponsor name and email in the provided fields.&lt;br /&gt;
# Submit the request.&lt;br /&gt;
&lt;br /&gt;
What happens next:&lt;br /&gt;
&lt;br /&gt;
* If the group is backed by an external IDP, you may be asked to log in to that IDP (e.g., Google) to complete linking.&lt;br /&gt;
* If the group uses manual approval, a signed approval email will be sent to the group’s contact person(s) or the chosen sponsor.&lt;br /&gt;
* Until the request is approved, you can only use your account and public parts of services. Please do not send multiple requests for the same group while waiting for approval.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;after-access-is-granted-try-services&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
== 4) After access is granted — try services ==&lt;br /&gt;
&lt;br /&gt;
Once the contact or IDP approval is completed and you are a member of the group:&lt;br /&gt;
&lt;br /&gt;
* You should be able to access group-only web UIs and services.&lt;br /&gt;
* You may be able to SSH to service UIs or jump hosts the PIC team provides (follow PIC-specific SSH instructions).&lt;br /&gt;
* You can open Jupyter (or other service UIs) and run notebooks or jobs according to the permissions granted by your group.&lt;br /&gt;
&lt;br /&gt;
If a resource still seems unavailable after membership is granted, wait a few minutes (background scripts may run) and retry. If the problem persists, contact the administrators with:&lt;br /&gt;
&lt;br /&gt;
* Your username&lt;br /&gt;
* The group name&lt;br /&gt;
* A short description of the problem&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;lost-membership-due-to-inactivity-or-offboarding&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
== 5) Lost membership due to inactivity or offboarding ==&lt;br /&gt;
&lt;br /&gt;
For IDP groups we periodically (or on login) verify your membership in the external IDP project.&lt;br /&gt;
&lt;br /&gt;
* If you are no longer a member (inactivity / removal) you will be offboarded and shown a message at login. You can then re-request access via the Experiments page.&lt;br /&gt;
* If you encounter issues re-requesting, contact the administrators.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;grace-period-only-for-idps-that-do-not-grant-offline-access&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== Grace period (only for IDPs that DO NOT grant offline access) ===&lt;br /&gt;
&lt;br /&gt;
Some external IDPs do not allow us to request offline tokens. In that case your group may define a &amp;lt;code&amp;gt;gracePeriodDays&amp;lt;/code&amp;gt; value.&lt;br /&gt;
&lt;br /&gt;
What this means for you:&lt;br /&gt;
&lt;br /&gt;
# At the end of each grace period window your membership will be removed automatically and the IDP link will be cleared. You must log back into the external IDP (and if needed re-link or re-request access) to regain membership.&lt;br /&gt;
# During the grace period, if your institution grants you new privileges (e.g. access to new data) and you do not see them reflected in PIC, you can force a refresh by UNLINKING and re-linking the IDP account:&lt;br /&gt;
#* Go to: Account → Account Security → Linked Accounts (Example URL: &amp;lt;code&amp;gt;https://idp-test.pic.es/realms/PIC/account/account-security/linked-accounts&amp;lt;/code&amp;gt;)&lt;br /&gt;
#* Click &amp;lt;code&amp;gt;Unlink&amp;lt;/code&amp;gt; for the external IDP.&lt;br /&gt;
#* Confirm (ask the contact if unsure), then link again via the Experiments page or the linking button.&lt;br /&gt;
#* After re-linking, new privileges should appear (may take a short delay while scripts run).&lt;br /&gt;
&lt;br /&gt;
If in doubt before unlinking, ask the group’s contact person. Unlinking does not delete your PIC account; it only removes the connection to that external project until you link again.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Lost Membership.png|500px|left]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;clear: both&amp;quot;&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;troubleshooting-common-issues&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Troubleshooting &amp;amp;amp; common issues ==&lt;br /&gt;
&lt;br /&gt;
* I didn’t get the confirmation email: check spam, then contact admins.&lt;br /&gt;
* The Experiments page shows no groups: ask your admin if your account has correct attributes or if the group exists.&lt;br /&gt;
* I requested access but nothing happened: contact the group’s contact person or the admin team; provide the request time and the group name.&lt;/div&gt;</summary>
		<author><name>Eriksen</name></author>
	</entry>
	<entry>
		<id>https://pwiki.pic.es/index.php?title=JupyterHub&amp;diff=1291</id>
		<title>JupyterHub</title>
		<link rel="alternate" type="text/html" href="https://pwiki.pic.es/index.php?title=JupyterHub&amp;diff=1291"/>
		<updated>2025-12-20T14:00:12Z</updated>

		<summary type="html">&lt;p&gt;Eriksen: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;tldr; Connect to https://jupyter.pic.es/ . Enjoy!&lt;br /&gt;
= Introduction =&lt;br /&gt;
&lt;br /&gt;
PIC offers a service for running Jupyter notebooks on CPU or GPU resources. This service is primarily thought for code developing or prototyping rather than data processing. The usage is similar to running notebooks on your personal computer but offers the advantage of developing and testing your code on different hardware configurations, as well as facilitating the scalability of the code since it is being tested in the same environment in which it would run on a mass scale. &lt;br /&gt;
&lt;br /&gt;
Since the service is strictly thought for development and small scale testing tasks, a shutdown policy for the sessions has been put in place:&lt;br /&gt;
&lt;br /&gt;
# The maximum duration for a session is 48h.&lt;br /&gt;
# After an idle period of 2 hours, the session will be closed. &lt;br /&gt;
&lt;br /&gt;
In practice that  means that you should estimate the test data volume that you work with during a session to be able to be processed in less than 48 hours.&lt;br /&gt;
&lt;br /&gt;
== How to connect to the service ==&lt;br /&gt;
&lt;br /&gt;
Go to [https://jupyter.pic.es jupyter.pic.es] to reach the login screen.&lt;br /&gt;
&lt;br /&gt;
[[File:login.png|700px|Login screen]]&lt;br /&gt;
&lt;br /&gt;
Sign in with your PIC user credentials. This will take you to the following screen.&lt;br /&gt;
&lt;br /&gt;
[[File:ScreenshotJupyterSpawn.png|700px|current]]&lt;br /&gt;
&lt;br /&gt;
Here you can choose the hardware configuration for your Jupyter session. You also have to choose the experiment (project) you will work on during the session. After choosing a configuration and pressing start, the next screen shows the progress of the initialisation process. Keep in mind that a job containing your Jupyter session is actually sent to the HTCondor queuing system and waits for available resources before being started. This usually takes less than a minute but can take up to a few minutes depending on our resource usage.&lt;br /&gt;
&lt;br /&gt;
[[File:screen02.png|900px]]&lt;br /&gt;
&lt;br /&gt;
In the next screen you can choose the tool that you want to use for your work: a Python notebook, a Python console or a plain bash terminal.&lt;br /&gt;
For the Python environment (either notebook or console) you have two default options:&lt;br /&gt;
* the ipykernel version of Python 3&lt;br /&gt;
* the XPython version of Python 3.9, which allows you to use the integrated debugging module.&lt;br /&gt;
&lt;br /&gt;
== Virtual desktop ==&lt;br /&gt;
Further along you will see an icon with a &amp;quot;D&amp;quot; (desktop); it starts a VNC session that allows the use of programs with graphical user interfaces.&lt;br /&gt;
&lt;br /&gt;
More recently, you will also find the icon for Visual Studio, an integrated development environment.&lt;br /&gt;
&lt;br /&gt;
[[File:ScreenshotJupyterlab20231103.png|700px]]&lt;br /&gt;
&lt;br /&gt;
Your Python environments should appear under the Notebook and Console headers. In a later section we will show you how to create a new environment and how to remove an existing one.&lt;br /&gt;
&lt;br /&gt;
== Terminate your session and logout ==&lt;br /&gt;
&lt;br /&gt;
It is important that you terminate your session before you log out. In order to do so, go to the top page menu &amp;quot;'''File''' -&amp;gt; '''Hub Control Panel'''&amp;quot; and you will see the following screen.&lt;br /&gt;
&lt;br /&gt;
[[File:screen04.png|600px]]&lt;br /&gt;
&lt;br /&gt;
Here, click on the '''Stop My Server''' button. After that you can log out by clicking the '''Logout''' button in the right upper corner.&lt;br /&gt;
&lt;br /&gt;
=  Python virtual environments =&lt;br /&gt;
&lt;br /&gt;
This section covers the use of Python virtual environments with Jupyter.&lt;br /&gt;
&lt;br /&gt;
== Prebuilt environments == &lt;br /&gt;
&lt;br /&gt;
PIC's jupyterhub service comes with a collection of prebuilt environments located at '''/data/jupyter/software/envs'''.&lt;br /&gt;
&lt;br /&gt;
The master environment located at '''/data/jupyter/software/envs/master''' is the one used to start the jupyterlab service and the default for new notebooks.&lt;br /&gt;
&lt;br /&gt;
This is a non-exhaustive list of the packages included:&lt;br /&gt;
  - astropy=6.1.0&lt;br /&gt;
  - bokeh=3.4.1&lt;br /&gt;
  - dash=2.17.0&lt;br /&gt;
  - dask=2024.5.1&lt;br /&gt;
  - findspark=2.0.1&lt;br /&gt;
  - matplotlib=3.8.4&lt;br /&gt;
  - numpy=1.26.4&lt;br /&gt;
  - pandas=2.2.2&lt;br /&gt;
  - pillow=10.3.0&lt;br /&gt;
  - plotly=5.22.0&lt;br /&gt;
  - pyhive=0.7.0&lt;br /&gt;
  - python=3.12&lt;br /&gt;
  - pywavelets=1.4.1&lt;br /&gt;
  - scikit-image=0.22.0&lt;br /&gt;
  - scikit-learn=1.5.0&lt;br /&gt;
  - scipy=1.13.1&lt;br /&gt;
  - seaborn=0.13.2&lt;br /&gt;
  - statsmodels=0.14.2&lt;br /&gt;
&lt;br /&gt;
== Initialize conda (we highly recommend the use of mamba/micromamba) ==&lt;br /&gt;
&lt;br /&gt;
Before using conda/mamba in your bash session, you have to initialize it.&lt;br /&gt;
* For access to an available conda/mamba installation, please get in contact with your project liaison at PIC, who will give you the actual value for the '''/path/to/conda/mamba''' placeholder.&lt;br /&gt;
* If you want to use your own conda/mamba/micromamba installation, there are two recommended options:&lt;br /&gt;
** '''miniforge''': a distribution with conda and mamba executables in a minimal base environment, instructions [https://github.com/conda-forge/miniforge here]&lt;br /&gt;
** '''micromamba''': a self-contained executable (micromamba) with no base environment, instructions [https://mamba.readthedocs.io/en/latest/installation/micromamba-installation.html here]&lt;br /&gt;
&lt;br /&gt;
Log onto Jupyter and start a session. On the homepage of your Jupyter session, click on the terminal button on the session dashboard on the right to open a bash terminal.&lt;br /&gt;
&lt;br /&gt;
In order to use conda/mamba/micromamba you need to initialize the shell. This initialization can be persistent, which makes some changes to your '''~/.bashrc''' file, or you can repeat it every time you want to use it.&lt;br /&gt;
&lt;br /&gt;
Run the following command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
eval &amp;quot;$(/data/astro/software/miniforge3/bin/conda shell.bash hook)&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, if you are using miniforge and you want to persist the initialization:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/data/astro/software/miniforge3/bin/mamba init&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This changes the .bashrc file in your home directory so that the base environment is activated on login.&lt;br /&gt;
To avoid activating the base environment every time you log on to a node, run:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ conda config --set auto_activate_base false&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Link an existing environment to Jupyter ==&lt;br /&gt;
&lt;br /&gt;
You can find instructions on how to create your own environments, e.g. [[#Create_virtual_environments_with_venv_or_conda | here]].&lt;br /&gt;
&lt;br /&gt;
Log into Jupyter, start a session. From the session dashboard choose the bash terminal.&lt;br /&gt;
&lt;br /&gt;
Inside the terminal, activate your environment.&lt;br /&gt;
&lt;br /&gt;
For '''conda/mamba''' environments:&lt;br /&gt;
* if you created the environment without a prefix:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba activate environment&lt;br /&gt;
(...) [neissner@td110 ~]$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The parentheses (...) in front of your bash prompt show the name of your environment.&lt;br /&gt;
* if you created the environment with a prefix:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba activate /path/to/environment&lt;br /&gt;
(...) [neissner@td110 ~]$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The parentheses (...) in front of your bash prompt show the absolute path of your environment.&lt;br /&gt;
&lt;br /&gt;
For '''venv''' environments:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ source /path/to/environment/bin/activate&lt;br /&gt;
(...) [neissner@td110 ~]$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Link the environment to a Jupyter kernel. For both '''conda/mamba''' and '''venv''' environments:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(...) [neissner@td110 ~]$ python -m  ipykernel install --user --name=whatever_kernel_name&lt;br /&gt;
Installed kernelspec whatever_kernel_name in &lt;br /&gt;
                         /nfs/pic.es/user/n/neissner/.local/share/jupyter/kernels/whatever_kernel_name&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you don't have the '''ipykernel''' module installed in your environment, you may receive an error message like the one below when trying to run the previous command.&lt;br /&gt;
&amp;lt;pre&amp;gt;No module named ipykernel&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If this is the case, you need to install it by running: '''pip install ipykernel'''&lt;br /&gt;
&lt;br /&gt;
Deactivate your environment. &lt;br /&gt;
&lt;br /&gt;
For conda:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(...) [neissner@td110 ~]$ mamba deactivate&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For venv:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(...) [neissner@td110 ~]$ deactivate&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you can exit the terminal. After refreshing the Jupyter page, whatever_kernel_name appears in the dashboard. In this example, '''test''' has been used as the kernel name.&lt;br /&gt;
&lt;br /&gt;
[[File:screen05.png|700px]]&lt;br /&gt;
&lt;br /&gt;
== Unlink an environment from Jupyter ==&lt;br /&gt;
Log onto Jupyter, start a session and from the session dashboard choose the bash terminal. To remove your environment/kernel from Jupyter run:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ jupyter kernelspec uninstall whatever_kernel_name&lt;br /&gt;
Kernel specs to remove:&lt;br /&gt;
  whatever_kernel_name     /nfs/pic.es/user/n/neissner/.local/share/jupyter/kernels/whatever_kernel_name&lt;br /&gt;
Remove 1 kernel specs [y/N]: y&lt;br /&gt;
[RemoveKernelSpec] Removed /nfs/pic.es/user/n/neissner/.local/share/jupyter/kernels/whatever_kernel_name&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Keep in mind that, although not available in Jupyter anymore, the environment still exists. Whenever you need it, you can link it again.&lt;br /&gt;
&lt;br /&gt;
== Create virtual environments with venv or conda ==&lt;br /&gt;
&lt;br /&gt;
Before creating a new environment, please get in contact with your project liaison at PIC, as there may already be a suitable environment for your needs in place.&lt;br /&gt;
&lt;br /&gt;
If none of the existing environments suits your needs, you can create a new environment.&lt;br /&gt;
First, create a directory in a suitable place to store the environment. For single-user environments, place them in your home directory under ~/env. For environments that will be shared with other project users, contact your project liaison and ask for a path in a shared storage volume that is visible to all of them.&lt;br /&gt;
&lt;br /&gt;
Once you have the location (i.e. /path/to/env/folder), create the environment with the following commands:&lt;br /&gt;
&lt;br /&gt;
For '''venv''' environments '''(recommended)'''&lt;br /&gt;
&lt;br /&gt;
Assuming your_env will be installed at /path/to/env/your_env:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ cd /path/to/env&lt;br /&gt;
[neissner@td110 ~]$ python3 -m venv your_env&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Now you should be able to activate your environment and install additional modules&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ cd /path/to/env&lt;br /&gt;
[neissner@td110 ~]$ source your_env/bin/activate&lt;br /&gt;
(...)[neissner@td110 ~]$ pip install additional_module1 additional_module2 ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For '''conda/mamba''' environments&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba create --prefix /path/to/env/your_env module1 module2 ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The list of modules (module1, module2, ...) is optional. For instance, for a python3 environment with scipy you would specify: ''python=3 scipy'' &lt;br /&gt;
&lt;br /&gt;
Now you should be able to activate your environment and install additional modules&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba activate /path/to/env/your_env&lt;br /&gt;
(...)[neissner@td110 ~]$ mamba install additional_module1 additional_module2 ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You can use pip install inside a mamba environment; however, resolving dependencies might require installing additional packages manually.&lt;br /&gt;
&lt;br /&gt;
== Conda / mamba configuration ==&lt;br /&gt;
&lt;br /&gt;
The behaviour of conda/mamba can be configured through the &amp;quot;$HOME/.condarc&amp;quot; file, described [https://docs.conda.io/projects/conda/en/latest/configuration.html here]. Some interesting parameters:&lt;br /&gt;
&lt;br /&gt;
* envs_dirs: The list of directories to search for named environments. E.g.: different locations where you created environments&lt;br /&gt;
&lt;br /&gt;
  envs_dirs:&lt;br /&gt;
    - /data/pic/scratch/torradeflot/envs&lt;br /&gt;
    - /data/astro/scratch/torradeflot/envs&lt;br /&gt;
    - /data/aai/scratch/torradeflot/envs&lt;br /&gt;
&lt;br /&gt;
* pkgs_dirs: Folder where to store conda packages&lt;br /&gt;
&lt;br /&gt;
  pkgs_dirs:&lt;br /&gt;
  - /data/aai/scratch_ssd/torradeflot/pkgs&lt;br /&gt;
  - /data/aai/scratch/torradeflot/pkgs&lt;br /&gt;
  - /data/pic/scratch/torradeflot/pkgs&lt;br /&gt;
&lt;br /&gt;
If '''pkgs_dirs''' and '''envs_dirs''' are in the same storage, conda will use hard links, thus optimizing disk space.&lt;br /&gt;
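&lt;br /&gt;
The space saving comes from hard linking, which only works within a single filesystem. A minimal sketch of the mechanism (stdlib only, not conda's actual code):&lt;br /&gt;

```python
import os


def hardlink_saves_space(directory: str) -> bool:
    """Create a file and a hard link to it; return True if both share one inode.

    This is why a package cached in pkgs_dirs can appear in an environment
    without duplicating the data, as long as both live on the same storage.
    """
    pkg = os.path.join(directory, "pkgs_copy_of_lib.so")
    env = os.path.join(directory, "env_copy_of_lib.so")
    with open(pkg, "wb") as f:
        f.write(b"x" * 1024)
    # os.link fails across filesystems; in that case conda must copy instead.
    os.link(pkg, env)
    same_inode = os.stat(pkg).st_ino == os.stat(env).st_ino
    return same_inode and os.stat(pkg).st_nlink == 2
```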
&lt;br /&gt;
= Proper usage of X509 based proxies =&lt;br /&gt;
&lt;br /&gt;
We found recently that the usage of proxies within a Jupyter session might cause problems, because the environment changes certain standard locations such as '''/tmp'''.&lt;br /&gt;
&lt;br /&gt;
For correct functioning, please create the proxy in the following way (example for Virgo):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ /bin/voms-proxy-init --voms virgo:/virgo/ligo --out ./x509up_u$(id -u)&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ export X509_USER_PROXY=./x509up_u$(id -u)&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ ls /cvmfs/ligo.osgstorage.org&lt;br /&gt;
ls: cannot access /cvmfs/ligo.osgstorage.org: Permission denied&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here the proxy cannot be located properly, because the relative path stops working as soon as the working directory changes. Therefore we have to put the complete path into the variable:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ pwd&lt;br /&gt;
/nfs/pic.es/user/&amp;lt;letter&amp;gt;/&amp;lt;user&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ export X509_USER_PROXY=/nfs/pic.es/user/&amp;lt;letter&amp;gt;/&amp;lt;user&amp;gt;/x509up_u$(id -u)&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ ls /cvmfs/ligo.osgstorage.org&lt;br /&gt;
frames  powerflux  pycbc  test_access&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
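&lt;br /&gt;
The fix above boils down to exporting an absolute path. A short sketch of the same idea in Python (the proxy filename is illustrative):&lt;br /&gt;

```python
import os


def export_proxy(relative_proxy: str) -> str:
    """Set X509_USER_PROXY to the absolute path of the proxy file.

    A relative path such as './x509up_u1234' breaks as soon as a tool
    runs with a different working directory; the absolute path does not.
    """
    absolute = os.path.abspath(relative_proxy)
    os.environ["X509_USER_PROXY"] = absolute
    return absolute
```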
&lt;br /&gt;
= Software of particular interest =&lt;br /&gt;
== SageMath ==&lt;br /&gt;
&lt;br /&gt;
[https://www.sagemath.org/ SageMath] is particularly interesting for Cosmology because it allows symbolic calculations, e.g. deriving the equations of motions for the scale factor starting from a customised space-time metric. &lt;br /&gt;
&lt;br /&gt;
=== Standard cosmology examples ===&lt;br /&gt;
&lt;br /&gt;
* The Friedmann equations for the FLRW solution of the Einstein equations.&lt;br /&gt;
You can find the corresponding Notebook in any PIC terminal at '''/data/astro/software/notebooks/FLRW_cosmology.ipynb'''&lt;br /&gt;
&lt;br /&gt;
* The notebook you can find at '''/data/astro/software/notebooks/FLRW_cosmology_solutions.ipynb''' uses known analytical solutions of the FLRW cosmology and produces this image for the evolution of the scale factor:&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot_Sage05.png|300px]]&lt;br /&gt;
&lt;br /&gt;
* The notebook you can find at '''/data/astro/software/notebooks/Interior_Schwarzschild.ipynb''' shows the formalism for the interior Schwarzschild metric and displays the solutions for density and pressure of a static celestial object that is sufficiently larger than its Schwarzschild radius. The pressure for an object with constant density is shown in the image:&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot_Sage06.png|300px]]&lt;br /&gt;
&lt;br /&gt;
=== Enabling SageMath environment in Jupyter ===&lt;br /&gt;
&lt;br /&gt;
If you have never initialized mamba, run:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ /data/astro/software/centos7/conda/mambaforge_4.14.0/bin/mamba init&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ conda config --set auto_activate_base false&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After that you can enable SageMath for its use in a Jupyter notebook session: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~] mamba activate /data/astro/software/envs/sage&lt;br /&gt;
(/data/astro/software/envs/sage) [&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ python -m  ipykernel install --user --name=sage&lt;br /&gt;
....&lt;br /&gt;
(/data/astro/software/envs/sage) [&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ mamba deactivate&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This creates a file in your home, '''~/.local/share/jupyter/kernels/sage/kernel.json''',&lt;br /&gt;
which has to be modified to look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
 &amp;quot;argv&amp;quot;: [&lt;br /&gt;
  &amp;quot;/data/astro/software/envs/sage/bin/sage&amp;quot;,&lt;br /&gt;
  &amp;quot;--python&amp;quot;,&lt;br /&gt;
  &amp;quot;-m&amp;quot;,&lt;br /&gt;
  &amp;quot;sage.repl.ipython_kernel&amp;quot;,&lt;br /&gt;
  &amp;quot;-f&amp;quot;,&lt;br /&gt;
  &amp;quot;{connection_file}&amp;quot;&lt;br /&gt;
 ],&lt;br /&gt;
 &amp;quot;display_name&amp;quot;: &amp;quot;sage&amp;quot;,&lt;br /&gt;
 &amp;quot;language&amp;quot;: &amp;quot;sage&amp;quot;,&lt;br /&gt;
 &amp;quot;metadata&amp;quot;: {&lt;br /&gt;
  &amp;quot;debugger&amp;quot;: true&lt;br /&gt;
 }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Next time you go to your Jupyter dashboard you will find the sage environment listed there.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Dask =&lt;br /&gt;
Dask supports parallel computations in Python. The PIC Jupyterlab has an extension for launching&lt;br /&gt;
your own Dask clusters. For more information, see [[Dask|Dask documentation]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Using a singularity image as a jupyter kernel =&lt;br /&gt;
&lt;br /&gt;
Sometimes the software stack of a project is provided in the shape of a singularity image. It is then convenient to use this image as a kernel for the notebooks in jupyter.pic.es.&lt;br /&gt;
&lt;br /&gt;
The singularity image to be used as a kernel needs to fulfill some requirements, which differ depending on the programming language.&lt;br /&gt;
&lt;br /&gt;
== Python jupyter kernel in a singularity image ==&lt;br /&gt;
&lt;br /&gt;
The singularity image needs to have '''python''' and the '''ipykernel''' module installed.&lt;br /&gt;
&lt;br /&gt;
* Create the folder that will host the kernel definition&lt;br /&gt;
&lt;br /&gt;
  mkdir -p $HOME/.local/share/jupyter/kernels/singularity&lt;br /&gt;
&lt;br /&gt;
* Create the '''kernel.json''' file inside it with the following content:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 {&lt;br /&gt;
   &amp;quot;argv&amp;quot;: [&lt;br /&gt;
     &amp;quot;singularity&amp;quot;,&lt;br /&gt;
     &amp;quot;exec&amp;quot;,&lt;br /&gt;
     &amp;quot;--cleanenv&amp;quot;,&lt;br /&gt;
     &amp;quot;/path/to/the/singularity/image.sif&amp;quot;,&lt;br /&gt;
     &amp;quot;python&amp;quot;,&lt;br /&gt;
     &amp;quot;-m&amp;quot;,&lt;br /&gt;
     &amp;quot;ipykernel&amp;quot;,&lt;br /&gt;
     &amp;quot;-f&amp;quot;,&lt;br /&gt;
     &amp;quot;{connection_file}&amp;quot;&lt;br /&gt;
   ],&lt;br /&gt;
   &amp;quot;language&amp;quot;: &amp;quot;python&amp;quot;,&lt;br /&gt;
   &amp;quot;display_name&amp;quot;: &amp;quot;singularity-kernel&amp;quot;&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Refresh or start the jupyterlab interface and the singularity kernel should appear in the launcher tab.&lt;br /&gt;
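&lt;br /&gt;
The kernel.json above can also be generated with a short script. A sketch that mirrors the file shown above (the image path is a placeholder you must replace with your actual .sif file):&lt;br /&gt;

```python
import json
import os


def install_singularity_kernel(image_path, name="singularity", base_dir=None):
    """Write a Jupyter kernelspec that runs ipykernel inside a singularity image.

    Mirrors the kernel.json shown above. By default the spec goes to the
    user kernels directory that Jupyter scans on startup.
    """
    if base_dir is None:
        base_dir = os.path.join(
            os.path.expanduser("~"), ".local/share/jupyter/kernels"
        )
    kernel_dir = os.path.join(base_dir, name)
    os.makedirs(kernel_dir, exist_ok=True)
    spec = {
        "argv": [
            "singularity", "exec", "--cleanenv", image_path,
            "python", "-m", "ipykernel", "-f", "{connection_file}",
        ],
        "language": "python",
        "display_name": f"{name}-kernel",
    }
    path = os.path.join(kernel_dir, "kernel.json")
    with open(path, "w") as f:
        json.dump(spec, f, indent=2)
    return path
```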
&lt;br /&gt;
= GPUs =&lt;br /&gt;
&lt;br /&gt;
The ways to identify the GPUs that are assigned to your job are:&lt;br /&gt;
* Check the environment variable CUDA_VISIBLE_DEVICES: in a terminal run &amp;quot;echo $CUDA_VISIBLE_DEVICES&amp;quot;. The variable contains a list of comma-separated GPU ids, so you immediately know how many GPUs are assigned to your job. If the variable does not exist, there are no GPUs assigned to the job.&lt;br /&gt;
&lt;br /&gt;
* List the GPUs with nvidia-smi: in a terminal run &amp;quot;nvidia-smi -L&amp;quot; and look for the GPUs you have been assigned. Remember their indexes (integers from 0 to 7).&lt;br /&gt;
&lt;br /&gt;
* If only a single GPU is assigned, the two steps above can be combined into the following command:&lt;br /&gt;
&lt;br /&gt;
nvidia-smi -L | grep $CUDA_VISIBLE_DEVICES&lt;br /&gt;
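&lt;br /&gt;
From inside a notebook, the same check can be done by parsing the variable directly. A minimal sketch:&lt;br /&gt;

```python
import os


def assigned_gpus():
    """Return the GPU ids assigned to this job, parsed from CUDA_VISIBLE_DEVICES.

    An unset or empty variable means no GPUs were assigned to the job.
    """
    value = os.environ.get("CUDA_VISIBLE_DEVICES", "")
    return [gpu.strip() for gpu in value.split(",") if gpu.strip()]
```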
&lt;br /&gt;
[[File:check_gpu_id_highlighted.png]]&lt;br /&gt;
&lt;br /&gt;
* In the GPU dashboard the GPUs are identified by their index.&lt;br /&gt;
&lt;br /&gt;
[[File:check_gpu_resources_highlighted.png]]&lt;br /&gt;
&lt;br /&gt;
= Jupyterlab user guide =&lt;br /&gt;
&lt;br /&gt;
You can find the official documentation of the currently installed version of jupyterlab (3.6) [https://jupyterlab.readthedocs.io/en/4.2.x/ here], where you will find instructions on how to&lt;br /&gt;
* [https://jupyterlab.readthedocs.io/en/4.2.x/user/commands.html Access the command palette]&lt;br /&gt;
* [https://jupyterlab.readthedocs.io/en/4.2.x/user/toc.html Build a Table Of Contents]&lt;br /&gt;
* [https://jupyterlab.readthedocs.io/en/4.2.x/user/debugger.html Debug your code]&lt;br /&gt;
&lt;br /&gt;
A set of non-official jupyterlab extensions is installed to provide additional functionality.&lt;br /&gt;
&lt;br /&gt;
== Jupytext ==&lt;br /&gt;
Pair your notebooks with text files to enhance version tracking.&lt;br /&gt;
https://jupytext.readthedocs.io&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
&lt;br /&gt;
If you had a notebook (.ipynb file) containing only the cell below, tracked in a git repository&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%matplotlib inline&lt;br /&gt;
import numpy as np&lt;br /&gt;
import matplotlib.pyplot as plt&lt;br /&gt;
plt.imshow(np.random.random([10, 10]))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Different executions of the cell produce different images, and the images are embedded in a pseudo-binary format inside the notebook file. As a consequence, a '''git diff''' of the .ipynb file would produce a huge output (because the image changed), even if the code did not change at all. It is therefore convenient to sync the notebook with a text file (e.g. a .py script) using the jupytext extension and track that file with git. Outputs, including images, as well as some additional metadata, are not added to the synced text file, so different executions of the same notebook always yield an empty diff.&lt;br /&gt;
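&lt;br /&gt;
The effect can be demonstrated with the stdlib alone: once the outputs and execution counts are stripped from the notebook's JSON, re-running the cell no longer changes the file. This sketch mimics that aspect of jupytext syncing (it is not jupytext's implementation):&lt;br /&gt;

```python
import json


def strip_outputs(notebook_json):
    """Remove cell outputs and execution counts from a notebook's JSON.

    Without the embedded (often binary) outputs, two runs of the same
    notebook serialize to identical text, so the git diff is empty.
    """
    nb = json.loads(notebook_json)
    for cell in nb.get("cells", []):
        if cell.get("cell_type") == "code":
            cell["outputs"] = []
            cell["execution_count"] = None
    return json.dumps(nb, sort_keys=True)
```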
&lt;br /&gt;
== Git ==&lt;br /&gt;
Sidebar GUI for git repository management.&lt;br /&gt;
https://github.com/jupyterlab/jupyterlab-git&lt;br /&gt;
&lt;br /&gt;
== Variable inspector ==&lt;br /&gt;
Variable inspection à la Matlab&lt;br /&gt;
https://github.com/jupyterlab-contrib/jupyterlab-variableInspector&lt;br /&gt;
&lt;br /&gt;
== Jupyter server proxy ==&lt;br /&gt;
&lt;br /&gt;
This extension is installed in PIC's jupyter environment. It makes network/web services running on the same host as the jupyterlab server accessible from outside through the &amp;quot;https://jupyter.pic.es/user/{username}/proxy/{port}&amp;quot; URL.&lt;br /&gt;
&lt;br /&gt;
Full documentation here: https://jupyter-server-proxy.readthedocs.io/en/latest/index.html&lt;br /&gt;
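&lt;br /&gt;
For illustration, any small web service bound to a local port inside your session is the kind of thing the proxy exposes. A self-contained stdlib sketch (the handler and message are invented for the demo):&lt;br /&gt;

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer


class Hello(BaseHTTPRequestHandler):
    """Toy service; inside a PIC session it would be reachable at
    https://jupyter.pic.es/user/{username}/proxy/{port}/ per the docs above."""

    def do_GET(self):
        body = b"hello from the notebook host"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass


def serve_once():
    """Start the server on a free port, fetch one response locally, shut down."""
    server = HTTPServer(("127.0.0.1", 0), Hello)
    port = server.server_address[1]
    thread = threading.Thread(target=server.serve_forever, daemon=True)
    thread.start()
    try:
        with urllib.request.urlopen(f"http://127.0.0.1:{port}/") as resp:
            return resp.read()
    finally:
        server.shutdown()
```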
&lt;br /&gt;
= Code samples =&lt;br /&gt;
&lt;br /&gt;
A repository with sample code can be found here: https://gitlab.pic.es/services/code-samples/&lt;br /&gt;
&lt;br /&gt;
= Running notebooks through HTCondor =&lt;br /&gt;
After developing a notebook, you might want to run it with different configurations. The&lt;br /&gt;
following documentation explains  &lt;br /&gt;
&lt;br /&gt;
[[notebook_htcondor|how to run a notebook through HTCondor.]]&lt;br /&gt;
&lt;br /&gt;
= Troubleshooting =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Logs ==&lt;br /&gt;
&lt;br /&gt;
The log files for the jupyterlab server are stored in &amp;quot;~/.jupyter&amp;quot;. They are created once the jupyterlab server job has finished.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Clean workspaces ==&lt;br /&gt;
&lt;br /&gt;
Jupyterlab stores the workspace status in the &amp;quot;~/.jupyter/lab/workspaces&amp;quot; folder. If you want to start with a fresh (empty) workspace, delete all the content of this folder before launching the notebook.&lt;br /&gt;
&lt;br /&gt;
    cd ~/.jupyter/lab/workspaces&lt;br /&gt;
    rm *&lt;br /&gt;
&lt;br /&gt;
= Known errors =&lt;br /&gt;
&lt;br /&gt;
== Error 500: Internal Server Error ==&lt;br /&gt;
&lt;br /&gt;
This is a generic error meaning that the jupyterlab server failed. This can happen for different reasons:&lt;br /&gt;
&lt;br /&gt;
* Your HOME folder is full. Log in to &amp;quot;ui.pic.es&amp;quot; and run &amp;quot;quota&amp;quot; to check the usage vs quota. If it is full you'll have to free up space.&lt;br /&gt;
&lt;br /&gt;
== 504 Gateway timeout ==&lt;br /&gt;
&lt;br /&gt;
The notebook job is running in HTCondor but the user cannot access the notebook server. Ultimately a 504 error is received.&lt;br /&gt;
&lt;br /&gt;
This is probably because there is some error when starting the jupyterlab server. First of all, shut down the notebook server and [[#Logs|check the logs]] to better identify the problem. If you don't see the source of the error, try to [[#Clean_workspaces|clean the workspaces]] and launch a notebook again.&lt;br /&gt;
&lt;br /&gt;
== Install ROOT ==&lt;br /&gt;
&lt;br /&gt;
Environment creation and kernel installation&lt;br /&gt;
&lt;br /&gt;
    $ micromamba create -p /data/pic/scratch/torradeflot/envs/mcdata root ipykernel&lt;br /&gt;
    $ micromamba activate mcdata&lt;br /&gt;
    $ python -m ipykernel install --user --name mcdata&lt;br /&gt;
&lt;br /&gt;
After doing this ROOT can be imported from a python shell, but it does not work from a notebook.&lt;br /&gt;
&lt;br /&gt;
* ROOT uses JIT compilation with Cling: https://root.cern/cling/&lt;br /&gt;
* conda provides its own set of compilation tools: gcc, g++, gfortran&lt;br /&gt;
* Some environment variables are necessary to ensure that these two pieces work well together:&lt;br /&gt;
** PATH: to be able to find the compiler tools&lt;br /&gt;
** CONDA_BUILD_SYSROOT: needed to configure the compiler call&lt;br /&gt;
&lt;br /&gt;
These environment variables are not propagated to the notebook, because they are set by .bashrc, conda activate, etc., so we need to set them explicitly in the notebook.&lt;br /&gt;
&lt;br /&gt;
     import os&lt;br /&gt;
     import sys&lt;br /&gt;
     from pathlib import Path&lt;br /&gt;
     bin_dir = Path(sys.executable).parent&lt;br /&gt;
     os.environ['PATH'] = f&amp;quot;{bin_dir}:{os.environ['PATH']}&amp;quot;&lt;br /&gt;
     os.environ['CONDA_BUILD_SYSROOT'] = str(bin_dir.parent / 'x86_64-conda-linux-gnu/sysroot')&lt;br /&gt;
&lt;br /&gt;
== Loading libraries is very slow ==&lt;br /&gt;
Conda environments stored on hard-drive based storage can be extremely slow to load. If you encounter this problem, please ask &lt;br /&gt;
your support contact for an &amp;quot;SSD scratch&amp;quot; location to store environments. Currently this service is being tested and&lt;br /&gt;
deployed as needed.&lt;br /&gt;
&lt;br /&gt;
== Spawn failed: The 'ip' trait of a PICCondorSpawner instance expected a unicode string, not the NoneType None ==&lt;br /&gt;
&lt;br /&gt;
JupyterHub could not extract the host name from HTCondor's stdout, because it did not match the expected regular expression.&lt;br /&gt;
&lt;br /&gt;
This error happens randomly from time to time; it does not imply any major problem in any of the services.&lt;br /&gt;
&lt;br /&gt;
Try to request a new notebook server.&lt;br /&gt;
&lt;br /&gt;
== 403 : Forbidden. XSRF cookie does not match POST argument ==&lt;br /&gt;
&lt;br /&gt;
The value of the &amp;quot;_xsrf&amp;quot; cookie sent by the browser does not match the expected value. This can happen for many reasons: temporary high load on the server, race conditions, temporary network instability, many open tabs in the browser, etc.&lt;br /&gt;
&lt;br /&gt;
In general it can be solved by closing all tabs pointing to &amp;quot;jupyter.pic.es&amp;quot;, clearing the cookies and connecting again to &amp;quot;jupyter.pic.es&amp;quot;&lt;/div&gt;</summary>
		<author><name>Eriksen</name></author>
	</entry>
	<entry>
		<id>https://pwiki.pic.es/index.php?title=Faq&amp;diff=1290</id>
		<title>Faq</title>
		<link rel="alternate" type="text/html" href="https://pwiki.pic.es/index.php?title=Faq&amp;diff=1290"/>
		<updated>2025-12-20T13:44:18Z</updated>

		<summary type="html">&lt;p&gt;Eriksen: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== How do I reset my password? ==&lt;br /&gt;
You can reset your password using the following link:&lt;br /&gt;
https://www.pic.es/user/auth/forgotpw&lt;br /&gt;
&lt;br /&gt;
== Can an undergraduate student in my group have an account? ==&lt;br /&gt;
Yes. Undergraduate students can have PIC accounts without any problem.&lt;/div&gt;</summary>
		<author><name>Eriksen</name></author>
	</entry>
	<entry>
		<id>https://pwiki.pic.es/index.php?title=PIC_description&amp;diff=1289</id>
		<title>PIC description</title>
		<link rel="alternate" type="text/html" href="https://pwiki.pic.es/index.php?title=PIC_description&amp;diff=1289"/>
		<updated>2025-12-20T13:40:44Z</updated>

		<summary type="html">&lt;p&gt;Eriksen: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
&lt;br /&gt;
The '''Port d’Informació Científica''' ('''PIC''') is a scientific-technological data centre and research infrastructure located on the campus of the Universitat Autònoma de Barcelona (UAB), in Cerdanyola del Vallès, Spain. It is operated through a collaboration agreement between the Institut de Física d’Altes Energies (IFAE) and the Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas (CIEMAT). PIC specialises in the storage, processing and analysis of large scientific datasets and provides advanced computing and data services to the national and international research community.&lt;br /&gt;
&lt;br /&gt;
PIC is part of the Spanish Supercomputing Network (RES), a distributed Singular Scientific and Technical Infrastructure (ICTS), and offers high-performance computing resources, data management services, and support for multidisciplinary scientific projects.&lt;br /&gt;
&lt;br /&gt;
PIC plays a key role in data-intensive scientific endeavours including:&lt;br /&gt;
* Serving as a Tier-1 data centre for the Worldwide LHC Computing Grid (WLCG), supporting data processing for the ATLAS, CMS and LHCb experiments at CERN.&lt;br /&gt;
* Hosting the main data centres for the MAGIC gamma-ray telescopes and the PAU Survey instruments.&lt;br /&gt;
* Operating as one of the data centres for the European Space Agency’s '''Euclid''' mission and as a scientific data centre in the mission’s Science Ground Segment.&lt;br /&gt;
* Providing tools and platforms for data exploration and analysis such as CosmoHub, which enables interactive access to large cosmological and astrophysical datasets.&lt;br /&gt;
&lt;br /&gt;
With a team of scientists, engineers and computing experts, PIC’s mission is to accelerate research by enabling effective data-oriented workflows, from interactive development to large-scale production computing and analysis.&lt;/div&gt;</summary>
		<author><name>Eriksen</name></author>
	</entry>
	<entry>
		<id>https://pwiki.pic.es/index.php?title=PIC_description&amp;diff=1288</id>
		<title>PIC description</title>
		<link rel="alternate" type="text/html" href="https://pwiki.pic.es/index.php?title=PIC_description&amp;diff=1288"/>
		<updated>2025-12-20T13:40:04Z</updated>

		<summary type="html">&lt;p&gt;Eriksen: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
&lt;br /&gt;
The '''Port d’Informació Científica''' ('''PIC''') is a scientific-technological data centre and research infrastructure located on the campus of the Universitat Autònoma de Barcelona (UAB), in Cerdanyola del Vallès, Spain. It is operated through a collaboration agreement between the Institut de Física d’Altes Energies (IFAE) and the Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas (CIEMAT). PIC specialises in the storage, processing and analysis of large scientific datasets and provides advanced computing and data services to the national and international research community.&lt;br /&gt;
&lt;br /&gt;
PIC is part of the Spanish Supercomputing Network (RES), a distributed Singular Scientific and Technical Infrastructure (ICTS), and offers high-performance computing resources, data management services, and support for multidisciplinary scientific projects.&lt;br /&gt;
&lt;br /&gt;
PIC plays a key role in data-intensive scientific endeavours including:&lt;br /&gt;
* Serving as a Tier-1 data centre for the Worldwide LHC Computing Grid (WLCG), supporting data processing for the ATLAS, CMS and LHCb experiments at CERN.&lt;br /&gt;
* Hosting the main data centres for the MAGIC gamma-ray telescopes and the PAU Survey instruments.&lt;br /&gt;
* Operating as one of the data centres for the European Space Agency’s '''Euclid''' mission and as a scientific data centre in the mission’s Science Ground Segment.&lt;br /&gt;
* Providing tools and platforms for data exploration and analysis such as CosmoHub, which enables interactive access to large cosmological and astrophysical datasets.&lt;br /&gt;
&lt;br /&gt;
With a team of scientists, engineers and computing experts, PIC’s mission is to accelerate research by enabling effective data-oriented workflows, from interactive development to large-scale production computing and analysis.&lt;/div&gt;</summary>
		<author><name>Eriksen</name></author>
	</entry>
	<entry>
		<id>https://pwiki.pic.es/index.php?title=PIC_description&amp;diff=1287</id>
		<title>PIC description</title>
		<link rel="alternate" type="text/html" href="https://pwiki.pic.es/index.php?title=PIC_description&amp;diff=1287"/>
		<updated>2025-12-20T13:39:24Z</updated>

		<summary type="html">&lt;p&gt;Eriksen: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
&lt;br /&gt;
The '''Port d’Informació Científica''' ('''PIC''') is a scientific-technological data centre and research infrastructure located on the campus of the Universitat Autònoma de Barcelona (UAB), in Cerdanyola del Vallès, Spain. It is operated through a collaboration agreement between the [[Institut de Física d’Altes Energies]] (IFAE) and the [[Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas]] (CIEMAT). PIC specialises in the storage, processing and analysis of large scientific datasets and provides advanced computing and data services to the national and international research community.&lt;br /&gt;
&lt;br /&gt;
PIC is part of the [[Spanish Supercomputing Network]] (RES), a distributed Singular Scientific and Technical Infrastructure (ICTS), and offers high-performance computing resources, data management services, and support for multidisciplinary scientific projects.&lt;br /&gt;
&lt;br /&gt;
PIC plays a key role in data-intensive scientific endeavours including:&lt;br /&gt;
* Serving as a Tier-1 data centre for the Worldwide LHC Computing Grid (WLCG), supporting data processing for the ATLAS, CMS and LHCb experiments at CERN.&lt;br /&gt;
* Hosting the main data centres for the MAGIC gamma-ray telescopes and the PAU Survey instruments.&lt;br /&gt;
* Operating as one of the data centres for the European Space Agency’s '''Euclid''' mission and as a scientific data centre in the mission’s Science Ground Segment.&lt;br /&gt;
* Providing tools and platforms for data exploration and analysis such as CosmoHub, which enables interactive access to large cosmological and astrophysical datasets.&lt;br /&gt;
&lt;br /&gt;
With a team of scientists, engineers and computing experts, PIC’s mission is to accelerate research by enabling effective data-oriented workflows, from interactive development to large-scale production computing and analysis.&lt;/div&gt;</summary>
		<author><name>Eriksen</name></author>
	</entry>
	<entry>
		<id>https://pwiki.pic.es/index.php?title=Main_Page&amp;diff=1286</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://pwiki.pic.es/index.php?title=Main_Page&amp;diff=1286"/>
		<updated>2025-12-20T13:37:32Z</updated>

		<summary type="html">&lt;p&gt;Eriksen: /* Services */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Getting started ==&lt;br /&gt;
* [[PIC description|Introduction to PIC]]&lt;br /&gt;
* [[PIC account|Get a PIC account]]&lt;br /&gt;
* [[PIC_User_Manual | User manual]]&lt;br /&gt;
* [[faq| Frequently asked questions]]&lt;br /&gt;
&lt;br /&gt;
== Services ==&lt;br /&gt;
=== User interfaces ===&lt;br /&gt;
* [[Login machines]]&lt;br /&gt;
* [[JupyterHub]]&lt;br /&gt;
&lt;br /&gt;
=== Distributed computing ===&lt;br /&gt;
* [[HTCondor]]&lt;br /&gt;
* [[Dask]]&lt;br /&gt;
* Spark:&lt;br /&gt;
** [[Spark on Hadoop|on Hadoop]]&lt;br /&gt;
** [[Spark_on_farm|on HTCondor]]&lt;br /&gt;
&lt;br /&gt;
=== Storage ===&lt;br /&gt;
* [[Storage]]&lt;br /&gt;
* [[Hadoop Distributed File System (HDFS)]]&lt;br /&gt;
* [[HDFS Access via VOSpace]]&lt;br /&gt;
* [[Transferring data to/from PIC]]&lt;br /&gt;
&lt;br /&gt;
=== Other services ===&lt;br /&gt;
* [[Gitlab]]&lt;br /&gt;
* [[CosmoHub]]&lt;br /&gt;
&lt;br /&gt;
== Experiments ==&lt;br /&gt;
&lt;br /&gt;
* [[Euclid]]&lt;br /&gt;
* [[AGN ICE]]&lt;br /&gt;
* [[ICFO]]&lt;/div&gt;</summary>
		<author><name>Eriksen</name></author>
	</entry>
	<entry>
		<id>https://pwiki.pic.es/index.php?title=Login_machines&amp;diff=1285</id>
		<title>Login machines</title>
		<link rel="alternate" type="text/html" href="https://pwiki.pic.es/index.php?title=Login_machines&amp;diff=1285"/>
		<updated>2025-12-20T13:35:03Z</updated>

		<summary type="html">&lt;p&gt;Eriksen: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The general login machines are '''ui.pic.es'''.  &lt;br /&gt;
These machines provide interactive SSH access to the PIC computing infrastructure and serve as the main entry point for command-line based workflows.&lt;br /&gt;
&lt;br /&gt;
From '''ui.pic.es''' you can:&lt;br /&gt;
* Access the different PIC file systems (HOME directories, project spaces, scratch areas, CVMFS).&lt;br /&gt;
* Submit, monitor, and manage batch jobs through the HTCondor workload management system.&lt;br /&gt;
* Prepare, test, and debug job submission scripts.&lt;br /&gt;
* Inspect job outputs and logs, and manage data transfers.&lt;br /&gt;
&lt;br /&gt;
The login machines are intended for '''interactive work, lightweight testing, and job orchestration only'''.  &lt;br /&gt;
Compute-intensive tasks must be executed through HTCondor or interactive services such as Jupyter, and '''must not''' be run directly on the login nodes.&lt;/div&gt;</summary>
		<author><name>Eriksen</name></author>
	</entry>
	<entry>
		<id>https://pwiki.pic.es/index.php?title=JupyterHub&amp;diff=1284</id>
		<title>JupyterHub</title>
		<link rel="alternate" type="text/html" href="https://pwiki.pic.es/index.php?title=JupyterHub&amp;diff=1284"/>
		<updated>2025-12-20T13:31:18Z</updated>

		<summary type="html">&lt;p&gt;Eriksen: /* Jupyterlab user guide */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
= Introduction =&lt;br /&gt;
&lt;br /&gt;
PIC offers a service for running Jupyter notebooks on CPU or GPU resources. This service is primarily thought for code developing or prototyping rather than data processing. The usage is similar to running notebooks on your personal computer but offers the advantage of developing and testing your code on different hardware configurations, as well as facilitating the scalability of the code since it is being tested in the same environment in which it would run on a mass scale. &lt;br /&gt;
&lt;br /&gt;
Since the service is strictly thought for development and small scale testing tasks, a shutdown policy for the sessions has been put in place:&lt;br /&gt;
&lt;br /&gt;
# The maximum duration for a session is 48h.&lt;br /&gt;
# After an idle period of 2 hours, the session will be closed. &lt;br /&gt;
&lt;br /&gt;
In practice that  means that you should estimate the test data volume that you work with during a session to be able to be processed in less than 48 hours.&lt;br /&gt;
&lt;br /&gt;
== How to connect to the service ==&lt;br /&gt;
&lt;br /&gt;
Go to [https://jupyter.pic.es jupyter.pic.es] to reach the login screen.&lt;br /&gt;
&lt;br /&gt;
[[File:login.png|700px|Login screen]]&lt;br /&gt;
&lt;br /&gt;
Sign in with your PIC user credentials. This will take you to the following screen.&lt;br /&gt;
&lt;br /&gt;
[[File:ScreenshotJupyterSpawn.png|700px|current]]&lt;br /&gt;
&lt;br /&gt;
Here you can choose the hardware configuration for your Jupyter session. Also, you have to choose the experiment (project) you are working on during the Jupyter session. After choosing a configuration and pressing start, the next screen will show you the progress of the initialisation process. Keep in mind that a job containing your Jupyter session is actually sent to the HTCondor queuing system and waits for available resources before being started. This usually takes less than a minute but can take up to a few minutes depending on our resource usage.&lt;br /&gt;
&lt;br /&gt;
[[File:screen02.png|900px]]&lt;br /&gt;
&lt;br /&gt;
In the next screen you can choose the tool that you want to use for your work: a Python notebook, a Python console or a plain bash terminal.&lt;br /&gt;
For the Python environment (either notebook or console) you have two default options:&lt;br /&gt;
* the ipykernel version of Python 3&lt;br /&gt;
* the XPython version of Python 3.9, which allows you to use the integrated debugging module.&lt;br /&gt;
&lt;br /&gt;
== Virtual desktop ==&lt;br /&gt;
You will also see an icon with a &amp;quot;D&amp;quot; (desktop), which starts a VNC session that allows the use of programs with graphical user interfaces.&lt;br /&gt;
&lt;br /&gt;
More recently, you can also find the icon of Visual Studio, an integrated development environment.&lt;br /&gt;
&lt;br /&gt;
[[File:ScreenshotJupyterlab20231103.png|700px]]&lt;br /&gt;
&lt;br /&gt;
Your Python environments should appear under the Notebook and Console headers. In a later section we will show you how to create a new environment and how to remove an existing one.&lt;br /&gt;
&lt;br /&gt;
== Terminate your session and logout ==&lt;br /&gt;
&lt;br /&gt;
It is important that you terminate your session before you log out. To do so, go to the top page menu &amp;quot;'''File''' -&amp;gt; '''Hub Control Panel'''&amp;quot; and you will see the following screen.&lt;br /&gt;
&lt;br /&gt;
[[File:screen04.png|600px]]&lt;br /&gt;
&lt;br /&gt;
Here, click on the '''Stop My Server''' button. After that, you can log out by clicking the '''Logout''' button in the upper right corner.&lt;br /&gt;
&lt;br /&gt;
=  Python virtual environments =&lt;br /&gt;
&lt;br /&gt;
This section covers the use of Python virtual environments with Jupyter.&lt;br /&gt;
&lt;br /&gt;
== Prebuilt environments == &lt;br /&gt;
&lt;br /&gt;
PIC's JupyterHub service comes with a collection of prebuilt environments located at '''/data/jupyter/software/envs'''.&lt;br /&gt;
&lt;br /&gt;
The master environment, located at '''/data/jupyter/software/envs/master''', is the one used to start the JupyterLab service and is the default for new notebooks.&lt;br /&gt;
&lt;br /&gt;
This is a non-exhaustive list of the packages included:&lt;br /&gt;
  - astropy=6.1.0&lt;br /&gt;
  - bokeh=3.4.1&lt;br /&gt;
  - dash=2.17.0&lt;br /&gt;
  - dask=2024.5.1&lt;br /&gt;
  - findspark=2.0.1&lt;br /&gt;
  - matplotlib=3.8.4&lt;br /&gt;
  - numpy=1.26.4&lt;br /&gt;
  - pandas=2.2.2&lt;br /&gt;
  - pillow=10.3.0&lt;br /&gt;
  - plotly=5.22.0&lt;br /&gt;
  - pyhive=0.7.0&lt;br /&gt;
  - python=3.12&lt;br /&gt;
  - pywavelets=1.4.1&lt;br /&gt;
  - scikit-image=0.22.0&lt;br /&gt;
  - scikit-learn=1.5.0&lt;br /&gt;
  - scipy=1.13.1&lt;br /&gt;
  - seaborn=0.13.2&lt;br /&gt;
  - statsmodels=0.14.2&lt;br /&gt;
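From inside a notebook you can check which environment the active kernel actually uses, which is handy when several kernels are linked. A small illustrative snippet (not PIC-specific):

```python
import sys
from pathlib import Path

def kernel_env_prefix():
    """Return the root prefix of the environment whose Python runs this kernel."""
    # sys.executable points at the interpreter of the active kernel,
    # e.g. /data/jupyter/software/envs/master/bin/python for the default
    # kernel, so two levels up is the environment prefix.
    return Path(sys.executable).resolve().parent.parent

print(kernel_env_prefix())
```

If the printed prefix is not the environment you expected, the notebook is running on a different kernel than intended.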
&lt;br /&gt;
== Initialize conda (we highly recommend the use of mamba/micromamba) ==&lt;br /&gt;
&lt;br /&gt;
Before using conda/mamba in your bash session, you have to initialize it.&lt;br /&gt;
* For access to an available conda/mamba installation, please get in contact with your project liaison at PIC, who will give you the actual value for the '''/path/to/conda/mamba''' placeholder.&lt;br /&gt;
* If you want to use your own conda/mamba/micromamba installation, there are two recommended options:&lt;br /&gt;
** '''miniforge''': a distribution with conda and mamba executables in a minimal base environment, instructions [https://github.com/conda-forge/miniforge here]&lt;br /&gt;
** '''micromamba''': a self-contained executable (micromamba) with no base environment, instructions [https://mamba.readthedocs.io/en/latest/installation/micromamba-installation.html here]&lt;br /&gt;
&lt;br /&gt;
Log onto Jupyter and start a session. On the homepage of your Jupyter session, click on the terminal button on the session dashboard on the right to open a bash terminal.&lt;br /&gt;
&lt;br /&gt;
In order to use conda/mamba/micromamba you need to initialize the shell. This initialization can be made persistent, which will make some changes to your '''~/.bashrc''' file, or you can do it every time you want to use it.&lt;br /&gt;
&lt;br /&gt;
Run the following command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
eval &amp;quot;$(/data/astro/software/miniforge3/bin/conda shell.bash hook)&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, if you are using miniforge and you want to persist the initialization:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/data/astro/software/miniforge3/bin/mamba init&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This changes the .bashrc file in your home directory so that the base environment is activated on login.&lt;br /&gt;
To avoid activating the base environment every time you log on to a node, run:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ conda config --set auto_activate_base false&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Link an existing environment to Jupyter ==&lt;br /&gt;
&lt;br /&gt;
You can find instructions on how to create your own environments [[#Create_virtual_environments_with_venv_or_conda | here]].&lt;br /&gt;
&lt;br /&gt;
Log into Jupyter and start a session. From the session dashboard, choose the bash terminal.&lt;br /&gt;
&lt;br /&gt;
Inside the terminal, activate your environment.&lt;br /&gt;
&lt;br /&gt;
For '''conda/mamba''' environments:&lt;br /&gt;
* if you created the environment without a prefix:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba activate environment&lt;br /&gt;
(...) [neissner@td110 ~]$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The parentheses (...) in front of your bash prompt show the name of your environment. &lt;br /&gt;
* if you created the environment with a prefix:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba activate /path/to/environment&lt;br /&gt;
(...) [neissner@td110 ~]$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The parentheses (...) in front of your bash prompt show the absolute path of your environment. &lt;br /&gt;
&lt;br /&gt;
For '''venv''' environments:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ source /path/to/environment/bin/activate&lt;br /&gt;
(...) [neissner@td110 ~]$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Link the environment to a Jupyter kernel. For both '''conda/mamba''' and '''venv''':&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(...) [neissner@td110 ~]$ python -m  ipykernel install --user --name=whatever_kernel_name&lt;br /&gt;
Installed kernelspec whatever_kernel_name in &lt;br /&gt;
                         /nfs/pic.es/user/n/neissner/.local/share/jupyter/kernels/whatever_kernel_name&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you don't have the '''ipykernel''' module installed in your environment you may receive an error message like the one below when trying to run the previous command.&lt;br /&gt;
&amp;lt;pre&amp;gt;No module named ipykernel&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If this is the case, you need to install it by running: '''pip install ipykernel'''&lt;br /&gt;
&lt;br /&gt;
Deactivate your environment. &lt;br /&gt;
&lt;br /&gt;
For conda:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(...) [neissner@td110 ~]$ mamba deactivate&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For venv:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(...) [neissner@td110 ~]$ deactivate&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you can exit the terminal. After refreshing the Jupyter page, your whatever_kernel_name appears in the dashboard. In this example, '''test''' has been used for whatever_kernel_name.&lt;br /&gt;
&lt;br /&gt;
[[File:screen05.png|700px]]&lt;br /&gt;
&lt;br /&gt;
== Unlink an environment from Jupyter ==&lt;br /&gt;
Log onto Jupyter, start a session and from the session dashboard choose the bash terminal. To remove your environment/kernel from Jupyter run:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ jupyter kernelspec uninstall whatever_kernel_name&lt;br /&gt;
Kernel specs to remove:&lt;br /&gt;
  whatever_kernel_name     /nfs/pic.es/user/n/neissner/.local/share/jupyter/kernels/whatever_kernel_name&lt;br /&gt;
Remove 1 kernel specs [y/N]: y&lt;br /&gt;
[RemoveKernelSpec] Removed /nfs/pic.es/user/n/neissner/.local/share/jupyter/kernels/whatever_kernel_name&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Keep in mind that, although not available in Jupyter anymore, the environment still exists. Whenever you need it, you can link it again.&lt;br /&gt;
&lt;br /&gt;
== Create virtual environments with venv or conda ==&lt;br /&gt;
&lt;br /&gt;
Before creating a new environment, please get in contact with your project liaison at PIC, as there may already be a suitable environment for your needs in place.&lt;br /&gt;
&lt;br /&gt;
If none of the existing environments suits your needs, you can create a new environment.&lt;br /&gt;
First, create a directory in a suitable place to store the environment. For single-user environments, place them in your home directory under ~/env. For environments that will be shared with other project users, contact your project liaison and ask for a path in a shared storage volume that is visible to all of them.&lt;br /&gt;
&lt;br /&gt;
Once you have the location (e.g. /path/to/env), create the environment with the following commands:&lt;br /&gt;
&lt;br /&gt;
For '''venv''' environments '''(recommended)'''&lt;br /&gt;
&lt;br /&gt;
For example, to install your_env at /path/to/env/your_env:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ cd /path/to/env&lt;br /&gt;
[neissner@td110 ~]$ python3 -m venv your_env&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Now you should be able to activate your environment and install additional modules&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ cd /path/to/env&lt;br /&gt;
[neissner@td110 ~]$ source your_env/bin/activate&lt;br /&gt;
(...)[neissner@td110 ~]$ pip install additional_module1 additional_module2 ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For '''conda/mamba''' environments&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba create --prefix /path/to/env/your_env module1 module2 ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The list of modules (module1, module2, ...) is optional. For instance, for a python3 environment with scipy you would specify: ''python=3 scipy'' &lt;br /&gt;
&lt;br /&gt;
Now you should be able to activate your environment and install additional modules&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba activate /path/to/env/your_env&lt;br /&gt;
(...)[neissner@td110 ~]$ mamba install additional_module1 additional_module2 ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You can use pip install inside a mamba environment; however, resolving dependencies might require installing additional packages manually.&lt;br /&gt;
&lt;br /&gt;
== Conda / mamba configuration ==&lt;br /&gt;
&lt;br /&gt;
The behaviour of conda/mamba can be configured through the &amp;quot;$HOME/.condarc&amp;quot; file, described [https://docs.conda.io/projects/conda/en/latest/configuration.html here]. Some interesting parameters:&lt;br /&gt;
&lt;br /&gt;
* envs_dirs: the list of directories to search for named environments, e.g. the different locations where you have created environments&lt;br /&gt;
&lt;br /&gt;
  envs_dirs:&lt;br /&gt;
    - /data/pic/scratch/torradeflot/envs&lt;br /&gt;
    - /data/astro/scratch/torradeflot/envs&lt;br /&gt;
    - /data/aai/scratch/torradeflot/envs&lt;br /&gt;
&lt;br /&gt;
* pkgs_dirs: the folders where conda packages are stored&lt;br /&gt;
&lt;br /&gt;
  pkgs_dirs:&lt;br /&gt;
  - /data/aai/scratch_ssd/torradeflot/pkgs&lt;br /&gt;
  - /data/aai/scratch/torradeflot/pkgs&lt;br /&gt;
  - /data/pic/scratch/torradeflot/pkgs&lt;br /&gt;
&lt;br /&gt;
If `pkgs_dirs` and `envs_dirs` are on the same storage, conda will use hard links, thus saving disk space.&lt;br /&gt;
&lt;br /&gt;
= Proper usage of X509 based proxies =&lt;br /&gt;
&lt;br /&gt;
We found recently that the usage of proxies within a Jupyter session might cause problems, because the environment changes certain standard locations such as '''/tmp'''.&lt;br /&gt;
&lt;br /&gt;
For correct functioning, please create the proxy in the following way (example for Virgo):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ /bin/voms-proxy-init --voms virgo:/virgo/ligo --out ./x509up_u$(id -u)&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ export X509_USER_PROXY=./x509up_u$(id -u)&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ ls /cvmfs/ligo.osgstorage.org&lt;br /&gt;
ls: cannot access /cvmfs/ligo.osgstorage.org: Permission denied&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here the proxy cannot be properly located. Therefore we have to put the complete path into the variable:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ pwd&lt;br /&gt;
/nfs/pic.es/user/&amp;lt;letter&amp;gt;/&amp;lt;user&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ export X509_USER_PROXY=/nfs/pic.es/user/&amp;lt;letter&amp;gt;/&amp;lt;user&amp;gt;/x509up_u$(id -u)&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ ls /cvmfs/ligo.osgstorage.org&lt;br /&gt;
frames  powerflux  pycbc  test_access&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
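The root cause above is a relative path in X509_USER_PROXY. If you set the variable from Python, resolving the path first avoids the problem entirely. A sketch (the `set_proxy_env` helper is hypothetical):

```python
import os
from pathlib import Path

def set_proxy_env(proxy_path):
    """Store an absolute path in X509_USER_PROXY so that tools can find
    the proxy regardless of the current working directory."""
    abs_path = Path(proxy_path).expanduser().resolve()
    os.environ["X509_USER_PROXY"] = str(abs_path)
    return abs_path

# e.g. set_proxy_env(f"~/x509up_u{os.getuid()}")
```

The same rule applies in the shell: always export the full path, never a path starting with "./".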
&lt;br /&gt;
= Software of particular interest =&lt;br /&gt;
== SageMath ==&lt;br /&gt;
&lt;br /&gt;
[https://www.sagemath.org/ SageMath] is particularly interesting for Cosmology because it allows symbolic calculations, e.g. deriving the equations of motion for the scale factor starting from a customised space-time metric. &lt;br /&gt;
&lt;br /&gt;
=== Standard cosmology examples ===&lt;br /&gt;
&lt;br /&gt;
* The Friedmann equations for the FLRW solution of the Einstein equations.&lt;br /&gt;
You can find the corresponding Notebook in any PIC terminal at '''/data/astro/software/notebooks/FLRW_cosmology.ipynb'''&lt;br /&gt;
&lt;br /&gt;
* The notebook you can find at '''/data/astro/software/notebooks/FLRW_cosmology_solutions.ipynb''' uses known analytical solutions of the FLRW cosmology and produces this image for the evolution of the scale factor:&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot_Sage05.png|300px]]&lt;br /&gt;
&lt;br /&gt;
* The notebook you can find at '''/data/astro/software/notebooks/Interior_Schwarzschild.ipynb''' shows the formalism for the interior Schwarzschild metric and displays the solutions for density and pressure of a static celestial object that is sufficiently larger than its Schwarzschild radius. The pressure for an object with constant density is shown in the image:&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot_Sage06.png|300px]]&lt;br /&gt;
&lt;br /&gt;
=== Enabling SageMath environment in Jupyter ===&lt;br /&gt;
&lt;br /&gt;
If you have never initialized mamba, run:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ /data/astro/software/centos7/conda/mambaforge_4.14.0/bin/mamba init&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ conda config --set auto_activate_base false&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After that you can enable SageMath for its use in a Jupyter notebook session: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~] mamba activate /data/astro/software/envs/sage&lt;br /&gt;
(/data/astro/software/envs/sage) [&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ python -m  ipykernel install --user --name=sage&lt;br /&gt;
....&lt;br /&gt;
(/data/astro/software/envs/sage) [&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ mamba deactivate&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This creates a file in your home, '''~/.local/share/jupyter/kernels/sage/kernel.json''',&lt;br /&gt;
which has to be modified to look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
 &amp;quot;argv&amp;quot;: [&lt;br /&gt;
  &amp;quot;/data/astro/software/envs/sage/bin/sage&amp;quot;,&lt;br /&gt;
  &amp;quot;--python&amp;quot;,&lt;br /&gt;
  &amp;quot;-m&amp;quot;,&lt;br /&gt;
  &amp;quot;sage.repl.ipython_kernel&amp;quot;,&lt;br /&gt;
  &amp;quot;-f&amp;quot;,&lt;br /&gt;
  &amp;quot;{connection_file}&amp;quot;&lt;br /&gt;
 ],&lt;br /&gt;
 &amp;quot;display_name&amp;quot;: &amp;quot;sage&amp;quot;,&lt;br /&gt;
 &amp;quot;language&amp;quot;: &amp;quot;sage&amp;quot;,&lt;br /&gt;
 &amp;quot;metadata&amp;quot;: {&lt;br /&gt;
  &amp;quot;debugger&amp;quot;: true&lt;br /&gt;
 }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Next time you go to your Jupyter dashboard you will find the sage environment listed there.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Dask =&lt;br /&gt;
Dask supports parallel computations in Python. The PIC Jupyterlab has an extension for launching&lt;br /&gt;
your own Dask clusters. For more information, see [[Dask|Dask documentation]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Using a singularity image as a jupyter kernel =&lt;br /&gt;
&lt;br /&gt;
Sometimes the software stack of a project is provided in the shape of a singularity image; it is then convenient to use this image as a kernel for the notebooks in jupyter.pic.es.&lt;br /&gt;
&lt;br /&gt;
The singularity image to be used as a kernel needs to fulfill some requirements, which depend on the programming language.&lt;br /&gt;
&lt;br /&gt;
== Python jupyter kernel in a singularity image ==&lt;br /&gt;
&lt;br /&gt;
The singularity image needs to have '''python''' and the '''ipykernel''' module installed. &lt;br /&gt;
&lt;br /&gt;
* Create the folder that will host the kernel definition&lt;br /&gt;
&lt;br /&gt;
  mkdir -p $HOME/.local/share/jupyter/kernels/singularity&lt;br /&gt;
&lt;br /&gt;
* Create the '''kernel.json''' file inside it with the following content:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 {&lt;br /&gt;
   &amp;quot;argv&amp;quot;: [&lt;br /&gt;
     &amp;quot;singularity&amp;quot;,&lt;br /&gt;
     &amp;quot;exec&amp;quot;,&lt;br /&gt;
     &amp;quot;--cleanenv&amp;quot;,&lt;br /&gt;
     &amp;quot;/path/to/the/singularity/image.sif&amp;quot;,&lt;br /&gt;
     &amp;quot;python&amp;quot;,&lt;br /&gt;
     &amp;quot;-m&amp;quot;,&lt;br /&gt;
     &amp;quot;ipykernel&amp;quot;,&lt;br /&gt;
     &amp;quot;-f&amp;quot;,&lt;br /&gt;
     &amp;quot;{connection_file}&amp;quot;&lt;br /&gt;
   ],&lt;br /&gt;
   &amp;quot;language&amp;quot;: &amp;quot;python&amp;quot;,&lt;br /&gt;
   &amp;quot;display_name&amp;quot;: &amp;quot;singularity-kernel&amp;quot;&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Refresh or start the jupyterlab interface and the singularity kernel should appear in the launcher tab.&lt;br /&gt;
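If you prefer, the folder and '''kernel.json''' above can also be generated with a short Python sketch; the image path and kernel name below are placeholders to adapt:&lt;br /&gt;

```python
import json
from pathlib import Path

def write_singularity_kernel(kernels_base, image_path, name='singularity'):
    """Create kernels_base/name/kernel.json that wraps ipykernel in 'singularity exec'."""
    spec = {
        'argv': ['singularity', 'exec', '--cleanenv', str(image_path),
                 'python', '-m', 'ipykernel', '-f', '{connection_file}'],
        'language': 'python',
        'display_name': f'{name}-kernel',
    }
    kernel_dir = Path(kernels_base) / name
    kernel_dir.mkdir(parents=True, exist_ok=True)
    spec_file = kernel_dir / 'kernel.json'
    spec_file.write_text(json.dumps(spec, indent=2))
    return spec_file

# Typical call for the layout described above (paths are placeholders):
# write_singularity_kernel(Path.home() / '.local/share/jupyter/kernels',
#                          '/path/to/the/singularity/image.sif')
```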
&lt;br /&gt;
= GPUs =&lt;br /&gt;
&lt;br /&gt;
To identify the GPUs that are assigned to your job:&lt;br /&gt;
* Check the environment variable CUDA_VISIBLE_DEVICES: in a terminal, run &amp;quot;echo $CUDA_VISIBLE_DEVICES&amp;quot;. The variable contains a comma-separated list of GPU ids, which already tells you how many GPUs are assigned to your job. If the variable does not exist, no GPUs are assigned to the job.&lt;br /&gt;
&lt;br /&gt;
* List the GPUs with nvidia-smi: in a terminal, run &amp;quot;nvidia-smi -L&amp;quot; and look for the GPUs you have been assigned. Remember their indexes (integers from 0 to 7).&lt;br /&gt;
&lt;br /&gt;
* If only a single GPU is assigned, the two steps above can be combined into the following command:&lt;br /&gt;
&lt;br /&gt;
 nvidia-smi -L | grep $CUDA_VISIBLE_DEVICES&lt;br /&gt;
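Reading the environment variable from a notebook can be sketched in Python (the value set below is illustrative; in practice it is set by HTCondor):&lt;br /&gt;

```python
import os

def assigned_gpus():
    """Return the list of GPU ids assigned to the job, from CUDA_VISIBLE_DEVICES."""
    value = os.environ.get('CUDA_VISIBLE_DEVICES', '')
    if not value:
        return []  # variable missing or empty: no GPUs assigned to this job
    return [int(gpu_id) for gpu_id in value.split(',')]

os.environ['CUDA_VISIBLE_DEVICES'] = '0,2'  # illustrative value
print(assigned_gpus())  # [0, 2]
```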
&lt;br /&gt;
[[File:check_gpu_id_highlighted.png]]&lt;br /&gt;
&lt;br /&gt;
* In the GPU dashboard the GPUs are identified by their index&lt;br /&gt;
&lt;br /&gt;
[[File:check_gpu_resources_highlighted.png]]&lt;br /&gt;
&lt;br /&gt;
= Jupyterlab user guide =&lt;br /&gt;
&lt;br /&gt;
You can find the official documentation of the currently installed version of jupyterlab (3.6) [https://jupyterlab.readthedocs.io/en/4.2.x/ here], where you will find instructions on how to &lt;br /&gt;
* [https://jupyterlab.readthedocs.io/en/4.2.x/user/commands.html Access the command palette]&lt;br /&gt;
* [https://jupyterlab.readthedocs.io/en/4.2.x/user/toc.html Build a Table Of Contents]&lt;br /&gt;
* [https://jupyterlab.readthedocs.io/en/4.2.x/user/debugger.html Debug your code]&lt;br /&gt;
&lt;br /&gt;
A set of non-official jupyterlab extensions are installed to provide additional functionalities&lt;br /&gt;
&lt;br /&gt;
== Jupytext ==&lt;br /&gt;
Pair your notebooks with text files to enhance version tracking.&lt;br /&gt;
https://jupytext.readthedocs.io&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
&lt;br /&gt;
If you had a notebook (.ipynb file) containing only the cell below, tracked in a git repository&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%matplotlib inline&lt;br /&gt;
import numpy as np&lt;br /&gt;
import matplotlib.pyplot as plt&lt;br /&gt;
plt.imshow(np.random.random([10, 10]))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Different executions of the cell would produce different images, and the images are embedded in a pseudo-binary format inside the notebook file. In this case, doing a '''git diff''' of the .ipynb file would produce a huge output (because the image changed), even if there wasn't any change in the code. It is thus convenient to sync the notebook with a text file (e.g. a .py script) using the jupytext extension and track this one with git. The outputs, including images, as well as some additional metadata, won't be added to the synced text file. So in the case of different executions of the same notebook, the diff will always be empty.&lt;br /&gt;
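The effect can be illustrated with a minimal sketch: the .ipynb JSON stores outputs next to the code, and dropping them (which is effectively what the jupytext-paired text file does) leaves only diff-friendly source. The notebook dict below is a hypothetical minimal example:&lt;br /&gt;

```python
import json

# Hypothetical minimal notebook: one code cell with an embedded base64 image output.
nb = {'cells': [{'cell_type': 'code',
                 'source': ['plt.imshow(np.random.random([10, 10]))'],
                 'outputs': [{'data': {'image/png': 'iVBORw0KGgo...'}}]}]}

def strip_outputs(notebook):
    """Return a copy with all cell outputs removed, like a jupytext-paired text file."""
    clean = json.loads(json.dumps(notebook))  # deep copy via a JSON round-trip
    for cell in clean['cells']:
        cell.pop('outputs', None)
    return clean

# Two runs producing different images now compare equal once outputs are stripped.
```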
&lt;br /&gt;
== Git ==&lt;br /&gt;
Sidebar GUI for git repo management.&lt;br /&gt;
https://github.com/jupyterlab/jupyterlab-git&lt;br /&gt;
&lt;br /&gt;
== Variable inspector ==&lt;br /&gt;
Variable inspection à la Matlab&lt;br /&gt;
https://github.com/jupyterlab-contrib/jupyterlab-variableInspector&lt;br /&gt;
&lt;br /&gt;
== Jupyter server proxy ==&lt;br /&gt;
&lt;br /&gt;
This extension is installed in PIC's jupyter environment. It allows you to access network/web services running on the same host as the jupyterlab server from outside, through the &amp;quot;https://jupyter.pic.es/user/{username}/proxy/{port}&amp;quot; URL.&lt;br /&gt;
&lt;br /&gt;
Full documentation here: https://jupyter-server-proxy.readthedocs.io/en/latest/index.html&lt;br /&gt;
&lt;br /&gt;
= Code samples =&lt;br /&gt;
&lt;br /&gt;
A repository with sample code can be found here: https://gitlab.pic.es/services/code-samples/&lt;br /&gt;
&lt;br /&gt;
= Running notebooks through HTCondor =&lt;br /&gt;
After developing a notebook, you might want to run it with different configurations. The&lt;br /&gt;
following documentation explains  &lt;br /&gt;
&lt;br /&gt;
[[notebook_htcondor|how to run a notebook through HTCondor.]]&lt;br /&gt;
&lt;br /&gt;
= Troubleshooting =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Logs ==&lt;br /&gt;
&lt;br /&gt;
The log files for the jupyterlab server are stored in &amp;quot;~/.jupyter&amp;quot;. They are created once the jupyterlab server job has finished.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Clean workspaces ==&lt;br /&gt;
&lt;br /&gt;
Jupyterlab stores the workspace status in the &amp;quot;~/.jupyter/lab/workspaces&amp;quot; folder. If you want to start with a fresh (empty) workspace, delete all the content of this folder before launching the notebook.&lt;br /&gt;
&lt;br /&gt;
    cd ~/.jupyter/lab/workspaces&lt;br /&gt;
    rm *&lt;br /&gt;
&lt;br /&gt;
= Known errors =&lt;br /&gt;
&lt;br /&gt;
== Error 500: Internal Server Error ==&lt;br /&gt;
&lt;br /&gt;
This is a generic error. It means that the jupyterlab server failed, which could be for different reasons:&lt;br /&gt;
&lt;br /&gt;
* Your HOME folder is full. Log in to &amp;quot;ui.pic.es&amp;quot; and run &amp;quot;quota&amp;quot; to check your usage against your quota. If it is full, you'll have to free up space.&lt;br /&gt;
&lt;br /&gt;
== 504 Gateway timeout ==&lt;br /&gt;
&lt;br /&gt;
The notebook job is running in HTCondor but the user cannot access the notebook server. Ultimately, a 504 error is received.&lt;br /&gt;
&lt;br /&gt;
This is probably because there's some error when starting the jupyterlab server. First of all, shut down the notebook server and [[#Logs|check the logs]] to better identify the problem. If you don't see the source of the error, try to [[#Clean_workspaces|clean the workspaces]] and launch a notebook again.&lt;br /&gt;
&lt;br /&gt;
== Install ROOT ==&lt;br /&gt;
&lt;br /&gt;
Environment creation and kernel installation&lt;br /&gt;
&lt;br /&gt;
    $ micromamba create -p /data/pic/scratch/torradeflot/envs/mcdata root ipykernel&lt;br /&gt;
    $ micromamba activate mcdata&lt;br /&gt;
    $ python -m ipykernel install --user --name mcdata&lt;br /&gt;
&lt;br /&gt;
After doing this ROOT can be imported from a python shell, but it does not work from a notebook.&lt;br /&gt;
&lt;br /&gt;
* ROOT uses JIT compilation with Cling: https://root.cern/cling/&lt;br /&gt;
* conda provides its own set of compilation tools: gcc, gxx, fortran&lt;br /&gt;
* Some environment variables are necessary to ensure that these two pieces work well together:&lt;br /&gt;
** PATH: to be able to find the compiler tools&lt;br /&gt;
** CONDA_BUILD_SYSROOT: needed to configure the compiler call&lt;br /&gt;
&lt;br /&gt;
These environment variables are not propagated to the notebook because they are set by .bashrc, conda activate, etc., so we need to set them explicitly in the notebook.&lt;br /&gt;
&lt;br /&gt;
     import os&lt;br /&gt;
     import sys&lt;br /&gt;
     from pathlib import Path&lt;br /&gt;
     bin_dir = Path(sys.executable).parent&lt;br /&gt;
     os.environ['PATH'] = f&amp;quot;{bin_dir}:{os.environ['PATH']}&amp;quot;&lt;br /&gt;
     os.environ['CONDA_BUILD_SYSROOT'] = str(bin_dir.parent / 'x86_64-conda-linux-gnu/sysroot')&lt;br /&gt;
&lt;br /&gt;
== Loading libraries is very slow ==&lt;br /&gt;
Conda environments stored on hard-drive-backed storage can be extremely slow to load. If you encounter this problem, please ask &lt;br /&gt;
your support contact for an &amp;quot;SSD scratch&amp;quot; location to store environments. Currently this service is being tested and&lt;br /&gt;
deployed as needed.&lt;br /&gt;
&lt;br /&gt;
== Spawn failed: The 'ip' trait of a PICCondorSpawner instance expected a unicode string, not the NoneType None ==&lt;br /&gt;
&lt;br /&gt;
Jupyterhub could not get the host name from HTCondor's stdout because it did not match the expected regular expression.&lt;br /&gt;
&lt;br /&gt;
This error happens randomly from time to time; it does not imply any major problem in any of the services.&lt;br /&gt;
&lt;br /&gt;
Try to request a new notebook server.&lt;br /&gt;
&lt;br /&gt;
== 403 : Forbidden. XSRF cookie does not match POST argument ==&lt;br /&gt;
&lt;br /&gt;
The value of the &amp;quot;_xsrf&amp;quot; cookie sent by the browser does not match the expected value. This could be for many reasons: temporary high load on the server, race conditions, temporary network instability, many open tabs in the browser, etc.&lt;br /&gt;
&lt;br /&gt;
In general it can be solved by closing all tabs pointing to &amp;quot;jupyter.pic.es&amp;quot;, cleaning the cookies and connecting back to &amp;quot;jupyter.pic.es&amp;quot;.&lt;/div&gt;</summary>
		<author><name>Eriksen</name></author>
	</entry>
	<entry>
		<id>https://pwiki.pic.es/index.php?title=JupyterHub&amp;diff=1283</id>
		<title>JupyterHub</title>
		<link rel="alternate" type="text/html" href="https://pwiki.pic.es/index.php?title=JupyterHub&amp;diff=1283"/>
		<updated>2025-12-20T13:30:35Z</updated>

		<summary type="html">&lt;p&gt;Eriksen: /* python jupyter kernel in a singularity image */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
= Introduction =&lt;br /&gt;
&lt;br /&gt;
PIC offers a service for running Jupyter notebooks on CPU or GPU resources. This service is primarily thought for code developing or prototyping rather than data processing. The usage is similar to running notebooks on your personal computer but offers the advantage of developing and testing your code on different hardware configurations, as well as facilitating the scalability of the code since it is being tested in the same environment in which it would run on a mass scale. &lt;br /&gt;
&lt;br /&gt;
Since the service is strictly thought for development and small scale testing tasks, a shutdown policy for the sessions has been put in place:&lt;br /&gt;
&lt;br /&gt;
# The maximum duration for a session is 48h.&lt;br /&gt;
# After an idle period of 2 hours, the session will be closed. &lt;br /&gt;
&lt;br /&gt;
In practice that  means that you should estimate the test data volume that you work with during a session to be able to be processed in less than 48 hours.&lt;br /&gt;
&lt;br /&gt;
== How to connect to the service ==&lt;br /&gt;
&lt;br /&gt;
Go to [https://jupyter.pic.es jupyter.pic.es] to see your login screen.&lt;br /&gt;
&lt;br /&gt;
[[File:login.png|700px|Login screen]]&lt;br /&gt;
&lt;br /&gt;
Sign in with your PIC user credentials. This will take you to the following screen.&lt;br /&gt;
&lt;br /&gt;
[[File:ScreenshotJupyterSpawn.png|700px|current]]&lt;br /&gt;
&lt;br /&gt;
Here you can choose the hardware configuration for your Jupyter session. Also, you have to choose the experiment (project) you are working on during the Jupyter session. After choosing a configuration and pressing start, the next screen will show you the progress of the initialisation process. Keep in mind that a job containing your Jupyter session is actually sent to the HTCondor queuing system and waits for available resources before being started. This usually takes less than a minute but can take up to a few minutes depending on our resource usage.&lt;br /&gt;
&lt;br /&gt;
[[File:screen02.png|900px]]&lt;br /&gt;
&lt;br /&gt;
In the next screen you can choose the tool that you want to use for your work: a Python notebook, a Python console or a plain bash terminal.&lt;br /&gt;
For the Python environment (either notebook or console) you have two default options:&lt;br /&gt;
* the ipykernel version of Python 3&lt;br /&gt;
* the XPython version of Python 3.9, which allows you to use the integrated debugging module.&lt;br /&gt;
&lt;br /&gt;
== Virtual desktop ==&lt;br /&gt;
Further, you will see an icon with a &amp;quot;D&amp;quot; (desktop); it starts a VNC session that allows the use of programs with graphical user interfaces.&lt;br /&gt;
&lt;br /&gt;
Also, you can now find the icon of Visual Studio, an integrated development environment.&lt;br /&gt;
&lt;br /&gt;
[[File:ScreenshotJupyterlab20231103.png|700px]]&lt;br /&gt;
&lt;br /&gt;
Your Python environments should appear under the Notebook and Console headers. In a later section we will show you how to create a new environment and how to remove an existing one.&lt;br /&gt;
&lt;br /&gt;
== Terminate your session and logout ==&lt;br /&gt;
&lt;br /&gt;
It is important that you terminate your session before you log out. In order to do so, go to the top page menu &amp;quot;'''File''' -&amp;gt; '''Hub Control Panel'''&amp;quot; and you will see the following screen.&lt;br /&gt;
&lt;br /&gt;
[[File:screen04.png|600px]]&lt;br /&gt;
&lt;br /&gt;
Here, click on the '''Stop My Server''' button. After that you can log out by clicking the '''Logout''' button in the right upper corner.&lt;br /&gt;
&lt;br /&gt;
=  Python virtual environments =&lt;br /&gt;
&lt;br /&gt;
This section covers the use of Python virtual environments with Jupyter.&lt;br /&gt;
&lt;br /&gt;
== Prebuilt environments == &lt;br /&gt;
&lt;br /&gt;
PIC's jupyterhub service comes with a collection of prebuilt environments located at '''/data/jupyter/software/envs'''.&lt;br /&gt;
&lt;br /&gt;
The master environment located at '''/data/jupyter/software/envs/master''' is the one used to start the jupyterlab service and the default for new notebooks.&lt;br /&gt;
&lt;br /&gt;
This is a non-extensive list of the packages included:&lt;br /&gt;
  - astropy=6.1.0&lt;br /&gt;
  - bokeh=3.4.1&lt;br /&gt;
  - dash=2.17.0&lt;br /&gt;
  - dask=2024.5.1&lt;br /&gt;
  - findspark=2.0.1&lt;br /&gt;
  - matplotlib=3.8.4&lt;br /&gt;
  - numpy=1.26.4&lt;br /&gt;
  - pandas=2.2.2&lt;br /&gt;
  - pillow=10.3.0&lt;br /&gt;
  - plotly=5.22.0&lt;br /&gt;
  - pyhive=0.7.0&lt;br /&gt;
  - python=3.12&lt;br /&gt;
  - pywavelets=1.4.1&lt;br /&gt;
  - scikit-image=0.22.0&lt;br /&gt;
  - scikit-learn=1.5.0&lt;br /&gt;
  - scipy=1.13.1&lt;br /&gt;
  - seaborn=0.13.2&lt;br /&gt;
  - statsmodels=0.14.2&lt;br /&gt;
&lt;br /&gt;
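To check from a notebook which of these packages are importable in the active kernel, you can use a small sketch like the one below (note that import names can differ from conda package names, e.g. scikit-image is imported as skimage):&lt;br /&gt;

```python
import importlib.util

def has_package(module_name):
    """True when the module can be imported in the current kernel."""
    return importlib.util.find_spec(module_name) is not None

# Check a few of the packages listed above:
for module in ['numpy', 'pandas', 'skimage']:
    print(module, has_package(module))
```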
== Initialize conda (we highly recommend the use of mamba/micromamba) ==&lt;br /&gt;
&lt;br /&gt;
Before using conda/mamba in your bash session, you have to initialize it.&lt;br /&gt;
* For access to an available conda/mamba installation, please get in contact with your project liaison at PIC, who will give you the actual value for the '''/path/to/conda/mamba''' placeholder.&lt;br /&gt;
* If you want to use your own conda/mamba/micromamba installation, there are two recommended options&lt;br /&gt;
** '''miniforge''': a distribution with conda and mamba executables in a minimal base environment, instructions [https://github.com/conda-forge/miniforge here]&lt;br /&gt;
** '''micromamba''': a self-contained executable (micromamba) with no base environment, instructions [https://mamba.readthedocs.io/en/latest/installation/micromamba-installation.html here]&lt;br /&gt;
&lt;br /&gt;
Log onto Jupyter and start a session. On the homepage of your Jupyter session, click on the terminal button on the session dashboard on the right to open a bash terminal.&lt;br /&gt;
&lt;br /&gt;
In order to use conda/mamba/micromamba you need to initialize the shell. This initialization can be persistent, which makes some changes to your '''~/.bashrc''' file, or you can do it every time you want to use it.&lt;br /&gt;
&lt;br /&gt;
Run the following command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
eval &amp;quot;$(/data/astro/software/miniforge3/bin/conda shell.bash hook)&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, if you are using miniforge and you want to persist the initialization:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/data/astro/software/miniforge3/bin/mamba init&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This actually changes the .bashrc file in your home directory in order to activate the base environment on login.&lt;br /&gt;
To avoid activating the base environment every time you log on to a node, run:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ conda config --set auto_activate_base false&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Link an existing environment to Jupyter ==&lt;br /&gt;
&lt;br /&gt;
You can find instructions on how to create your own environments, e.g. [[#Create_virtual_environments_with_venv_or_conda | here]].&lt;br /&gt;
&lt;br /&gt;
Log into Jupyter, start a session. From the session dashboard choose the bash terminal.&lt;br /&gt;
&lt;br /&gt;
Inside the terminal, activate your environment.&lt;br /&gt;
&lt;br /&gt;
For '''conda/mamba''' environments:&lt;br /&gt;
* if you created the environment without a prefix:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba activate environment&lt;br /&gt;
(...) [neissner@td110 ~]$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The parentheses (...) in front of your bash prompt show the name of your environment. &lt;br /&gt;
* if you created the environment with a prefix:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba activate /path/to/environment&lt;br /&gt;
(...) [neissner@td110 ~]$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The parentheses (...) in front of your bash prompt show the absolute path of your environment. &lt;br /&gt;
&lt;br /&gt;
For '''venv''' environments:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ source /path/to/environment/bin/activate&lt;br /&gt;
(...) [neissner@td110 ~]$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Link the environment to a Jupyter kernel. For both, '''conda/mamba''' and '''venv''':&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(...) [neissner@td110 ~]$ python -m  ipykernel install --user --name=whatever_kernel_name&lt;br /&gt;
Installed kernelspec whatever_kernel_name in &lt;br /&gt;
                         /nfs/pic.es/user/n/neissner/.local/share/jupyter/kernels/whatever_kernel_name&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you don't have the '''ipykernel''' module installed in your environment you may receive an error message like the one below when trying to run the previous command.&lt;br /&gt;
&amp;lt;pre&amp;gt;No module named ipykernel&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If this is the case, you need to install it by running: '''pip install ipykernel'''&lt;br /&gt;
&lt;br /&gt;
Deactivate your environment. &lt;br /&gt;
&lt;br /&gt;
For conda:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(...) [neissner@td110 ~]$ mamba deactivate&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For venv:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(...) [neissner@td110 ~]$ deactivate&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you can exit the terminal. After refreshing the Jupyter page, your whatever_kernel_name kernel appears in the dashboard. In this example '''test''' has been used for whatever_kernel_name.&lt;br /&gt;
&lt;br /&gt;
[[File:screen05.png|700px]]&lt;br /&gt;
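To see which kernels are currently registered for your user, you can either run '''jupyter kernelspec list''' in a terminal or inspect the kernels folder directly, as in this sketch:&lt;br /&gt;

```python
from pathlib import Path

def list_user_kernels(kernels_dir):
    """Return the names of kernels registered under the given directory."""
    base = Path(kernels_dir)
    if not base.is_dir():
        return []
    return sorted(p.name for p in base.iterdir() if (p / 'kernel.json').is_file())

# Default per-user location, as used by 'ipykernel install --user':
# list_user_kernels(Path.home() / '.local/share/jupyter/kernels')
```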
&lt;br /&gt;
== Unlink an environment from Jupyter ==&lt;br /&gt;
Log onto Jupyter, start a session and from the session dashboard choose the bash terminal. To remove your environment/kernel from Jupyter run:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ jupyter kernelspec uninstall whatever_kernel_name&lt;br /&gt;
Kernel specs to remove:&lt;br /&gt;
  whatever_kernel_name     /nfs/pic.es/user/n/neissner/.local/share/jupyter/kernels/whatever_kernel_name&lt;br /&gt;
Remove 1 kernel specs [y/N]: y&lt;br /&gt;
[RemoveKernelSpec] Removed /nfs/pic.es/user/n/neissner/.local/share/jupyter/kernels/whatever_kernel_name&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Keep in mind that, although not available in Jupyter anymore, the environment still exists. Whenever you need it, you can link it again.&lt;br /&gt;
&lt;br /&gt;
== Create virtual environments with venv or conda ==&lt;br /&gt;
&lt;br /&gt;
Before creating a new environment, please get in contact with your project liaison at PIC, as there may already be a suitable environment for your needs in place.&lt;br /&gt;
&lt;br /&gt;
If none of the existing environments suits your needs, you can create a new environment.&lt;br /&gt;
First, create a directory in a suitable place to store the environment. For single-user environments, place them in your home under ~/env. For environments that will be shared with other project users, contact your project liaison and ask for a path in a shared storage volume that is visible to all of them.&lt;br /&gt;
&lt;br /&gt;
Once you have the location (i.e. /path/to/env/folder), create the environment with the following commands:&lt;br /&gt;
&lt;br /&gt;
For '''venv''' environments '''(recommended)'''&lt;br /&gt;
&lt;br /&gt;
To create '''your_env''' at /path/to/env/your_env:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ cd /path/to/env&lt;br /&gt;
[neissner@td110 ~]$ python3 -m venv your_env&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Now you should be able to activate your environment and install additional modules&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ cd /path/to/env&lt;br /&gt;
[neissner@td110 ~]$ source your_env/bin/activate&lt;br /&gt;
(...)[neissner@td110 ~]$ pip install additional_module1 additional_module2 ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For '''conda/mamba''' environments&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba create --prefix /path/to/env/your_env module1 module2 ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The list of modules (module1, module2, ...) is optional. For instance, for a python3 environment with scipy you would specify: ''python=3 scipy'' &lt;br /&gt;
&lt;br /&gt;
Now you should be able to activate your environment and install additional modules&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba activate /path/to/env/your_env&lt;br /&gt;
(...)[neissner@td110 ~]$ mamba install additional_module1 additional_module2 ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You can use pip install inside a mamba environment; however, resolving dependencies might require installing additional packages manually.&lt;br /&gt;
&lt;br /&gt;
== Conda / mamba configuration ==&lt;br /&gt;
&lt;br /&gt;
The behaviour of conda/mamba can be configured through the &amp;quot;$HOME/.condarc&amp;quot; file, described [https://docs.conda.io/projects/conda/en/latest/configuration.html here]. Some interesting parameters:&lt;br /&gt;
&lt;br /&gt;
* envs_dirs: The list of directories to search for named environments, e.g. the different locations where you created environments&lt;br /&gt;
&lt;br /&gt;
  envs_dirs:&lt;br /&gt;
    - /data/pic/scratch/torradeflot/envs&lt;br /&gt;
    - /data/astro/scratch/torradeflot/envs&lt;br /&gt;
    - /data/aai/scratch/torradeflot/envs&lt;br /&gt;
&lt;br /&gt;
* pkgs_dirs: Folders where conda stores downloaded packages&lt;br /&gt;
&lt;br /&gt;
  pkgs_dirs:&lt;br /&gt;
  - /data/aai/scratch_ssd/torradeflot/pkgs&lt;br /&gt;
  - /data/aai/scratch/torradeflot/pkgs&lt;br /&gt;
  - /data/pic/scratch/torradeflot/pkgs&lt;br /&gt;
&lt;br /&gt;
If '''pkgs_dirs''' and '''envs_dirs''' are in the same storage, conda will use hard links, thus optimizing disk space.&lt;br /&gt;
&lt;br /&gt;
= Proper usage of X509 based proxies =&lt;br /&gt;
&lt;br /&gt;
We recently found that the usage of proxies within a Jupyter session might cause problems, because the environment changes certain standard locations such as '''/tmp'''.&lt;br /&gt;
&lt;br /&gt;
To make it work correctly, please create the proxy the following way (example for Virgo):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ /bin/voms-proxy-init --voms virgo:/virgo/ligo --out ./x509up_u$(id -u)&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ export X509_USER_PROXY=./x509up_u$(id -u)&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ ls /cvmfs/ligo.osgstorage.org&lt;br /&gt;
ls: cannot access /cvmfs/ligo.osgstorage.org: Permission denied&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here the proxy cannot be located properly. Therefore, we have to put the complete path into the variable:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ pwd&lt;br /&gt;
/nfs/pic.es/user/&amp;lt;letter&amp;gt;/&amp;lt;user&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ export X509_USER_PROXY=/nfs/pic.es/user/&amp;lt;letter&amp;gt;/&amp;lt;user&amp;gt;/x509up_u$(id -u)&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ ls /cvmfs/ligo.osgstorage.org&lt;br /&gt;
frames  powerflux  pycbc  test_access&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
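The fix boils down to exporting an absolute path rather than a relative one. In Python terms (the uid in the file name below is illustrative):&lt;br /&gt;

```python
import os

# A relative proxy path stops resolving once the working directory changes,
# so always expand it to an absolute path before exporting X509_USER_PROXY.
relative_proxy = './x509up_u12345'  # illustrative uid
absolute_proxy = os.path.abspath(relative_proxy)
print(absolute_proxy)
```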
&lt;br /&gt;
= Software of particular interest =&lt;br /&gt;
== SageMath ==&lt;br /&gt;
&lt;br /&gt;
[https://www.sagemath.org/ SageMath] is particularly interesting for Cosmology because it allows symbolic calculations, e.g. deriving the equations of motion for the scale factor starting from a customised space-time metric. &lt;br /&gt;
&lt;br /&gt;
=== Standard cosmology examples ===&lt;br /&gt;
&lt;br /&gt;
* The Friedmann equations for the FLRW solution of the Einstein equations.&lt;br /&gt;
You can find the corresponding Notebook in any PIC terminal at '''/data/astro/software/notebooks/FLRW_cosmology.ipynb'''&lt;br /&gt;
&lt;br /&gt;
* The notebook you can find at '''/data/astro/software/notebooks/FLRW_cosmology_solutions.ipynb''' uses known analytical solutions of the FLRW cosmology and produces this image for the evolution of the scale factor:&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot_Sage05.png|300px]]&lt;br /&gt;
&lt;br /&gt;
* The notebook you can find at '''/data/astro/software/notebooks/Interior_Schwarzschild.ipynb''' shows the formalism for the interior Schwarzschild metric and displays the solutions for density and pressure of a static celestial object that is sufficiently larger than its Schwarzschild radius. The pressure for an object with constant density is shown in the image:&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot_Sage06.png|300px]]&lt;br /&gt;
&lt;br /&gt;
=== Enabling SageMath environment in Jupyter ===&lt;br /&gt;
&lt;br /&gt;
If you have never initialized mamba, run:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ /data/astro/software/centos7/conda/mambaforge_4.14.0/bin/mamba init&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ conda config --set auto_activate_base false&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After that you can enable SageMath for its use in a Jupyter notebook session: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ mamba activate /data/astro/software/envs/sage&lt;br /&gt;
(/data/astro/software/envs/sage) [&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ python -m  ipykernel install --user --name=sage&lt;br /&gt;
....&lt;br /&gt;
(/data/astro/software/envs/sage) [&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ mamba deactivate&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This creates a file in your home, '''~/.local/share/jupyter/kernels/sage/kernel.json''',&lt;br /&gt;
which has to be modified to look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
 &amp;quot;argv&amp;quot;: [&lt;br /&gt;
  &amp;quot;/data/astro/software/envs/sage/bin/sage&amp;quot;,&lt;br /&gt;
  &amp;quot;--python&amp;quot;,&lt;br /&gt;
  &amp;quot;-m&amp;quot;,&lt;br /&gt;
  &amp;quot;sage.repl.ipython_kernel&amp;quot;,&lt;br /&gt;
  &amp;quot;-f&amp;quot;,&lt;br /&gt;
  &amp;quot;{connection_file}&amp;quot;&lt;br /&gt;
 ],&lt;br /&gt;
 &amp;quot;display_name&amp;quot;: &amp;quot;sage&amp;quot;,&lt;br /&gt;
 &amp;quot;language&amp;quot;: &amp;quot;sage&amp;quot;,&lt;br /&gt;
 &amp;quot;metadata&amp;quot;: {&lt;br /&gt;
  &amp;quot;debugger&amp;quot;: true&lt;br /&gt;
 }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Next time you go to your Jupyter dashboard you will find the sage environment listed there.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Dask =&lt;br /&gt;
Dask supports parallel computations in Python. The PIC Jupyterlab has an extension for launching&lt;br /&gt;
your own Dask clusters. For more information, see [[Dask|Dask documentation]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Using a singularity image as a jupyter kernel =&lt;br /&gt;
&lt;br /&gt;
Sometimes the software stack of a project is provided in the shape of a singularity image; it is then convenient to use this image as a kernel for the notebooks in jupyter.pic.es.&lt;br /&gt;
&lt;br /&gt;
The singularity image to be used as a kernel needs to fulfill some requirements, which depend on the programming language.&lt;br /&gt;
&lt;br /&gt;
== Python jupyter kernel in a singularity image ==&lt;br /&gt;
&lt;br /&gt;
The singularity image needs to have '''python''' and the '''ipykernel''' module installed. &lt;br /&gt;
&lt;br /&gt;
* Create the folder that will host the kernel definition&lt;br /&gt;
&lt;br /&gt;
  mkdir -p $HOME/.local/share/jupyter/kernels/singularity&lt;br /&gt;
&lt;br /&gt;
* Create the '''kernel.json''' file inside it with the following content:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 {&lt;br /&gt;
   &amp;quot;argv&amp;quot;: [&lt;br /&gt;
     &amp;quot;singularity&amp;quot;,&lt;br /&gt;
     &amp;quot;exec&amp;quot;,&lt;br /&gt;
     &amp;quot;--cleanenv&amp;quot;,&lt;br /&gt;
     &amp;quot;/path/to/the/singularity/image.sif&amp;quot;,&lt;br /&gt;
     &amp;quot;python&amp;quot;,&lt;br /&gt;
     &amp;quot;-m&amp;quot;,&lt;br /&gt;
     &amp;quot;ipykernel&amp;quot;,&lt;br /&gt;
     &amp;quot;-f&amp;quot;,&lt;br /&gt;
     &amp;quot;{connection_file}&amp;quot;&lt;br /&gt;
   ],&lt;br /&gt;
   &amp;quot;language&amp;quot;: &amp;quot;python&amp;quot;,&lt;br /&gt;
   &amp;quot;display_name&amp;quot;: &amp;quot;singularity-kernel&amp;quot;&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Refresh or restart the jupyterlab interface and the singularity kernel should appear in the launcher tab.&lt;br /&gt;
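A quick way to sanity-check the kernel definition before restarting JupyterLab is to load it with Python's json module and confirm the fields Jupyter needs. This is a minimal sketch: the spec string below mirrors the kernel.json above; in practice you would read it from ~/.local/share/jupyter/kernels/singularity/kernel.json.

```python
import json

# Mirror of the kernel.json above; in practice, read the file from
# ~/.local/share/jupyter/kernels/singularity/kernel.json instead.
spec = json.loads("""
{
  "argv": ["singularity", "exec", "--cleanenv",
           "/path/to/the/singularity/image.sif",
           "python", "-m", "ipykernel", "-f", "{connection_file}"],
  "language": "python",
  "display_name": "singularity-kernel"
}
""")

# Jupyter substitutes this placeholder with the real connection file at launch,
# so it must appear literally in argv.
assert "{connection_file}" in spec["argv"]
print(spec["display_name"])  # singularity-kernel
```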
&lt;br /&gt;
= GPUs =&lt;br /&gt;
&lt;br /&gt;
To identify the GPUs assigned to your job:&lt;br /&gt;
* Check the environment variable CUDA_VISIBLE_DEVICES: in a terminal, run &amp;quot;echo $CUDA_VISIBLE_DEVICES&amp;quot;. The variable contains a comma-separated list of GPU ids, which tells you how many GPUs are assigned to your job. If the variable is not set, no GPUs are assigned to the job.&lt;br /&gt;
&lt;br /&gt;
* List the GPUs with nvidia-smi: in a terminal, run &amp;quot;nvidia-smi -L&amp;quot; and look for the GPUs you have been assigned. Remember their indexes (integers from 0 to 7).&lt;br /&gt;
&lt;br /&gt;
* If only a single GPU is assigned, the two steps above can be combined into the following command:&lt;br /&gt;
&lt;br /&gt;
nvidia-smi -L | grep $CUDA_VISIBLE_DEVICES&lt;br /&gt;
&lt;br /&gt;
[[File:check_gpu_id_highlighted.png]]&lt;br /&gt;
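The same check can be done programmatically from Python. A minimal sketch; the helper name is ours, not part of any PIC tooling:

```python
import os

def assigned_gpus(env=None):
    """Return the GPU ids listed in CUDA_VISIBLE_DEVICES (empty list if unset)."""
    source = os.environ if env is None else env
    value = source.get("CUDA_VISIBLE_DEVICES", "")
    # Filter out empty entries so an unset/empty variable yields no GPUs.
    return [gpu_id for gpu_id in value.split(",") if gpu_id]

print(assigned_gpus({"CUDA_VISIBLE_DEVICES": "2,5"}))  # ['2', '5']
print(assigned_gpus({}))                               # [] -> no GPUs assigned
```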
&lt;br /&gt;
* In the GPU dashboard, the GPUs are identified by their index.&lt;br /&gt;
&lt;br /&gt;
[[File:check_gpu_resources_highlighted.png]]&lt;br /&gt;
&lt;br /&gt;
= Jupyterlab user guide =&lt;br /&gt;
&lt;br /&gt;
You can find the official documentation of the currently installed version of jupyterlab (3.6) [https://jupyterlab.readthedocs.io/en/4.2.x/ here]. There you will find instructions on how to:&lt;br /&gt;
* [https://jupyterlab.readthedocs.io/en/4.2.x/user/commands.html Access the command palette]&lt;br /&gt;
* [https://jupyterlab.readthedocs.io/en/4.2.x/user/toc.html Build a Table Of Contents]&lt;br /&gt;
* [https://jupyterlab.readthedocs.io/en/4.2.x/user/debugger.html Debug your code]&lt;br /&gt;
&lt;br /&gt;
A set of unofficial jupyterlab extensions is installed to provide additional functionality.&lt;br /&gt;
&lt;br /&gt;
== jupytext ==&lt;br /&gt;
Pair your notebooks with text files to enhance version tracking.&lt;br /&gt;
https://jupytext.readthedocs.io&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
&lt;br /&gt;
If you had a notebook (.ipynb file) containing only the cell below, tracked in a git repository&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%matplotlib inline&lt;br /&gt;
import numpy as np&lt;br /&gt;
import matplotlib.pyplot as plt&lt;br /&gt;
plt.imshow(np.random.random([10, 10]))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Each execution of the cell produces a different image, and images are embedded in an encoded, effectively binary format inside the notebook file. Doing a '''git diff''' of the .ipynb file would therefore produce a huge output (because the image changed), even if the code did not change at all. It is thus convenient to sync the notebook with a text file (e.g. a .py script) using the jupytext extension and track that file with git instead. Outputs, including images, as well as some additional metadata, are not written to the synced text file, so re-running the same notebook yields an empty diff.&lt;br /&gt;
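For reference, a jupytext-paired script in the percent format stores each cell under a `# %%` marker and comments out IPython magics, so the file remains valid Python. A sketch of what the paired version of the cell above might look like; the matplotlib call is omitted here so the snippet runs headless:

```python
# %%
# %matplotlib inline   <- jupytext writes IPython magics as comments
import numpy as np

# The plotting call from the notebook cell would follow here. Note that
# outputs (the rendered image) are never written to the paired .py file.
data = np.random.random([10, 10])
print(data.shape)  # (10, 10)
```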
&lt;br /&gt;
== git ==&lt;br /&gt;
Sidebar GUI for git repository management.&lt;br /&gt;
https://github.com/jupyterlab/jupyterlab-git&lt;br /&gt;
&lt;br /&gt;
== variable inspector ==&lt;br /&gt;
Variable inspection à la Matlab&lt;br /&gt;
https://github.com/jupyterlab-contrib/jupyterlab-variableInspector&lt;br /&gt;
&lt;br /&gt;
== jupyter server proxy ==&lt;br /&gt;
&lt;br /&gt;
This extension is installed in PIC's jupyter environment. It allows network/web services running on the same host as the jupyterlab server to be accessed from outside through the &amp;quot;https://jupyter.pic.es/user/{username}/proxy/{port}&amp;quot; URL.&lt;br /&gt;
&lt;br /&gt;
Full documentation here: https://jupyter-server-proxy.readthedocs.io/en/latest/index.html&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
= Code samples =&lt;br /&gt;
&lt;br /&gt;
A repository with sample code can be found here: https://gitlab.pic.es/services/code-samples/&lt;br /&gt;
&lt;br /&gt;
= Running notebooks through HTCondor =&lt;br /&gt;
After developing a notebook, you might want to run it with different configurations. The&lt;br /&gt;
following documentation explains  &lt;br /&gt;
&lt;br /&gt;
[[notebook_htcondor|how to run a notebook through HTCondor.]]&lt;br /&gt;
&lt;br /&gt;
= Troubleshooting =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Logs ==&lt;br /&gt;
&lt;br /&gt;
The log files for the jupyterlab server are stored in &amp;quot;~/.jupyter&amp;quot;. They are created once the jupyterlab server job has finished.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Clean workspaces ==&lt;br /&gt;
&lt;br /&gt;
Jupyterlab stores the workspace status in the &amp;quot;~/.jupyter/lab/workspaces&amp;quot; folder. If you want to start with a fresh (empty) workspace, delete all the content of this folder before launching the notebook.&lt;br /&gt;
&lt;br /&gt;
    cd ~/.jupyter/lab/workspaces&lt;br /&gt;
    rm *&lt;br /&gt;
&lt;br /&gt;
= Known errors =&lt;br /&gt;
&lt;br /&gt;
== Error 500: Internal Server Error ==&lt;br /&gt;
&lt;br /&gt;
This is a generic error: it means that the jupyterlab server failed. This can happen for different reasons:&lt;br /&gt;
&lt;br /&gt;
* Your HOME folder is full. Log in to &amp;quot;ui.pic.es&amp;quot; and run &amp;quot;quota&amp;quot; to check the usage vs quota. If it is full you'll have to free up space.&lt;br /&gt;
&lt;br /&gt;
== 504 Gateway timeout ==&lt;br /&gt;
&lt;br /&gt;
The notebook job is running in HTCondor but the user cannot access the notebook server, ultimately receiving a 504 error.&lt;br /&gt;
&lt;br /&gt;
This is probably because some error occurred when starting the jupyterlab server. First of all, shut down the notebook server and [[#Logs|check the logs]] to better identify the problem. If you don't see the source of the error, try to [[#Clean_workspaces|clean the workspaces]] and launch a notebook again.&lt;br /&gt;
&lt;br /&gt;
== Install ROOT ==&lt;br /&gt;
&lt;br /&gt;
Environment creation and kernel installation&lt;br /&gt;
&lt;br /&gt;
    $ micromamba create -p /data/pic/scratch/torradeflot/envs/mcdata root ipykernel&lt;br /&gt;
    $ micromamba activate mcdata&lt;br /&gt;
    $ python -m ipykernel install --user --name mcdata&lt;br /&gt;
&lt;br /&gt;
After doing this, ROOT can be imported from a python shell, but it does not work from a notebook:&lt;br /&gt;
&lt;br /&gt;
* ROOT uses JIT compilation with Cling: https://root.cern/cling/&lt;br /&gt;
* conda provides its own set of compilation tools: gcc, gxx, fortran&lt;br /&gt;
* Some environment variables are necessary to ensure that these two pieces work well together:&lt;br /&gt;
** PATH: to be able to find the compiler tools&lt;br /&gt;
** CONDA_BUILD_SYSROOT: needed to configure the compiler call&lt;br /&gt;
&lt;br /&gt;
These environment variables are not propagated to the notebook, because they are set by .bashrc, conda activate, etc., so we need to set them explicitly in the notebook:&lt;br /&gt;
&lt;br /&gt;
     import os&lt;br /&gt;
     import sys&lt;br /&gt;
     from pathlib import Path&lt;br /&gt;
     bin_dir = Path(sys.executable).parent&lt;br /&gt;
     os.environ['PATH'] = f&amp;quot;{bin_dir}:{os.environ['PATH']}&amp;quot;&lt;br /&gt;
     os.environ['CONDA_BUILD_SYSROOT'] = str(bin_dir.parent / 'x86_64-conda-linux-gnu/sysroot')&lt;br /&gt;
&lt;br /&gt;
== Loading libraries is very slow ==&lt;br /&gt;
Conda environments stored on hard-drive-based storage can be extremely slow to load. If you encounter this problem, please ask&lt;br /&gt;
your support contact for an &amp;quot;SSD scratch&amp;quot; location to store your environments. This service is currently being tested and&lt;br /&gt;
deployed as needed.&lt;br /&gt;
&lt;br /&gt;
== Spawn failed: The 'ip' trait of a PICCondorSpawner instance expected a unicode string, not the NoneType None ==&lt;br /&gt;
&lt;br /&gt;
Jupyterhub could not get the host name from HTCondor's stdout, because it didn't match the expected regular expression.&lt;br /&gt;
&lt;br /&gt;
This error happens randomly from time to time and does not imply any major problem in any of the services.&lt;br /&gt;
&lt;br /&gt;
Try to request a new notebook server.&lt;br /&gt;
&lt;br /&gt;
== 403 : Forbidden. XSRF cookie does not match POST argument ==&lt;br /&gt;
&lt;br /&gt;
The value of the &amp;quot;_xsrf&amp;quot; cookie sent by the browser does not match the expected value. This can happen for many reasons: temporary high load on the server, race conditions, temporary network instability, many open tabs in the browser, etc.&lt;br /&gt;
&lt;br /&gt;
In general it can be solved by closing all tabs pointing to &amp;quot;jupyter.pic.es&amp;quot;, clearing the cookies, and connecting to &amp;quot;jupyter.pic.es&amp;quot; again.&lt;/div&gt;</summary>
		<author><name>Eriksen</name></author>
	</entry>
	<entry>
		<id>https://pwiki.pic.es/index.php?title=Notebook_htcondor&amp;diff=1282</id>
		<title>Notebook htcondor</title>
		<link rel="alternate" type="text/html" href="https://pwiki.pic.es/index.php?title=Notebook_htcondor&amp;diff=1282"/>
		<updated>2025-12-20T13:08:19Z</updated>

		<summary type="html">&lt;p&gt;Eriksen: /* Future Improvements */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Running Jupyter Notebooks with Multiple Configurations Using HTCondor =&lt;br /&gt;
&lt;br /&gt;
This page documents a practical workflow for running '''multiple training configurations of a Jupyter notebook in parallel using HTCondor'''.&lt;br /&gt;
The approach is particularly useful once a model or pipeline is stable and you want to scan over several configurations (e.g. different losses, datasets, or hyperparameters) without running them sequentially.&lt;br /&gt;
&lt;br /&gt;
== Motivation ==&lt;br /&gt;
&lt;br /&gt;
Neural network development is often done in Jupyter notebooks because they are:&lt;br /&gt;
* Easy to prototype&lt;br /&gt;
* Interactive&lt;br /&gt;
* Good for visual inspection of outputs&lt;br /&gt;
&lt;br /&gt;
However, once development stabilizes, running multiple configurations sequentially can be inefficient.&lt;br /&gt;
&lt;br /&gt;
'''Example:'''&lt;br /&gt;
* 2 loss functions × 4 samples = 8 training runs&lt;br /&gt;
* ~30 minutes per run&lt;br /&gt;
* Sequential execution → ~4 hours&lt;br /&gt;
* Parallel execution with HTCondor → ~30 minutes wall time&lt;br /&gt;
&lt;br /&gt;
This document describes how to:&lt;br /&gt;
# Convert a notebook into a script-friendly format&lt;br /&gt;
# Make it accept command-line arguments&lt;br /&gt;
# Submit multiple runs to HTCondor in parallel&lt;br /&gt;
&lt;br /&gt;
== Overview of the Workflow ==&lt;br /&gt;
&lt;br /&gt;
To run notebook-based training jobs in parallel using HTCondor, you need to:&lt;br /&gt;
&lt;br /&gt;
# Use Jupytext for the notebook&lt;br /&gt;
# Configure the notebook to accept command-line arguments&lt;br /&gt;
# Create an HTCondor submission file ('''.sub''')&lt;br /&gt;
# Submit and monitor the jobs&lt;br /&gt;
&lt;br /&gt;
Each step is described in detail below.&lt;br /&gt;
&lt;br /&gt;
== Step 1: Use Jupytext for the Notebook ==&lt;br /&gt;
&lt;br /&gt;
For HTCondor execution, it is easiest to use '''Jupytext notebooks'''.&lt;br /&gt;
&lt;br /&gt;
=== Why Jupytext? ===&lt;br /&gt;
&lt;br /&gt;
* The notebook is stored as a '''.py''' file and can be executed directly as a script&lt;br /&gt;
* Still fully usable as a notebook in Jupyter&lt;br /&gt;
* More suitable for version control (Git)&lt;br /&gt;
* Avoids conversion steps when running on batch systems&lt;br /&gt;
&lt;br /&gt;
=== Creating a Jupytext Notebook ===&lt;br /&gt;
&lt;br /&gt;
* From the Jupyter main page, click the option to create a '''Jupytext notebook'''&lt;br /&gt;
* Select the kernel after creation&lt;br /&gt;
&lt;br /&gt;
=== Opening an Existing Jupytext Notebook ===&lt;br /&gt;
&lt;br /&gt;
* Right-click the file&lt;br /&gt;
* Select '''Open as Jupytext Notebook'''&lt;br /&gt;
&lt;br /&gt;
=== Converting an Existing Notebook ===&lt;br /&gt;
&lt;br /&gt;
If you already have a standard '''.ipynb''' notebook:&lt;br /&gt;
* Create a new Jupytext notebook&lt;br /&gt;
* Copy–paste cells from the old notebook&lt;br /&gt;
* This is often a good opportunity to clean up and refactor the code&lt;br /&gt;
&lt;br /&gt;
== Step 2: Configure the Notebook to Accept Arguments ==&lt;br /&gt;
&lt;br /&gt;
By default, notebooks do not accept command-line arguments.  &lt;br /&gt;
To enable this, add an argument-parsing block at the '''top of the notebook/script'''.&lt;br /&gt;
&lt;br /&gt;
=== Example Argument Parsing Code ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import argparse&lt;br /&gt;
import sys&lt;br /&gt;
&lt;br /&gt;
def get_args():&lt;br /&gt;
    &amp;quot;&amp;quot;&amp;quot;Get the needed arguments.&amp;quot;&amp;quot;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    if 'launcher' not in sys.argv[0]:&lt;br /&gt;
        parser = argparse.ArgumentParser()&lt;br /&gt;
        parser.add_argument(&amp;quot;--sample&amp;quot;, type=int, required=True)&lt;br /&gt;
        # type=bool would treat any non-empty string (even &amp;quot;False&amp;quot;) as True&lt;br /&gt;
        parser.add_argument(&amp;quot;--rotloss&amp;quot;, type=lambda s: s.lower() in (&amp;quot;true&amp;quot;, &amp;quot;1&amp;quot;), required=True)&lt;br /&gt;
&lt;br /&gt;
        args = parser.parse_args()&lt;br /&gt;
        isample = args.sample&lt;br /&gt;
        rot_loss = args.rotloss&lt;br /&gt;
    else:&lt;br /&gt;
        # Default values when running interactively as a notebook&lt;br /&gt;
        isample = 0&lt;br /&gt;
        rot_loss = False&lt;br /&gt;
&lt;br /&gt;
    return isample, rot_loss&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Usage ===&lt;br /&gt;
&lt;br /&gt;
When executed as a script:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
./train_encoder.py --sample 0 --rotloss False&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When executed as a notebook:&lt;br /&gt;
* Default values are used&lt;br /&gt;
* No command-line arguments are required&lt;br /&gt;
&lt;br /&gt;
This allows the same file to work both:&lt;br /&gt;
* Interactively (Jupyter)&lt;br /&gt;
* Non-interactively (HTCondor)&lt;br /&gt;
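One caveat when passing booleans this way: argparse's plain `type=bool` calls `bool()` on the raw string, and any non-empty string (including "False") is truthy, so `--rotloss False` would silently become `True`. A sketch of the pitfall and one possible workaround:

```python
import argparse

# The pitfall: bool() on a non-empty string is always True.
print(bool("False"))  # True

parser = argparse.ArgumentParser()
# Workaround: parse the string explicitly instead of using type=bool.
parser.add_argument("--rotloss", type=lambda s: s.lower() in ("true", "1"))
args = parser.parse_args(["--rotloss", "False"])
print(args.rotloss)  # False
```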
&lt;br /&gt;
== Step 3: Create an HTCondor Submission File ==&lt;br /&gt;
&lt;br /&gt;
Create a submission file (e.g. '''autoenc.sub''') describing how the jobs should run.&lt;br /&gt;
&lt;br /&gt;
=== Example Submission File ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
executable = /data/incaem/scratch_nvme/eriksen/miniforge3/envs/py4dstem/bin/python&lt;br /&gt;
arguments = /nfs/pic.es/user/e/eriksen/proj/posthack/train_encoder.py --sample $(sample) --rotloss False&lt;br /&gt;
&lt;br /&gt;
# Logs&lt;br /&gt;
output          = logs/train_encoder_$(sample)_$(ClusterId).$(ProcId).out&lt;br /&gt;
error           = logs/train_encoder_$(sample)_$(ClusterId).$(ProcId).err&lt;br /&gt;
log             = logs/train_encoder_$(sample)_$(ClusterId).$(ProcId).log&lt;br /&gt;
&lt;br /&gt;
request_gpus    = 1&lt;br /&gt;
request_memory  = 8GB&lt;br /&gt;
&lt;br /&gt;
queue sample in (0 1 2 3)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Notes ===&lt;br /&gt;
&lt;br /&gt;
* Use the '''full path''' to:&lt;br /&gt;
** The Python executable from the desired conda environment&lt;br /&gt;
** The training script&lt;br /&gt;
* HTCondor variables (e.g. '''$(sample)''') are passed as arguments&lt;br /&gt;
* Each queued value corresponds to one independent training run&lt;br /&gt;
* Logs are separated per job&lt;br /&gt;
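The example submission file only queues over `sample`; to cover the full 2 × 4 grid from the motivation section, the argument combinations can be generated with a short script. This is a sketch: the `rotloss_values` axis is hypothetical, matching the example counts above.

```python
from itertools import product

samples = [0, 1, 2, 3]
rotloss_values = ["True", "False"]  # hypothetical second axis of the grid

# One "arguments" line per (sample, rotloss) combination: 4 x 2 = 8 runs.
lines = [f"--sample {s} --rotloss {r}"
         for s, r in product(samples, rotloss_values)]
print(len(lines))  # 8
print(lines[0])    # --sample 0 --rotloss True
```

Each generated line corresponds to one independent HTCondor job, mirroring the "8 training runs" estimate in the motivation.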
&lt;br /&gt;
== Step 4: Submit and Monitor the Jobs ==&lt;br /&gt;
&lt;br /&gt;
=== Submit ===&lt;br /&gt;
&lt;br /&gt;
From the directory containing the '''.sub''' file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
condor_submit autoenc.sub&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Monitor ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
condor_q&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or use standard HTCondor monitoring tools as needed.&lt;br /&gt;
&lt;br /&gt;
=== Important Notes ===&lt;br /&gt;
&lt;br /&gt;
* Always write outputs (e.g. model weights, checkpoints) to an '''absolute path'''&lt;br /&gt;
* Ensure output directories exist before submission&lt;br /&gt;
* Avoid relying on notebook state (each job runs independently)&lt;br /&gt;
&lt;br /&gt;
== Summary ==&lt;br /&gt;
&lt;br /&gt;
This workflow enables:&lt;br /&gt;
* Notebook-based development&lt;br /&gt;
* Script-based batch execution&lt;br /&gt;
* Efficient parallel training with HTCondor&lt;br /&gt;
&lt;br /&gt;
It is not perfect, but it is a practical and robust solution for scaling notebook-based workflows once development stabilizes.&lt;/div&gt;</summary>
		<author><name>Eriksen</name></author>
	</entry>
	<entry>
		<id>https://pwiki.pic.es/index.php?title=Notebook_htcondor&amp;diff=1281</id>
		<title>Notebook htcondor</title>
		<link rel="alternate" type="text/html" href="https://pwiki.pic.es/index.php?title=Notebook_htcondor&amp;diff=1281"/>
		<updated>2025-12-20T13:07:54Z</updated>

		<summary type="html">&lt;p&gt;Eriksen: /* Example Submission File */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Running Jupyter Notebooks with Multiple Configurations Using HTCondor =&lt;br /&gt;
&lt;br /&gt;
This page documents a practical workflow for running '''multiple training configurations of a Jupyter notebook in parallel using HTCondor'''.&lt;br /&gt;
The approach is particularly useful once a model or pipeline is stable and you want to scan over several configurations (e.g. different losses, datasets, or hyperparameters) without running them sequentially.&lt;br /&gt;
&lt;br /&gt;
== Motivation ==&lt;br /&gt;
&lt;br /&gt;
Neural network development is often done in Jupyter notebooks because they are:&lt;br /&gt;
* Easy to prototype&lt;br /&gt;
* Interactive&lt;br /&gt;
* Good for visual inspection of outputs&lt;br /&gt;
&lt;br /&gt;
However, once development stabilizes, running multiple configurations sequentially can be inefficient.&lt;br /&gt;
&lt;br /&gt;
'''Example:'''&lt;br /&gt;
* 2 loss functions × 4 samples = 8 training runs&lt;br /&gt;
* ~30 minutes per run&lt;br /&gt;
* Sequential execution → ~4 hours&lt;br /&gt;
* Parallel execution with HTCondor → ~30 minutes wall time&lt;br /&gt;
&lt;br /&gt;
This document describes how to:&lt;br /&gt;
# Convert a notebook into a script-friendly format&lt;br /&gt;
# Make it accept command-line arguments&lt;br /&gt;
# Submit multiple runs to HTCondor in parallel&lt;br /&gt;
&lt;br /&gt;
== Overview of the Workflow ==&lt;br /&gt;
&lt;br /&gt;
To run notebook-based training jobs in parallel using HTCondor, you need to:&lt;br /&gt;
&lt;br /&gt;
# Use Jupytext for the notebook&lt;br /&gt;
# Configure the notebook to accept command-line arguments&lt;br /&gt;
# Create an HTCondor submission file ('''.sub''')&lt;br /&gt;
# Submit and monitor the jobs&lt;br /&gt;
&lt;br /&gt;
Each step is described in detail below.&lt;br /&gt;
&lt;br /&gt;
== Step 1: Use Jupytext for the Notebook ==&lt;br /&gt;
&lt;br /&gt;
For HTCondor execution, it is easiest to use '''Jupytext notebooks'''.&lt;br /&gt;
&lt;br /&gt;
=== Why Jupytext? ===&lt;br /&gt;
&lt;br /&gt;
* The notebook is stored as a '''.py''' file and can be executed directly as a script&lt;br /&gt;
* Still fully usable as a notebook in Jupyter&lt;br /&gt;
* More suitable for version control (Git)&lt;br /&gt;
* Avoids conversion steps when running on batch systems&lt;br /&gt;
&lt;br /&gt;
=== Creating a Jupytext Notebook ===&lt;br /&gt;
&lt;br /&gt;
* From the Jupyter main page, click the option to create a '''Jupytext notebook'''&lt;br /&gt;
* Select the kernel after creation&lt;br /&gt;
&lt;br /&gt;
=== Opening an Existing Jupytext Notebook ===&lt;br /&gt;
&lt;br /&gt;
* Right-click the file&lt;br /&gt;
* Select '''Open as Jupytext Notebook'''&lt;br /&gt;
&lt;br /&gt;
=== Converting an Existing Notebook ===&lt;br /&gt;
&lt;br /&gt;
If you already have a standard '''.ipynb''' notebook:&lt;br /&gt;
* Create a new Jupytext notebook&lt;br /&gt;
* Copy–paste cells from the old notebook&lt;br /&gt;
* This is often a good opportunity to clean up and refactor the code&lt;br /&gt;
&lt;br /&gt;
== Step 2: Configure the Notebook to Accept Arguments ==&lt;br /&gt;
&lt;br /&gt;
By default, notebooks do not accept command-line arguments.  &lt;br /&gt;
To enable this, add an argument-parsing block at the '''top of the notebook/script'''.&lt;br /&gt;
&lt;br /&gt;
=== Example Argument Parsing Code ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import argparse&lt;br /&gt;
import sys&lt;br /&gt;
&lt;br /&gt;
def get_args():&lt;br /&gt;
    &amp;quot;&amp;quot;&amp;quot;Get the needed arguments.&amp;quot;&amp;quot;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    if 'launcher' not in sys.argv[0]:&lt;br /&gt;
        parser = argparse.ArgumentParser()&lt;br /&gt;
        parser.add_argument(&amp;quot;--sample&amp;quot;, type=int, required=True)&lt;br /&gt;
        # type=bool would treat any non-empty string (even &amp;quot;False&amp;quot;) as True&lt;br /&gt;
        parser.add_argument(&amp;quot;--rotloss&amp;quot;, type=lambda s: s.lower() in (&amp;quot;true&amp;quot;, &amp;quot;1&amp;quot;), required=True)&lt;br /&gt;
&lt;br /&gt;
        args = parser.parse_args()&lt;br /&gt;
        isample = args.sample&lt;br /&gt;
        rot_loss = args.rotloss&lt;br /&gt;
    else:&lt;br /&gt;
        # Default values when running interactively as a notebook&lt;br /&gt;
        isample = 0&lt;br /&gt;
        rot_loss = False&lt;br /&gt;
&lt;br /&gt;
    return isample, rot_loss&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Usage ===&lt;br /&gt;
&lt;br /&gt;
When executed as a script:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
./train_encoder.py --sample 0 --rotloss False&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When executed as a notebook:&lt;br /&gt;
* Default values are used&lt;br /&gt;
* No command-line arguments are required&lt;br /&gt;
&lt;br /&gt;
This allows the same file to work both:&lt;br /&gt;
* Interactively (Jupyter)&lt;br /&gt;
* Non-interactively (HTCondor)&lt;br /&gt;
&lt;br /&gt;
== Step 3: Create an HTCondor Submission File ==&lt;br /&gt;
&lt;br /&gt;
Create a submission file (e.g. '''autoenc.sub''') describing how the jobs should run.&lt;br /&gt;
&lt;br /&gt;
=== Example Submission File ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
executable = /data/incaem/scratch_nvme/eriksen/miniforge3/envs/py4dstem/bin/python&lt;br /&gt;
arguments = /nfs/pic.es/user/e/eriksen/proj/posthack/train_encoder.py --sample $(sample) --rotloss False&lt;br /&gt;
&lt;br /&gt;
# Logs&lt;br /&gt;
output          = logs/train_encoder_$(sample)_$(ClusterId).$(ProcId).out&lt;br /&gt;
error           = logs/train_encoder_$(sample)_$(ClusterId).$(ProcId).err&lt;br /&gt;
log             = logs/train_encoder_$(sample)_$(ClusterId).$(ProcId).log&lt;br /&gt;
&lt;br /&gt;
request_gpus    = 1&lt;br /&gt;
request_memory  = 8GB&lt;br /&gt;
&lt;br /&gt;
queue sample in (0 1 2 3)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Notes ===&lt;br /&gt;
&lt;br /&gt;
* Use the '''full path''' to:&lt;br /&gt;
** The Python executable from the desired conda environment&lt;br /&gt;
** The training script&lt;br /&gt;
* HTCondor variables (e.g. '''$(sample)''') are passed as arguments&lt;br /&gt;
* Each queued value corresponds to one independent training run&lt;br /&gt;
* Logs are separated per job&lt;br /&gt;
&lt;br /&gt;
== Step 4: Submit and Monitor the Jobs ==&lt;br /&gt;
&lt;br /&gt;
=== Submit ===&lt;br /&gt;
&lt;br /&gt;
From the directory containing the '''.sub''' file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
condor_submit autoenc.sub&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Monitor ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
condor_q&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or use standard HTCondor monitoring tools as needed.&lt;br /&gt;
&lt;br /&gt;
=== Important Notes ===&lt;br /&gt;
&lt;br /&gt;
* Always write outputs (e.g. model weights, checkpoints) to an '''absolute path'''&lt;br /&gt;
* Ensure output directories exist before submission&lt;br /&gt;
* Avoid relying on notebook state (each job runs independently)&lt;br /&gt;
&lt;br /&gt;
== Summary ==&lt;br /&gt;
&lt;br /&gt;
This workflow enables:&lt;br /&gt;
* Notebook-based development&lt;br /&gt;
* Script-based batch execution&lt;br /&gt;
* Efficient parallel training with HTCondor&lt;br /&gt;
&lt;br /&gt;
It is not perfect, but it is a practical and robust solution for scaling notebook-based workflows once development stabilizes.&lt;br /&gt;
&lt;br /&gt;
== Future Improvements ==&lt;br /&gt;
&lt;br /&gt;
* Centralized documentation&lt;br /&gt;
* Shared templates for submission files&lt;br /&gt;
* Standard argument-handling utilities&lt;br /&gt;
* Automated notebook-to-batch conversion tools&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
Author: Martin&lt;/div&gt;</summary>
		<author><name>Eriksen</name></author>
	</entry>
	<entry>
		<id>https://pwiki.pic.es/index.php?title=Notebook_htcondor&amp;diff=1280</id>
		<title>Notebook htcondor</title>
		<link rel="alternate" type="text/html" href="https://pwiki.pic.es/index.php?title=Notebook_htcondor&amp;diff=1280"/>
		<updated>2025-12-20T13:07:33Z</updated>

		<summary type="html">&lt;p&gt;Eriksen: /* Example Argument Parsing Code */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Running Jupyter Notebooks with Multiple Configurations Using HTCondor =&lt;br /&gt;
&lt;br /&gt;
This page documents a practical workflow for running '''multiple training configurations of a Jupyter notebook in parallel using HTCondor'''.&lt;br /&gt;
The approach is particularly useful once a model or pipeline is stable and you want to scan over several configurations (e.g. different losses, datasets, or hyperparameters) without running them sequentially.&lt;br /&gt;
&lt;br /&gt;
== Motivation ==&lt;br /&gt;
&lt;br /&gt;
Neural network development is often done in Jupyter notebooks because they are:&lt;br /&gt;
* Easy to prototype&lt;br /&gt;
* Interactive&lt;br /&gt;
* Good for visual inspection of outputs&lt;br /&gt;
&lt;br /&gt;
However, once development stabilizes, running multiple configurations sequentially can be inefficient.&lt;br /&gt;
&lt;br /&gt;
'''Example:'''&lt;br /&gt;
* 2 loss functions × 4 samples = 8 training runs&lt;br /&gt;
* ~30 minutes per run&lt;br /&gt;
* Sequential execution → ~4 hours&lt;br /&gt;
* Parallel execution with HTCondor → ~30 minutes wall time&lt;br /&gt;
&lt;br /&gt;
This document describes how to:&lt;br /&gt;
# Convert a notebook into a script-friendly format&lt;br /&gt;
# Make it accept command-line arguments&lt;br /&gt;
# Submit multiple runs to HTCondor in parallel&lt;br /&gt;
&lt;br /&gt;
== Overview of the Workflow ==&lt;br /&gt;
&lt;br /&gt;
To run notebook-based training jobs in parallel using HTCondor, you need to:&lt;br /&gt;
&lt;br /&gt;
# Use Jupytext for the notebook&lt;br /&gt;
# Configure the notebook to accept command-line arguments&lt;br /&gt;
# Create an HTCondor submission file ('''.sub''')&lt;br /&gt;
# Submit and monitor the jobs&lt;br /&gt;
&lt;br /&gt;
Each step is described in detail below.&lt;br /&gt;
&lt;br /&gt;
== Step 1: Use Jupytext for the Notebook ==&lt;br /&gt;
&lt;br /&gt;
For HTCondor execution, it is easiest to use '''Jupytext notebooks'''.&lt;br /&gt;
&lt;br /&gt;
=== Why Jupytext? ===&lt;br /&gt;
&lt;br /&gt;
* The notebook is stored as a '''.py''' file and can be executed directly as a script&lt;br /&gt;
* Still fully usable as a notebook in Jupyter&lt;br /&gt;
* More suitable for version control (Git)&lt;br /&gt;
* Avoids conversion steps when running on batch systems&lt;br /&gt;
&lt;br /&gt;
=== Creating a Jupytext Notebook ===&lt;br /&gt;
&lt;br /&gt;
* From the Jupyter main page, click the option to create a '''Jupytext notebook'''&lt;br /&gt;
* Select the kernel after creation&lt;br /&gt;
&lt;br /&gt;
=== Opening an Existing Jupytext Notebook ===&lt;br /&gt;
&lt;br /&gt;
* Right-click the file&lt;br /&gt;
* Select '''Open as Jupytext Notebook'''&lt;br /&gt;
&lt;br /&gt;
=== Converting an Existing Notebook ===&lt;br /&gt;
&lt;br /&gt;
If you already have a standard '''.ipynb''' notebook:&lt;br /&gt;
* Create a new Jupytext notebook&lt;br /&gt;
* Copy–paste cells from the old notebook&lt;br /&gt;
* This is often a good opportunity to clean up and refactor the code&lt;br /&gt;
&lt;br /&gt;
== Step 2: Configure the Notebook to Accept Arguments ==&lt;br /&gt;
&lt;br /&gt;
By default, notebooks do not accept command-line arguments.  &lt;br /&gt;
To enable this, add an argument-parsing block at the '''top of the notebook/script'''.&lt;br /&gt;
&lt;br /&gt;
=== Example Argument Parsing Code ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import argparse&lt;br /&gt;
import sys&lt;br /&gt;
&lt;br /&gt;
def get_args():&lt;br /&gt;
    &amp;quot;&amp;quot;&amp;quot;Get the needed arguments.&amp;quot;&amp;quot;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    if 'launcher' not in sys.argv[0]:&lt;br /&gt;
        parser = argparse.ArgumentParser()&lt;br /&gt;
        parser.add_argument(&amp;quot;--sample&amp;quot;, type=int, required=True)&lt;br /&gt;
        # type=bool would treat any non-empty string (even &amp;quot;False&amp;quot;) as True&lt;br /&gt;
        parser.add_argument(&amp;quot;--rotloss&amp;quot;, type=lambda s: s.lower() in (&amp;quot;true&amp;quot;, &amp;quot;1&amp;quot;), required=True)&lt;br /&gt;
&lt;br /&gt;
        args = parser.parse_args()&lt;br /&gt;
        isample = args.sample&lt;br /&gt;
        rot_loss = args.rotloss&lt;br /&gt;
    else:&lt;br /&gt;
        # Default values when running interactively as a notebook&lt;br /&gt;
        isample = 0&lt;br /&gt;
        rot_loss = False&lt;br /&gt;
&lt;br /&gt;
    return isample, rot_loss&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Usage ===&lt;br /&gt;
&lt;br /&gt;
When executed as a script:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
./train_encoder.py --sample 0 --rotloss False&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When executed as a notebook:&lt;br /&gt;
* Default values are used&lt;br /&gt;
* No command-line arguments are required&lt;br /&gt;
&lt;br /&gt;
This allows the same file to work both:&lt;br /&gt;
* Interactively (Jupyter)&lt;br /&gt;
* Non-interactively (HTCondor)&lt;br /&gt;
&lt;br /&gt;
== Step 3: Create an HTCondor Submission File ==&lt;br /&gt;
&lt;br /&gt;
Create a submission file (e.g. '''autoenc.sub''') describing how the jobs should run.&lt;br /&gt;
&lt;br /&gt;
=== Example Submission File ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
executable = /data/incaem/scratch_nvme/eriksen/miniforge3/envs/py4dstem/bin/python&lt;br /&gt;
arguments = /nfs/pic.es/user/e/eriksen/proj/posthack/train_encoder.py --sample $(sample) --rotloss False&lt;br /&gt;
&lt;br /&gt;
# Logs&lt;br /&gt;
output          = logs/train_encoder_$(sample)_$(ClusterId).$(ProcId).out&lt;br /&gt;
error           = logs/train_encoder_$(sample)_$(ClusterId).$(ProcId).err&lt;br /&gt;
log             = logs/train_encoder_$(sample)_$(ClusterId).$(ProcId).log&lt;br /&gt;
&lt;br /&gt;
request_gpus    = 1&lt;br /&gt;
request_memory  = 8GB&lt;br /&gt;
&lt;br /&gt;
queue sample in (0 1 2 3)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Notes ===&lt;br /&gt;
&lt;br /&gt;
* Use the '''full path''' to:&lt;br /&gt;
** The Python executable from the desired conda environment&lt;br /&gt;
** The training script&lt;br /&gt;
* HTCondor variables (e.g. '''$(sample)''') are passed as arguments&lt;br /&gt;
* Each queued value corresponds to one independent training run&lt;br /&gt;
* Logs are separated per job&lt;br /&gt;
&lt;br /&gt;
== Step 4: Submit and Monitor the Jobs ==&lt;br /&gt;
&lt;br /&gt;
=== Submit ===&lt;br /&gt;
&lt;br /&gt;
From the directory containing the '''.sub''' file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
condor_submit autoenc.sub&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Monitor ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
condor_q&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or use standard HTCondor monitoring tools as needed.&lt;br /&gt;
&lt;br /&gt;
=== Important Notes ===&lt;br /&gt;
&lt;br /&gt;
* Always write outputs (e.g. model weights, checkpoints) to an '''absolute path'''&lt;br /&gt;
* Ensure output directories exist before submission&lt;br /&gt;
* Avoid relying on notebook state (each job runs independently)&lt;br /&gt;
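The first two notes can be enforced at the top of the training script itself; a minimal sketch (the output location is a placeholder, not a PIC path):

```python
import os

# Placeholder absolute output location; use your project's storage in practice.
output_dir = "/tmp/train_encoder_demo/checkpoints"

# Create the directory tree up front; safe to call repeatedly.
os.makedirs(output_dir, exist_ok=True)

# Build absolute artifact paths so the job's working directory (which
# HTCondor controls) does not matter.
weights_path = os.path.join(output_dir, "model_weights.pt")
print(weights_path)
```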
&lt;br /&gt;
== Summary ==&lt;br /&gt;
&lt;br /&gt;
This workflow enables:&lt;br /&gt;
* Notebook-based development&lt;br /&gt;
* Script-based batch execution&lt;br /&gt;
* Efficient parallel training with HTCondor&lt;br /&gt;
&lt;br /&gt;
It is not perfect, but it is a practical and robust solution for scaling notebook-based workflows once development stabilizes.&lt;br /&gt;
&lt;br /&gt;
== Future Improvements ==&lt;br /&gt;
&lt;br /&gt;
* Centralized documentation&lt;br /&gt;
* Shared templates for submission files&lt;br /&gt;
* Standard argument-handling utilities&lt;br /&gt;
* Automated notebook-to-batch conversion tools&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
Author: Martin&lt;/div&gt;</summary>
		<author><name>Eriksen</name></author>
	</entry>
	<entry>
		<id>https://pwiki.pic.es/index.php?title=Notebook_htcondor&amp;diff=1279</id>
		<title>Notebook htcondor</title>
		<link rel="alternate" type="text/html" href="https://pwiki.pic.es/index.php?title=Notebook_htcondor&amp;diff=1279"/>
		<updated>2025-12-20T13:05:58Z</updated>

		<summary type="html">&lt;p&gt;Eriksen: Created page with &amp;quot;= Running Jupyter Notebooks with Multiple Configurations Using HTCondor =  This page documents a practical workflow for running **multiple training configurations of a Jupyter...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Running Jupyter Notebooks with Multiple Configurations Using HTCondor =&lt;br /&gt;
&lt;br /&gt;
This page documents a practical workflow for running '''multiple training configurations of a Jupyter notebook in parallel using HTCondor'''.&lt;br /&gt;
The approach is particularly useful once a model or pipeline is stable and you want to scan over several configurations (e.g. different losses, datasets, or hyperparameters) without running them sequentially.&lt;br /&gt;
&lt;br /&gt;
== Motivation ==&lt;br /&gt;
&lt;br /&gt;
Neural network development is often done in Jupyter notebooks because they are:&lt;br /&gt;
* Easy to prototype&lt;br /&gt;
* Interactive&lt;br /&gt;
* Good for visual inspection of outputs&lt;br /&gt;
&lt;br /&gt;
However, once development stabilizes, running multiple configurations sequentially can be inefficient.&lt;br /&gt;
&lt;br /&gt;
'''Example:'''&lt;br /&gt;
* 2 loss functions × 4 samples = 8 training runs&lt;br /&gt;
* ~30 minutes per run&lt;br /&gt;
* Sequential execution → ~4 hours&lt;br /&gt;
* Parallel execution with HTCondor → ~30 minutes wall time&lt;br /&gt;
&lt;br /&gt;
This document describes how to:&lt;br /&gt;
# Convert a notebook into a script-friendly format&lt;br /&gt;
# Make it accept command-line arguments&lt;br /&gt;
# Submit multiple runs to HTCondor in parallel&lt;br /&gt;
&lt;br /&gt;
== Overview of the Workflow ==&lt;br /&gt;
&lt;br /&gt;
To run notebook-based training jobs in parallel using HTCondor, you need to:&lt;br /&gt;
&lt;br /&gt;
# Use Jupytext for the notebook&lt;br /&gt;
# Configure the notebook to accept command-line arguments&lt;br /&gt;
# Create an HTCondor submission file ('''.sub''')&lt;br /&gt;
# Submit and monitor the jobs&lt;br /&gt;
&lt;br /&gt;
Each step is described in detail below.&lt;br /&gt;
&lt;br /&gt;
== Step 1: Use Jupytext for the Notebook ==&lt;br /&gt;
&lt;br /&gt;
For HTCondor execution, it is easiest to use '''Jupytext notebooks'''.&lt;br /&gt;
&lt;br /&gt;
=== Why Jupytext? ===&lt;br /&gt;
&lt;br /&gt;
* The notebook is stored as a '''.py''' file and can be executed directly as a script&lt;br /&gt;
* Still fully usable as a notebook in Jupyter&lt;br /&gt;
* More suitable for version control (Git)&lt;br /&gt;
* Avoids conversion steps when running on batch systems&lt;br /&gt;
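For reference, a Jupytext notebook in the common py:percent format is an ordinary Python file whose cells are delimited by `# %%` markers, so it runs unchanged as a script (a minimal sketch; the cell contents are illustrative):

```python
# %% [markdown]
# # Training notebook
# Markdown cells become comment blocks in the paired .py file.

# %%
# A code cell: executes normally when the file is run as a script.
samples = [0, 1, 2, 3]
n_runs = 2 * len(samples)  # two loss settings per sample

# %%
print(n_runs)
```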
&lt;br /&gt;
=== Creating a Jupytext Notebook ===&lt;br /&gt;
&lt;br /&gt;
* From the Jupyter main page, click the option to create a '''Jupytext notebook'''&lt;br /&gt;
* Select the kernel after creation&lt;br /&gt;
&lt;br /&gt;
=== Opening an Existing Jupytext Notebook ===&lt;br /&gt;
&lt;br /&gt;
* Right-click the file&lt;br /&gt;
* Select '''Open as Jupytext Notebook'''&lt;br /&gt;
&lt;br /&gt;
=== Converting an Existing Notebook ===&lt;br /&gt;
&lt;br /&gt;
If you already have a standard '''.ipynb''' notebook:&lt;br /&gt;
* Create a new Jupytext notebook&lt;br /&gt;
* Copy–paste cells from the old notebook&lt;br /&gt;
* This is often a good opportunity to clean up and refactor the code&lt;br /&gt;
&lt;br /&gt;
== Step 2: Configure the Notebook to Accept Arguments ==&lt;br /&gt;
&lt;br /&gt;
By default, notebooks do not accept command-line arguments.  &lt;br /&gt;
To enable this, add an argument-parsing block at the '''top of the notebook/script'''.&lt;br /&gt;
&lt;br /&gt;
=== Example Argument Parsing Code ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import argparse&lt;br /&gt;
import sys&lt;br /&gt;
&lt;br /&gt;
def get_args():&lt;br /&gt;
    &amp;quot;&amp;quot;&amp;quot;Get the needed arguments.&amp;quot;&amp;quot;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    if 'launcher' not in sys.argv[0]:&lt;br /&gt;
        parser = argparse.ArgumentParser()&lt;br /&gt;
        parser.add_argument(&amp;quot;--sample&amp;quot;, type=int, required=True)&lt;br /&gt;
        parser.add_argument(&amp;quot;--rotloss&amp;quot;, type=lambda s: s.lower() == &amp;quot;true&amp;quot;, required=True)&lt;br /&gt;
&lt;br /&gt;
        args = parser.parse_args()&lt;br /&gt;
        isample = args.sample&lt;br /&gt;
        rot_loss = args.rotloss&lt;br /&gt;
    else:&lt;br /&gt;
        # Default values when running interactively as a notebook&lt;br /&gt;
        isample = 0&lt;br /&gt;
        rot_loss = False&lt;br /&gt;
&lt;br /&gt;
    return isample, rot_loss&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Usage ===&lt;br /&gt;
&lt;br /&gt;
When executed as a script:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
./train_encoder.py --sample 0 --rotloss False&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When executed as a notebook:&lt;br /&gt;
* Default values are used&lt;br /&gt;
* No command-line arguments are required&lt;br /&gt;
&lt;br /&gt;
This allows the same file to work both:&lt;br /&gt;
* Interactively (Jupyter)&lt;br /&gt;
* Non-interactively (HTCondor)&lt;br /&gt;
&lt;br /&gt;
== Step 3: Create an HTCondor Submission File ==&lt;br /&gt;
&lt;br /&gt;
Create a submission file (e.g. '''autoenc.sub''') describing how the jobs should run.&lt;br /&gt;
&lt;br /&gt;
=== Example Submission File ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
executable = /data/incaem/scratch_nvme/eriksen/miniforge3/envs/py4dstem/bin/python&lt;br /&gt;
arguments = /nfs/pic.es/user/e/eriksen/proj/posthack/train_encoder.py --sample $(sample) --rotloss False&lt;br /&gt;
&lt;br /&gt;
# Logs&lt;br /&gt;
output          = logs/train_encoder_$(sample)_$(ClusterId).$(ProcId).out&lt;br /&gt;
error           = logs/train_encoder_$(sample)_$(ClusterId).$(ProcId).err&lt;br /&gt;
log             = logs/train_encoder_$(sample)_$(ClusterId).$(ProcId).log&lt;br /&gt;
&lt;br /&gt;
request_gpus    = 1&lt;br /&gt;
request_memory  = 8GB&lt;br /&gt;
&lt;br /&gt;
queue sample in (0 1 2 3)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Notes ===&lt;br /&gt;
&lt;br /&gt;
* Use the '''full path''' to:&lt;br /&gt;
** The Python executable from the desired conda environment&lt;br /&gt;
** The training script&lt;br /&gt;
* HTCondor variables (e.g. '''$(sample)''') are passed as arguments&lt;br /&gt;
* Each queued value corresponds to one independent training run&lt;br /&gt;
* Logs are separated per job&lt;br /&gt;
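When scanning more than one parameter, the single-variable `queue sample in (0 1 2 3)` line can be replaced by HTCondor's multi-variable `queue ... from (...)` form; the item list can be generated with a few lines of Python. A sketch (not part of the PIC setup; the parameter names mirror the example above):

```python
from itertools import product

samples = [0, 1, 2, 3]
rotloss_values = ["True", "False"]

# One item line per job: HTCondor assigns the fields to $(sample) and $(rotloss).
items = ["{} {}".format(s, r) for s, r in product(samples, rotloss_values)]

queue_stanza = "queue sample, rotloss from (\n" + "\n".join(items) + "\n)"
print(queue_stanza)
```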
&lt;br /&gt;
== Step 4: Submit and Monitor the Jobs ==&lt;br /&gt;
&lt;br /&gt;
=== Submit ===&lt;br /&gt;
&lt;br /&gt;
From the directory containing the '''.sub''' file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
condor_submit autoenc.sub&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Monitor ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
condor_q&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or use standard HTCondor monitoring tools as needed.&lt;br /&gt;
&lt;br /&gt;
=== Important Notes ===&lt;br /&gt;
&lt;br /&gt;
* Always write outputs (e.g. model weights, checkpoints) to an '''absolute path'''&lt;br /&gt;
* Ensure output directories exist before submission&lt;br /&gt;
* Avoid relying on notebook state (each job runs independently)&lt;br /&gt;
&lt;br /&gt;
== Summary ==&lt;br /&gt;
&lt;br /&gt;
This workflow enables:&lt;br /&gt;
* Notebook-based development&lt;br /&gt;
* Script-based batch execution&lt;br /&gt;
* Efficient parallel training with HTCondor&lt;br /&gt;
&lt;br /&gt;
It is not perfect, but it is a practical and robust solution for scaling notebook-based workflows once development stabilizes.&lt;br /&gt;
&lt;br /&gt;
== Future Improvements ==&lt;br /&gt;
&lt;br /&gt;
* Centralized documentation&lt;br /&gt;
* Shared templates for submission files&lt;br /&gt;
* Standard argument-handling utilities&lt;br /&gt;
* Automated notebook-to-batch conversion tools&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
Author: Martin&lt;/div&gt;</summary>
		<author><name>Eriksen</name></author>
	</entry>
	<entry>
		<id>https://pwiki.pic.es/index.php?title=JupyterHub&amp;diff=1278</id>
		<title>JupyterHub</title>
		<link rel="alternate" type="text/html" href="https://pwiki.pic.es/index.php?title=JupyterHub&amp;diff=1278"/>
		<updated>2025-12-20T13:05:29Z</updated>

		<summary type="html">&lt;p&gt;Eriksen: /* Running notebooks through HTCondor */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
= Introduction =&lt;br /&gt;
&lt;br /&gt;
PIC offers a service for running Jupyter notebooks on CPU or GPU resources. This service is primarily intended for code development and prototyping rather than data processing. The usage is similar to running notebooks on your personal computer, but offers the advantage of developing and testing your code on different hardware configurations, as well as facilitating the scalability of the code, since it is tested in the same environment in which it would run at scale.&lt;br /&gt;
&lt;br /&gt;
Since the service is strictly intended for development and small-scale testing tasks, a shutdown policy for the sessions has been put in place:&lt;br /&gt;
&lt;br /&gt;
# The maximum duration for a session is 48h.&lt;br /&gt;
# After an idle period of 2 hours, the session will be closed. &lt;br /&gt;
&lt;br /&gt;
In practice, this means that the test data volume you work with during a session should be small enough to be processed in less than 48 hours.&lt;br /&gt;
&lt;br /&gt;
== How to connect to the service ==&lt;br /&gt;
&lt;br /&gt;
Go to [https://jupyter.pic.es jupyter.pic.es] to reach the login screen.&lt;br /&gt;
&lt;br /&gt;
[[File:login.png|700px|Login screen]]&lt;br /&gt;
&lt;br /&gt;
Sign in with your PIC user credentials. This will take you to the following screen.&lt;br /&gt;
&lt;br /&gt;
[[File:ScreenshotJupyterSpawn.png|700px|current]]&lt;br /&gt;
&lt;br /&gt;
Here you can choose the hardware configuration for your Jupyter session, as well as the experiment (project) you will be working on during the session. After choosing a configuration and pressing start, the next screen shows the progress of the initialisation process. Keep in mind that the job containing your Jupyter session is actually sent to the HTCondor queuing system and waits for available resources before being started. This usually takes less than a minute, but can take up to a few minutes depending on overall resource usage.&lt;br /&gt;
&lt;br /&gt;
[[File:screen02.png|900px]]&lt;br /&gt;
&lt;br /&gt;
In the next screen you can choose the tool that you want to use for your work: a Python notebook, a Python console or a plain bash terminal.&lt;br /&gt;
For the Python environment (either notebook or console) you have two default options:&lt;br /&gt;
* the ipykernel version of Python 3&lt;br /&gt;
* the XPython version of Python 3.9, which allows you to use the integrated debugging module.&lt;br /&gt;
&lt;br /&gt;
== Virtual desktop ==&lt;br /&gt;
Further, you see an icon with a &amp;quot;D&amp;quot; (desktop), which starts a VNC session that allows the use of programs with graphical user interfaces.&lt;br /&gt;
&lt;br /&gt;
More recently, an icon for Visual Studio, an integrated development environment, has also been added.&lt;br /&gt;
&lt;br /&gt;
[[File:ScreenshotJupyterlab20231103.png|700px]]&lt;br /&gt;
&lt;br /&gt;
Your Python environments should appear under the Notebook and Console headers. In a later section we will show you how to create a new environment and how to remove an existing one.&lt;br /&gt;
&lt;br /&gt;
== Terminate your session and logout ==&lt;br /&gt;
&lt;br /&gt;
It is important that you terminate your session before you log out. In order to do so, go to the top page menu &amp;quot;'''File''' -&amp;gt; '''Hub Control Panel'''&amp;quot; and you will see the following screen.&lt;br /&gt;
&lt;br /&gt;
[[File:screen04.png|600px]]&lt;br /&gt;
&lt;br /&gt;
Here, click on the '''Stop My Server''' button. After that you can log out by clicking the '''Logout''' button in the right upper corner.&lt;br /&gt;
&lt;br /&gt;
=  Python virtual environments =&lt;br /&gt;
&lt;br /&gt;
This section covers the use of Python virtual environments with Jupyter.&lt;br /&gt;
&lt;br /&gt;
== Prebuilt environments == &lt;br /&gt;
&lt;br /&gt;
PIC's JupyterHub service comes with a collection of prebuilt environments located at '''/data/jupyter/software/envs'''.&lt;br /&gt;
&lt;br /&gt;
The master environment located at '''/data/jupyter/software/envs/master''' is the one used to start the JupyterLab service and the default for new notebooks.&lt;br /&gt;
&lt;br /&gt;
This is a non-exhaustive list of the packages included:&lt;br /&gt;
  - astropy=6.1.0&lt;br /&gt;
  - bokeh=3.4.1&lt;br /&gt;
  - dash=2.17.0&lt;br /&gt;
  - dask=2024.5.1&lt;br /&gt;
  - findspark=2.0.1&lt;br /&gt;
  - matplotlib=3.8.4&lt;br /&gt;
  - numpy=1.26.4&lt;br /&gt;
  - pandas=2.2.2&lt;br /&gt;
  - pillow=10.3.0&lt;br /&gt;
  - plotly=5.22.0&lt;br /&gt;
  - pyhive=0.7.0&lt;br /&gt;
  - python=3.12&lt;br /&gt;
  - pywavelets=1.4.1&lt;br /&gt;
  - scikit-image=0.22.0&lt;br /&gt;
  - scikit-learn=1.5.0&lt;br /&gt;
  - scipy=1.13.1&lt;br /&gt;
  - seaborn=0.13.2&lt;br /&gt;
  - statsmodels=0.14.2&lt;br /&gt;
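From a notebook you can verify that the kernel's environment actually provides a package in the expected version, using the standard-library `importlib.metadata` module (a sketch; `numpy` is just an example package from the list above):

```python
from importlib import metadata

def package_version(name):
    """Return the installed version of *name*, or None if it is absent."""
    try:
        return metadata.version(name)
    except metadata.PackageNotFoundError:
        return None

# Example: check the running environment against the list above.
print(package_version("numpy"))
```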
&lt;br /&gt;
== Initialize conda (we highly recommend the use of mamba/micromamba) ==&lt;br /&gt;
&lt;br /&gt;
Before using conda/mamba in your bash session, you have to initialize it.&lt;br /&gt;
* For access to an available conda/mamba installation, please get in contact with your project liaison at PIC, who will give you the actual value for the '''/path/to/conda/mamba''' placeholder.&lt;br /&gt;
* If you want to use your own conda/mamba/micromamba installation, there are two recommended options&lt;br /&gt;
** '''miniforge''': a distribution with conda and mamba executables in a minimal base environment, instructions [https://github.com/conda-forge/miniforge here]&lt;br /&gt;
** '''micromamba''': a self-contained executable (micromamba) with no base environment, instructions [https://mamba.readthedocs.io/en/latest/installation/micromamba-installation.html here]&lt;br /&gt;
&lt;br /&gt;
Log onto Jupyter and start a session. On the homepage of your Jupyter session, click on the terminal button on the session dashboard on the right to open a bash terminal.&lt;br /&gt;
&lt;br /&gt;
In order to use conda/mamba/micromamba you need to initialize the shell. This initialization can be made persistent, which applies some changes to your '''~/.bashrc''' file, or you can do it every time you want to use it.&lt;br /&gt;
&lt;br /&gt;
Run the following command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
eval &amp;quot;$(/data/astro/software/miniforge3/bin/conda shell.bash hook)&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, if you are using miniforge and you want to persist the initialization:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/data/astro/software/miniforge3/bin/mamba init&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This actually changes the .bashrc file in your home directory in order to activate the base environment on login.&lt;br /&gt;
To avoid the base environment being activated every time you log on to a node, run:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ conda config --set auto_activate_base false&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Link an existing environment to Jupyter ==&lt;br /&gt;
&lt;br /&gt;
You can find instructions on how to create your own environments, e.g. [[#Create_virtual_environments_with_venv_or_conda | here]].&lt;br /&gt;
&lt;br /&gt;
Log into Jupyter and start a session. From the session dashboard, choose the bash terminal.&lt;br /&gt;
&lt;br /&gt;
Inside the terminal, activate your environment.&lt;br /&gt;
&lt;br /&gt;
For '''conda/mamba''' environments:&lt;br /&gt;
* if you created the environment without a prefix:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba activate environment&lt;br /&gt;
(...) [neissner@td110 ~]$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The parentheses (...) in front of your bash prompt show the name of your environment.&lt;br /&gt;
* if you created the environment with a prefix:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba activate /path/to/environment&lt;br /&gt;
(...) [neissner@td110 ~]$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The parentheses (...) in front of your bash prompt show the absolute path of your environment.&lt;br /&gt;
&lt;br /&gt;
For '''venv''' environments:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ source /path/to/environment/bin/activate&lt;br /&gt;
(...) [neissner@td110 ~]$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Link the environment to a Jupyter kernel. For both, '''conda/mamba''' and '''venv''':&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(...) [neissner@td110 ~]$ python -m  ipykernel install --user --name=whatever_kernel_name&lt;br /&gt;
Installed kernelspec whatever_kernel_name in &lt;br /&gt;
                         /nfs/pic.es/user/n/neissner/.local/share/jupyter/kernels/whatever_kernel_name&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you don't have the '''ipykernel''' module installed in your environment you may receive an error message like the one below when trying to run the previous command.&lt;br /&gt;
&amp;lt;pre&amp;gt;No module named ipykernel&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If this is the case, you need to install it by running: '''pip install ipykernel'''&lt;br /&gt;
&lt;br /&gt;
Deactivate your environment. &lt;br /&gt;
&lt;br /&gt;
For conda:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(...) [neissner@td110 ~]$ mamba deactivate&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For venv:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(...) [neissner@td110 ~]$ deactivate&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you can exit the terminal. After refreshing the Jupyter page, your whatever_kernel_name appears in the dashboard. In this example, '''test''' has been used as whatever_kernel_name.&lt;br /&gt;
&lt;br /&gt;
[[File:screen05.png|700px]]&lt;br /&gt;
&lt;br /&gt;
== Unlink an environment from Jupyter ==&lt;br /&gt;
Log onto Jupyter, start a session and from the session dashboard choose the bash terminal. To remove your environment/kernel from Jupyter run:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ jupyter kernelspec uninstall whatever_kernel_name&lt;br /&gt;
Kernel specs to remove:&lt;br /&gt;
  whatever_kernel_name     /nfs/pic.es/user/n/neissner/.local/share/jupyter/kernels/whatever_kernel_name&lt;br /&gt;
Remove 1 kernel specs [y/N]: y&lt;br /&gt;
[RemoveKernelSpec] Removed /nfs/pic.es/user/n/neissner/.local/share/jupyter/kernels/whatever_kernel_name&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Keep in mind that, although not available in Jupyter anymore, the environment still exists. Whenever you need it, you can link it again.&lt;br /&gt;
&lt;br /&gt;
== Create virtual environments with venv or conda ==&lt;br /&gt;
&lt;br /&gt;
Before creating a new environment, please get in contact with your project liaison at PIC as there may be already a suitable environment for your needs in place.&lt;br /&gt;
&lt;br /&gt;
If none of the existing environments suits your needs, you can create a new environment.&lt;br /&gt;
First, create a directory in a suitable place to store the environment. For single-user environments, place them in your home under '''~/env'''. For environments that will be shared with other project users, contact your project liaison and ask for a path in a shared storage volume that is visible to all of them.&lt;br /&gt;
&lt;br /&gt;
Once you have the location (e.g. /path/to/env), create the environment with the following commands:&lt;br /&gt;
&lt;br /&gt;
For '''venv''' environments '''(recommended)'''&lt;br /&gt;
&lt;br /&gt;
To create your_env at /path/to/env/your_env:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ cd /path/to/env&lt;br /&gt;
[neissner@td110 ~]$ python3 -m venv your_env&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Now you should be able to activate your environment and install additional modules&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ cd /path/to/env&lt;br /&gt;
[neissner@td110 ~]$ source your_env/bin/activate&lt;br /&gt;
(...)[neissner@td110 ~]$ pip install additional_module1 additional_module2 ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For '''conda/mamba''' environments&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba create --prefix /path/to/env/your_env module1 module2 ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The list of modules (module1, module2, ...) is optional. For instance, for a python3 environment with scipy you would specify: ''python=3 scipy'' &lt;br /&gt;
&lt;br /&gt;
Now you should be able to activate your environment and install additional modules&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba activate /path/to/env/your_env&lt;br /&gt;
(...)[neissner@td110 ~]$ mamba install additional_module1 additional_module2 ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You can use pip install inside a mamba environment; however, resolving dependencies might require installing additional packages manually.&lt;br /&gt;
&lt;br /&gt;
== Conda / mamba configuration ==&lt;br /&gt;
&lt;br /&gt;
The behaviour of conda/mamba can be configured through the &amp;quot;$HOME/.condarc&amp;quot; file, described [https://docs.conda.io/projects/conda/en/latest/configuration.html here]. Some interesting parameters:&lt;br /&gt;
&lt;br /&gt;
* envs_dirs: The list of directories to search for named environments. E.g.: different locations where you created environments&lt;br /&gt;
&lt;br /&gt;
  envs_dirs:&lt;br /&gt;
    - /data/pic/scratch/torradeflot/envs&lt;br /&gt;
    - /data/astro/scratch/torradeflot/envs&lt;br /&gt;
    - /data/aai/scratch/torradeflot/envs&lt;br /&gt;
&lt;br /&gt;
* pkgs_dirs: The folders where conda packages are stored&lt;br /&gt;
&lt;br /&gt;
  pkgs_dirs:&lt;br /&gt;
  - /data/aai/scratch_ssd/torradeflot/pkgs&lt;br /&gt;
  - /data/aai/scratch/torradeflot/pkgs&lt;br /&gt;
  - /data/pic/scratch/torradeflot/pkgs&lt;br /&gt;
&lt;br /&gt;
If '''pkgs_dirs''' and '''envs_dirs''' are in the same storage, conda will use hard links, thus saving disk space.&lt;br /&gt;
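The space saving from hard links can be demonstrated with the standard library: two directory entries on the same filesystem share a single inode, so the data is stored only once. This is what conda exploits when the package cache and the environments live on the same storage (a sketch using a temporary directory):

```python
import os
import tempfile

tmp = tempfile.mkdtemp()
pkg = os.path.join(tmp, "pkg_file")  # stands in for a cached package file
env = os.path.join(tmp, "env_file")  # stands in for the file inside an env

with open(pkg, "w") as f:
    f.write("x" * 1024)

# A hard link: same inode, no extra data blocks.
os.link(pkg, env)

same_inode = os.stat(pkg).st_ino == os.stat(env).st_ino
link_count = os.stat(pkg).st_nlink
print(same_inode, link_count)
```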
&lt;br /&gt;
= Proper usage of X509 based proxies =&lt;br /&gt;
&lt;br /&gt;
We found recently that the usage of proxies within a Jupyter session might cause problems because the environment changes certain standard locations such as '''/tmp'''.&lt;br /&gt;
&lt;br /&gt;
For correct functioning, please create the proxy the following way (example for Virgo):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ /bin/voms-proxy-init --voms virgo:/virgo/ligo --out ./x509up_u$(id -u)&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ export X509_USER_PROXY=./x509up_u$(id -u)&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ ls /cvmfs/ligo.osgstorage.org&lt;br /&gt;
ls: cannot access /cvmfs/ligo.osgstorage.org: Permission denied&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here the proxy cannot be located because the path is relative. Therefore, we have to put the complete path into the variable:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ pwd&lt;br /&gt;
/nfs/pic.es/user/&amp;lt;letter&amp;gt;/&amp;lt;user&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ export X509_USER_PROXY=/nfs/pic.es/user/&amp;lt;letter&amp;gt;/&amp;lt;user&amp;gt;/x509up_u$(id -u)&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ ls /cvmfs/ligo.osgstorage.org&lt;br /&gt;
frames  powerflux  pycbc  test_access&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
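The same fix can be applied from inside Python before any library reads the variable; `os.path.abspath` resolves the relative location once, so later working-directory changes are harmless (a sketch; the proxy file name follows the example above):

```python
import os

# Relative proxy location, as created by voms-proxy-init in the example above.
proxy = "./x509up_u{}".format(os.getuid())

# Export the absolute path so the proxy can be found from any directory.
os.environ["X509_USER_PROXY"] = os.path.abspath(proxy)

print(os.environ["X509_USER_PROXY"])
```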
&lt;br /&gt;
= Software of particular interest =&lt;br /&gt;
== SageMath ==&lt;br /&gt;
&lt;br /&gt;
[https://www.sagemath.org/ SageMath] is particularly interesting for cosmology because it allows symbolic calculations, e.g. deriving the equations of motion for the scale factor starting from a customised space-time metric.&lt;br /&gt;
&lt;br /&gt;
=== Standard cosmology examples ===&lt;br /&gt;
&lt;br /&gt;
* The Friedmann equations for the FLRW solution of the Einstein equations.&lt;br /&gt;
You can find the corresponding Notebook in any PIC terminal at '''/data/astro/software/notebooks/FLRW_cosmology.ipynb'''&lt;br /&gt;
&lt;br /&gt;
* The notebook you can find at '''/data/astro/software/notebooks/FLRW_cosmology_solutions.ipynb''' uses known analytical solutions of the FLRW cosmology and produces this image for the evolution of the scale factor:&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot_Sage05.png|300px]]&lt;br /&gt;
&lt;br /&gt;
* The notebook you can find at '''/data/astro/software/notebooks/Interior_Schwarzschild.ipynb''' shows the formalism for the interior Schwarzschild metric and displays the solutions for density and pressure of a static celestial object that is sufficiently larger than its Schwarzschild radius. The pressure for an object with constant density is shown in the image:&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot_Sage06.png|300px]]&lt;br /&gt;
&lt;br /&gt;
=== Enabling SageMath environment in Jupyter ===&lt;br /&gt;
&lt;br /&gt;
If you have never initialized mamba, run:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ /data/astro/software/centos7/conda/mambaforge_4.14.0/bin/mamba init&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ conda config --set auto_activate_base false&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After that you can enable SageMath for its use in a Jupyter notebook session: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~] mamba activate /data/astro/software/envs/sage&lt;br /&gt;
(/data/astro/software/envs/sage) [&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ python -m  ipykernel install --user --name=sage&lt;br /&gt;
....&lt;br /&gt;
(/data/astro/software/envs/sage) [&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ mamba deactivate&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This creates a file in your home '''~/.local/share/jupyter/kernels/sage/kernel.json'''&lt;br /&gt;
which has to be modified to look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
 &amp;quot;argv&amp;quot;: [&lt;br /&gt;
  &amp;quot;/data/astro/software/envs/sage/bin/sage&amp;quot;,&lt;br /&gt;
  &amp;quot;--python&amp;quot;,&lt;br /&gt;
  &amp;quot;-m&amp;quot;,&lt;br /&gt;
  &amp;quot;sage.repl.ipython_kernel&amp;quot;,&lt;br /&gt;
  &amp;quot;-f&amp;quot;,&lt;br /&gt;
  &amp;quot;{connection_file}&amp;quot;&lt;br /&gt;
 ],&lt;br /&gt;
 &amp;quot;display_name&amp;quot;: &amp;quot;sage&amp;quot;,&lt;br /&gt;
 &amp;quot;language&amp;quot;: &amp;quot;sage&amp;quot;,&lt;br /&gt;
 &amp;quot;metadata&amp;quot;: {&lt;br /&gt;
  &amp;quot;debugger&amp;quot;: true&lt;br /&gt;
 }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Next time you go to your Jupyter dashboard you will find the sage environment listed there.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Dask =&lt;br /&gt;
Dask supports parallel computations in Python. The PIC JupyterLab has an extension for launching&lt;br /&gt;
your own Dask clusters. For more information, see [[Dask|Dask documentation]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Using a singularity image as a jupyter kernel =&lt;br /&gt;
&lt;br /&gt;
Sometimes the software stack of a project is provided in the form of a singularity image. It is then convenient to use this image as a kernel for the notebooks in jupyter.pic.es.&lt;br /&gt;
&lt;br /&gt;
The singularity image to be used as a kernel needs to fulfill some requirements. Different requirements apply depending on the programming language.&lt;br /&gt;
&lt;br /&gt;
== python jupyter kernel in a singularity image ==&lt;br /&gt;
&lt;br /&gt;
The singularity image needs to have '''python''' and the '''ipykernel''' module installed.&lt;br /&gt;
&lt;br /&gt;
* Create the folder that will host the kernel definition&lt;br /&gt;
&lt;br /&gt;
  mkdir -p $HOME/.local/share/jupyter/kernels/singularity&lt;br /&gt;
&lt;br /&gt;
* Create the '''kernel.json''' file inside it with the following content:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 {&lt;br /&gt;
   &amp;quot;argv&amp;quot;: [&lt;br /&gt;
     &amp;quot;singularity&amp;quot;,&lt;br /&gt;
     &amp;quot;exec&amp;quot;,&lt;br /&gt;
     &amp;quot;--cleanenv&amp;quot;,&lt;br /&gt;
     &amp;quot;/path/to/the/singularity/image.sif&amp;quot;,&lt;br /&gt;
     &amp;quot;python&amp;quot;,&lt;br /&gt;
     &amp;quot;-m&amp;quot;,&lt;br /&gt;
     &amp;quot;ipykernel&amp;quot;,&lt;br /&gt;
     &amp;quot;-f&amp;quot;,&lt;br /&gt;
     &amp;quot;{connection_file}&amp;quot;&lt;br /&gt;
   ],&lt;br /&gt;
   &amp;quot;language&amp;quot;: &amp;quot;python&amp;quot;,&lt;br /&gt;
   &amp;quot;display_name&amp;quot;: &amp;quot;singularity-kernel&amp;quot;&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Refresh or start the JupyterLab interface and the singularity kernel should appear in the launcher tab.&lt;br /&gt;
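Since the kernelspec is plain JSON, both steps can also be scripted (a sketch; the image path and kernel name are placeholders, and `JUPYTER_DATA_DIR` is the standard Jupyter override for the kernel location):

```python
import json
import os

# Placeholder kernel name and image path.
kernel_name = "singularity"
image = "/path/to/the/singularity/image.sif"

# Jupyter looks in JUPYTER_DATA_DIR if set, else in ~/.local/share/jupyter.
base = os.environ.get("JUPYTER_DATA_DIR",
                      os.path.expanduser("~/.local/share/jupyter"))
kernel_dir = os.path.join(base, "kernels", kernel_name)
os.makedirs(kernel_dir, exist_ok=True)

spec = {
    "argv": ["singularity", "exec", "--cleanenv", image,
             "python", "-m", "ipykernel", "-f", "{connection_file}"],
    "language": "python",
    "display_name": "singularity-kernel",
}

with open(os.path.join(kernel_dir, "kernel.json"), "w") as f:
    json.dump(spec, f, indent=1)
```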
&lt;br /&gt;
= GPUs =&lt;br /&gt;
&lt;br /&gt;
The way to identify the GPUs that are assigned to your job is:&lt;br /&gt;
* check the environment variable CUDA_VISIBLE_DEVICES. In a terminal run &amp;quot;echo $CUDA_VISIBLE_DEVICES&amp;quot;. The variable contains a list of comma-separated GPU ids, so you immediately know how many GPUs are assigned to your job. If the variable does not exist, no GPUs are assigned to the job.&lt;br /&gt;
&lt;br /&gt;
* list the gpus with nvidia-smi, in a terminal run &amp;quot;nvidia-smi -L&amp;quot;, and look for the gpus you've been assigned. Remember their indexes (integers from 0 to 7)&lt;br /&gt;
&lt;br /&gt;
* The two steps above can be done with the following command:&lt;br /&gt;
&lt;br /&gt;
nvidia-smi -L | grep  $CUDA_VISIBLE_DEVICES&lt;br /&gt;
&lt;br /&gt;
if only having a single assigned GPU.&lt;br /&gt;
&lt;br /&gt;
[[File:check_gpu_id_highlighted.png]]&lt;br /&gt;
&lt;br /&gt;
* In the GPU dashboard, the GPUs are identified by their index.&lt;br /&gt;
&lt;br /&gt;
[[File:check_gpu_resources_highlighted.png]]&lt;br /&gt;
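The first step can also be scripted from a notebook cell; a small helper, assuming CUDA_VISIBLE_DEVICES follows the comma-separated format described above:&lt;br /&gt;
&lt;br /&gt;
```python
import os

def assigned_gpus():
    """Return the GPU ids in CUDA_VISIBLE_DEVICES, or [] if none assigned."""
    value = os.environ.get("CUDA_VISIBLE_DEVICES", "")
    return [gpu.strip() for gpu in value.split(",") if gpu.strip()]
```
&lt;br /&gt;
len(assigned_gpus()) gives the number of GPUs assigned to the job.&lt;br /&gt;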
&lt;br /&gt;
= Jupyterlab user guide =&lt;br /&gt;
&lt;br /&gt;
You can find the official documentation of the currently installed version of jupyterlab [https://jupyterlab.readthedocs.io/en/4.2.x/ here]. There you will find instructions on how to&lt;br /&gt;
* [https://jupyterlab.readthedocs.io/en/4.2.x/user/commands.html Access the command palette]&lt;br /&gt;
* [https://jupyterlab.readthedocs.io/en/4.2.x/user/toc.html Build a Table Of Contents]&lt;br /&gt;
* [https://jupyterlab.readthedocs.io/en/4.2.x/user/debugger.html Debug your code]&lt;br /&gt;
&lt;br /&gt;
A set of non-official jupyterlab extensions is installed to provide additional functionality.&lt;br /&gt;
&lt;br /&gt;
== jupytext ==&lt;br /&gt;
Pair your notebooks with text files to enhance version tracking.&lt;br /&gt;
https://jupytext.readthedocs.io&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
&lt;br /&gt;
If you had a notebook (.ipynb file) containing only the cell below, tracked in a git repository&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%matplotlib inline&lt;br /&gt;
import numpy as np&lt;br /&gt;
import matplotlib.pyplot as plt&lt;br /&gt;
plt.imshow(np.random.random([10, 10]))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Different executions of the cell would produce different images, and the images are embedded in a pseudo-binary format inside the notebook file. In this case, doing a '''git diff''' of the .ipynb file would produce a huge output (because the image changed), even if there wasn't any change in the code. It is thus convenient to sync the notebook with a text file (e.g. a .py script) using the jupytext extension and track this one with git. The outputs, including images, as well as some additional metadata, won't be added to the synced text file. So in the case of different executions of the same notebook, the diff will always be empty.&lt;br /&gt;
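The effect can be illustrated with plain dicts that mimic the .ipynb JSON layout (a conceptual sketch, not the jupytext implementation): once outputs and execution counters are removed, two runs of the same code compare equal.&lt;br /&gt;
&lt;br /&gt;
```python
import json

def strip_outputs(notebook):
    """Deep-copy a notebook dict and drop outputs and execution counters."""
    stripped = json.loads(json.dumps(notebook))
    for cell in stripped.get("cells", []):
        if cell.get("cell_type") == "code":
            cell["outputs"] = []
            cell["execution_count"] = None
    return stripped

# Two runs of the same cell: identical source, different embedded images.
run1 = {"cells": [{"cell_type": "code",
                   "source": ["plt.imshow(np.random.random([10, 10]))"],
                   "outputs": [{"data": {"image/png": "iVBORw0...run1"}}],
                   "execution_count": 1}]}
run2 = {"cells": [{"cell_type": "code",
                   "source": ["plt.imshow(np.random.random([10, 10]))"],
                   "outputs": [{"data": {"image/png": "iVBORw0...run2"}}],
                   "execution_count": 2}]}
```
&lt;br /&gt;
Here run1 != run2 (the raw .ipynb files would produce a large diff), while strip_outputs(run1) == strip_outputs(run2), which is why the synced text file gives empty diffs.&lt;br /&gt;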
&lt;br /&gt;
== git ==&lt;br /&gt;
Sidebar GUI for git repo management&lt;br /&gt;
https://github.com/jupyterlab/jupyterlab-git&lt;br /&gt;
&lt;br /&gt;
== variable inspector ==&lt;br /&gt;
Variable inspection à la Matlab&lt;br /&gt;
https://github.com/jupyterlab-contrib/jupyterlab-variableInspector&lt;br /&gt;
&lt;br /&gt;
== jupyter server proxy ==&lt;br /&gt;
&lt;br /&gt;
This extension is installed in PIC's jupyter environment. It makes network/web services running on the same host as the jupyterlab server accessible from outside through the &amp;quot;https://jupyter.pic.es/user/{username}/proxy/{port}&amp;quot; URL.&lt;br /&gt;
&lt;br /&gt;
Full documentation here: https://jupyter-server-proxy.readthedocs.io/en/latest/index.html&lt;br /&gt;
...&lt;br /&gt;
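As an illustration, here is a toy web service started from a notebook or terminal inside the session, using only the standard library; the proxy URL pattern is the one above:&lt;br /&gt;
&lt;br /&gt;
```python
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"hello from inside the session\n")

    def log_message(self, *args):
        pass  # keep the notebook output quiet

# Port 0 lets the OS pick a free port; note it down for the proxy URL.
server = ThreadingHTTPServer(("127.0.0.1", 0), Hello)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()
# The service is now reachable from outside at
# https://jupyter.pic.es/user/{username}/proxy/{port}
```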
&lt;br /&gt;
= Code samples =&lt;br /&gt;
&lt;br /&gt;
A repository with sample code can be found here: https://gitlab.pic.es/services/code-samples/&lt;br /&gt;
&lt;br /&gt;
= Running notebooks through HTCondor =&lt;br /&gt;
After developing a notebook, you might want to run it with different configurations. The&lt;br /&gt;
following documentation explains  &lt;br /&gt;
&lt;br /&gt;
[[notebook_htcondor|how to run a notebook through HTCondor.]]&lt;br /&gt;
&lt;br /&gt;
= Troubleshooting =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Logs ==&lt;br /&gt;
&lt;br /&gt;
The log files for the jupyterlab server are stored in &amp;quot;~/.jupyter&amp;quot;. They are created once the jupyterlab server job has finished.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Clean workspaces ==&lt;br /&gt;
&lt;br /&gt;
Jupyterlab stores the workspace status in the &amp;quot;~/.jupyter/lab/workspaces&amp;quot; folder. If you want to start with a fresh (empty) workspace, delete all the content of this folder before launching the notebook.&lt;br /&gt;
&lt;br /&gt;
    cd ~/.jupyter/lab/workspaces&lt;br /&gt;
    rm *&lt;br /&gt;
&lt;br /&gt;
= Known errors =&lt;br /&gt;
&lt;br /&gt;
== Error 500: Internal Server Error ==&lt;br /&gt;
&lt;br /&gt;
This is a generic error meaning that the jupyterlab server failed. This could happen for different reasons:&lt;br /&gt;
&lt;br /&gt;
* Your HOME folder is full. Log in to &amp;quot;ui.pic.es&amp;quot; and run &amp;quot;quota&amp;quot; to check the usage vs quota. If it is full you'll have to free up space.&lt;br /&gt;
&lt;br /&gt;
== 504 Gateway timeout ==&lt;br /&gt;
&lt;br /&gt;
The notebook job is running in HTCondor but the user can not access the notebook server. Ultimately a 504 error is received.&lt;br /&gt;
&lt;br /&gt;
This is probably because there's some error when starting the jupyterlab server. First of all, shutdown the notebook server and [[#Logs|check the logs]] to better identify the problem. If you don't see the source of the error, try to [[#Clean_workspaces|clean the workspaces]] and launch a notebook again.&lt;br /&gt;
&lt;br /&gt;
== Install ROOT ==&lt;br /&gt;
&lt;br /&gt;
Environment creation and kernel installation&lt;br /&gt;
&lt;br /&gt;
    $ micromamba env create -p /data/pic/scratch/torradeflot/envs/mcdata root ipykernel&lt;br /&gt;
    $ micromamba activate mcdata&lt;br /&gt;
    $ python -m ipykernel install --user --name mcdata&lt;br /&gt;
&lt;br /&gt;
After doing this ROOT can be imported from a python shell, but it does not work from a notebook.&lt;br /&gt;
&lt;br /&gt;
* ROOT uses JIT compilation with Cling: https://root.cern/cling/&lt;br /&gt;
* conda provides its own set of compilation tools: gcc, gxx, gfortran&lt;br /&gt;
* Some environment variables are necessary to ensure that these two pieces work well together:&lt;br /&gt;
** PATH: to be able to find the compiler tools&lt;br /&gt;
** CONDA_BUILD_SYSROOT: needed to configure the compiler call&lt;br /&gt;
&lt;br /&gt;
These environment variables are not propagated to the notebook, because they are set by '''.bashrc''', '''conda activate''', etc., so we need to set them explicitly in the notebook:&lt;br /&gt;
&lt;br /&gt;
     import os&lt;br /&gt;
     import sys&lt;br /&gt;
     from pathlib import Path&lt;br /&gt;
     bin_dir = Path(sys.executable).parent&lt;br /&gt;
     os.environ['PATH'] = str(bin_dir) + ':' + os.environ['PATH']&lt;br /&gt;
     os.environ['CONDA_BUILD_SYSROOT'] = str(bin_dir.parent / 'x86_64-conda-linux-gnu/sysroot')&lt;br /&gt;
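After setting these variables you can check, from the same notebook, that tools are now resolved from the environment first. A small sketch: shutil.which simply searches the PATH set above.&lt;br /&gt;
&lt;br /&gt;
```python
import os
import shutil
import sys
from pathlib import Path

def prepend_to_path(directory):
    """Put a directory at the front of PATH so its tools are found first."""
    os.environ['PATH'] = str(directory) + os.pathsep + os.environ['PATH']

bin_dir = Path(sys.executable).parent
prepend_to_path(bin_dir)
# The environment's own interpreter is now the first match on PATH:
found = shutil.which(Path(sys.executable).name)
```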
&lt;br /&gt;
== Loading libraries is very slow ==&lt;br /&gt;
Conda environments stored on hard-drive-based storage can be extremely slow to load. If you encounter this problem, please ask&lt;br /&gt;
your support contact for an &amp;quot;SSD scratch&amp;quot; location to store your environments. Currently this service is being tested and&lt;br /&gt;
deployed as needed.&lt;br /&gt;
&lt;br /&gt;
== Spawn failed: The 'ip' trait of a PICCondorSpawner instance expected a unicode string, not the NoneType None ==&lt;br /&gt;
&lt;br /&gt;
Jupyterhub could not get the host name from HTCondor's stdout, because it didn't match the expected regular expression.&lt;br /&gt;
&lt;br /&gt;
This error happens randomly from time to time and does not imply any major problem in any of the services.&lt;br /&gt;
&lt;br /&gt;
Try to request a new notebook server.&lt;br /&gt;
&lt;br /&gt;
== 403 : Forbidden. XSRF cookie does not match POST argument ==&lt;br /&gt;
&lt;br /&gt;
The value of the &amp;quot;_xsrf&amp;quot; cookie sent by the browser does not match the expected value. This can happen for many reasons: temporary high load on the server, race conditions, temporary network instability, many open tabs in the browser, etc.&lt;br /&gt;
&lt;br /&gt;
In general it can be solved by closing all tabs pointing to &amp;quot;jupyter.pic.es&amp;quot;, clearing the cookies and connecting back to &amp;quot;jupyter.pic.es&amp;quot;.&lt;/div&gt;</summary>
		<author><name>Eriksen</name></author>
	</entry>
	<entry>
		<id>https://pwiki.pic.es/index.php?title=JupyterHub&amp;diff=1277</id>
		<title>JupyterHub</title>
		<link rel="alternate" type="text/html" href="https://pwiki.pic.es/index.php?title=JupyterHub&amp;diff=1277"/>
		<updated>2025-12-20T13:05:16Z</updated>

		<summary type="html">&lt;p&gt;Eriksen: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
= Introduction =&lt;br /&gt;
&lt;br /&gt;
PIC offers a service for running Jupyter notebooks on CPU or GPU resources. This service is primarily intended for code development and prototyping rather than data processing. The usage is similar to running notebooks on your personal computer, but it offers the advantage of developing and testing your code on different hardware configurations, as well as facilitating the scalability of the code, since it is tested in the same environment in which it would run at scale.&lt;br /&gt;
&lt;br /&gt;
Since the service is strictly intended for development and small-scale testing tasks, a shutdown policy for the sessions has been put in place:&lt;br /&gt;
&lt;br /&gt;
# The maximum duration for a session is 48h.&lt;br /&gt;
# After an idle period of 2 hours, the session will be closed. &lt;br /&gt;
&lt;br /&gt;
In practice this means that you should size the test data volume you work with during a session so that it can be processed in less than 48 hours.&lt;br /&gt;
&lt;br /&gt;
== How to connect to the service ==&lt;br /&gt;
&lt;br /&gt;
Go to [https://jupyter.pic.es jupyter.pic.es] to reach the login screen.&lt;br /&gt;
&lt;br /&gt;
[[File:login.png|700px|Login screen]]&lt;br /&gt;
&lt;br /&gt;
Sign in with your PIC user credentials. This will take you to the following screen.&lt;br /&gt;
&lt;br /&gt;
[[File:ScreenshotJupyterSpawn.png|700px|current]]&lt;br /&gt;
&lt;br /&gt;
Here you can choose the hardware configuration for your Jupyter session. You also have to choose the experiment (project) you will be working on during the Jupyter session. After choosing a configuration and pressing start, the next screen will show you the progress of the initialisation process. Keep in mind that a job containing your Jupyter session is actually sent to the HTCondor queuing system and waits for available resources before being started. This usually takes less than a minute but can take up to a few minutes depending on our resource usage.&lt;br /&gt;
&lt;br /&gt;
[[File:screen02.png|900px]]&lt;br /&gt;
&lt;br /&gt;
In the next screen you can choose the tool that you want to use for your work: a Python notebook, a Python console or a plain bash terminal.&lt;br /&gt;
For the Python environment (either notebook or console) you have two default options:&lt;br /&gt;
* the ipykernel version of Python 3&lt;br /&gt;
* the XPython version of Python 3.9, which allows you to use the integrated debugging module.&lt;br /&gt;
&lt;br /&gt;
== Virtual desktop ==&lt;br /&gt;
You will also see an icon with a &amp;quot;D&amp;quot; (desktop), which starts a VNC session that allows the use of programs with graphical user interfaces.&lt;br /&gt;
&lt;br /&gt;
You can also find the icon of Visual Studio, an integrated development environment.&lt;br /&gt;
&lt;br /&gt;
[[File:ScreenshotJupyterlab20231103.png|700px]]&lt;br /&gt;
&lt;br /&gt;
Your python environments should appear under the Notebook and Console headers. In a later section we will show you how to create a new environment and how to remove an existing one.&lt;br /&gt;
&lt;br /&gt;
== Terminate your session and logout ==&lt;br /&gt;
&lt;br /&gt;
It is important that you terminate your session before you log out. In order to do so, go to the top page menu &amp;quot;'''File''' -&amp;gt; '''Hub Control Panel'''&amp;quot; and you will see the following screen.&lt;br /&gt;
&lt;br /&gt;
[[File:screen04.png|600px]]&lt;br /&gt;
&lt;br /&gt;
Here, click on the '''Stop My Server''' button. After that you can log out by clicking the '''Logout''' button in the right upper corner.&lt;br /&gt;
&lt;br /&gt;
=  Python virtual environments =&lt;br /&gt;
&lt;br /&gt;
This section covers the use of Python virtual environments with Jupyter.&lt;br /&gt;
&lt;br /&gt;
== Prebuilt environments == &lt;br /&gt;
&lt;br /&gt;
PIC's jupyterhub service comes with a collection of prebuilt environments located at '''/data/jupyter/software/envs'''.&lt;br /&gt;
&lt;br /&gt;
The master environment located at '''/data/jupyter/software/envs/master''' is the one used to start the jupyterlab service and the default for new notebooks.&lt;br /&gt;
&lt;br /&gt;
This is a non-exhaustive list of the packages included:&lt;br /&gt;
  - astropy=6.1.0&lt;br /&gt;
  - bokeh=3.4.1&lt;br /&gt;
  - dash=2.17.0&lt;br /&gt;
  - dask=2024.5.1&lt;br /&gt;
  - findspark=2.0.1&lt;br /&gt;
  - matplotlib=3.8.4&lt;br /&gt;
  - numpy=1.26.4&lt;br /&gt;
  - pandas=2.2.2&lt;br /&gt;
  - pillow=10.3.0&lt;br /&gt;
  - plotly=5.22.0&lt;br /&gt;
  - pyhive=0.7.0&lt;br /&gt;
  - python=3.12&lt;br /&gt;
  - pywavelets=1.4.1&lt;br /&gt;
  - scikit-image=0.22.0&lt;br /&gt;
  - scikit-learn=1.5.0&lt;br /&gt;
  - scipy=1.13.1&lt;br /&gt;
  - seaborn=0.13.2&lt;br /&gt;
  - statsmodels=0.14.2&lt;br /&gt;
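From a notebook you can check which versions the active kernel actually sees; a small helper using the standard library (the package names passed to it would be the ones from the list above):&lt;br /&gt;
&lt;br /&gt;
```python
from importlib import metadata

def installed_versions(packages):
    """Map each package name to its installed version, or 'missing'."""
    versions = {}
    for name in packages:
        try:
            versions[name] = metadata.version(name)
        except metadata.PackageNotFoundError:
            versions[name] = "missing"
    return versions
```
&lt;br /&gt;
For example: installed_versions(['numpy', 'pandas', 'scipy']).&lt;br /&gt;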
&lt;br /&gt;
== Initialize conda (we highly recommend the use of mamba/micromamba) ==&lt;br /&gt;
&lt;br /&gt;
Before using conda/mamba in your bash session, you have to initialize it.&lt;br /&gt;
* For access to an available conda/mamba installation, please get in contact with your project liaison at PIC. He/she will give you the actual value for the '''/path/to/conda/mamba''' placeholder.&lt;br /&gt;
* If you want to use your own conda/mamba/micromamba installation, there are two recommended options&lt;br /&gt;
** '''miniforge''': a distribution with conda and mamba executables in a minimal base environment, instructions [https://github.com/conda-forge/miniforge here]&lt;br /&gt;
** '''micromamba''': a self-contained executable (micromamba) with no base environment, instructions [https://mamba.readthedocs.io/en/latest/installation/micromamba-installation.html here]&lt;br /&gt;
&lt;br /&gt;
Log onto Jupyter and start a session. On the homepage of your Jupyter session, click on the terminal button on the session dashboard on the right to open a bash terminal.&lt;br /&gt;
&lt;br /&gt;
In order to use conda/mamba/micromamba you need to initialize the shell. This initialization can be persistent, which will make some changes to your '''~/.bashrc''' file, or you can do it every time you want to use it.&lt;br /&gt;
&lt;br /&gt;
Run the following command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
eval &amp;quot;$(/data/astro/software/miniforge3/bin/conda shell.bash hook)&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, if you are using miniforge and you want to persist the initialization:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/data/astro/software/miniforge3/bin/mamba init&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This actually changes the .bashrc file in your home directory in order to activate the base environment on login.&lt;br /&gt;
To avoid activating the base environment every time you log on to a node, run:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ conda config --set auto_activate_base false&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Link an existing environment to Jupyter ==&lt;br /&gt;
&lt;br /&gt;
You can find instructions on how to create your own environments, e.g. [[#Create_virtual_environments_with_venv_or_conda | here]].&lt;br /&gt;
&lt;br /&gt;
Log into Jupyter, start a session. From the session dashboard choose the bash terminal.&lt;br /&gt;
&lt;br /&gt;
Inside the terminal, activate your environment.&lt;br /&gt;
&lt;br /&gt;
For '''conda/mamba''' environments:&lt;br /&gt;
* if you created the environment without a prefix:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba activate environment&lt;br /&gt;
(...) [neissner@td110 ~]$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The text in parentheses (...) in front of your bash prompt shows the name of your environment.&lt;br /&gt;
* if you created the environment with a prefix:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba activate /path/to/environment&lt;br /&gt;
(...) [neissner@td110 ~]$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The text in parentheses (...) in front of your bash prompt shows the absolute path of your environment.&lt;br /&gt;
&lt;br /&gt;
For '''venv''' environments:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ source /path/to/environment/bin/activate&lt;br /&gt;
(...) [neissner@td110 ~]$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Link the environment to a Jupyter kernel. For both, '''conda/mamba''' and '''venv''':&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(...) [neissner@td110 ~]$ python -m  ipykernel install --user --name=whatever_kernel_name&lt;br /&gt;
Installed kernelspec whatever_kernel_name in &lt;br /&gt;
                         /nfs/pic.es/user/n/neissner/.local/share/jupyter/kernels/whatever_kernel_name&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you don't have the '''ipykernel''' module installed in your environment you may receive an error message like the one below when trying to run the previous command.&lt;br /&gt;
&amp;lt;pre&amp;gt;No module named ipykernel&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If this is the case, you need to install it by running: '''pip install ipykernel'''&lt;br /&gt;
&lt;br /&gt;
Deactivate your environment. &lt;br /&gt;
&lt;br /&gt;
For conda:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(...) [neissner@td110 ~]$ mamba deactivate&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For venv:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(...) [neissner@td110 ~]$ deactivate&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you can exit the terminal. After refreshing the Jupyter page, your whatever_kernel_name will appear in the dashboard. In this example, '''test''' has been used for whatever_kernel_name.&lt;br /&gt;
&lt;br /&gt;
[[File:screen05.png|700px]]&lt;br /&gt;
&lt;br /&gt;
== Unlink an environment from Jupyter ==&lt;br /&gt;
Log onto Jupyter, start a session and from the session dashboard choose the bash terminal. To remove your environment/kernel from Jupyter run:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ jupyter kernelspec uninstall whatever_kernel_name&lt;br /&gt;
Kernel specs to remove:&lt;br /&gt;
  whatever_kernel_name     /nfs/pic.es/user/n/neissner/.local/share/jupyter/kernels/whatever_kernel_name&lt;br /&gt;
Remove 1 kernel specs [y/N]: y&lt;br /&gt;
[RemoveKernelSpec] Removed /nfs/pic.es/user/n/neissner/.local/share/jupyter/kernels/whatever_kernel_name&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Keep in mind that, although not available in Jupyter anymore, the environment still exists. Whenever you need it, you can link it again.&lt;br /&gt;
&lt;br /&gt;
== Create virtual environments with venv or conda ==&lt;br /&gt;
&lt;br /&gt;
Before creating a new environment, please get in contact with your project liaison at PIC as there may be already a suitable environment for your needs in place.&lt;br /&gt;
&lt;br /&gt;
If none of the existing environments suits your needs, you can create a new environment.&lt;br /&gt;
First, create a directory in a suitable place to store the environment. For single-user environments, place them in your home under ~/env. For environments that will be shared with other project users, contact your project liaison and ask him/her for a path in a shared storage volume that is visible to all of them.&lt;br /&gt;
&lt;br /&gt;
Once you have the location (e.g. /path/to/env), create the environment with the following commands:&lt;br /&gt;
&lt;br /&gt;
For '''venv''' environments '''(recommended)'''&lt;br /&gt;
&lt;br /&gt;
To create your_env at /path/to/env/your_env:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ cd /path/to/env&lt;br /&gt;
[neissner@td110 ~]$ python3 -m venv your_env&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Now you should be able to activate your environment and install additional modules&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ cd /path/to/env&lt;br /&gt;
[neissner@td110 ~]$ source your_env/bin/activate&lt;br /&gt;
(...)[neissner@td110 ~]$ pip install additional_module1 additional_module2 ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For '''conda/mamba''' environments&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba create --prefix /path/to/env/your_env module1 module2 ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The list of modules (module1, module2, ...) is optional. For instance, for a python3 environment with scipy you would specify: ''python=3 scipy''&lt;br /&gt;
&lt;br /&gt;
Now you should be able to activate your environment and install additional modules&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba activate /path/to/env/your_env&lt;br /&gt;
(...)[neissner@td110 ~]$ mamba install additional_module1 additional_module2 ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You can use pip install inside a mamba environment; however, resolving dependencies might require installing additional packages manually.&lt;br /&gt;
&lt;br /&gt;
== Conda / mamba configuration ==&lt;br /&gt;
&lt;br /&gt;
The behaviour of conda/mamba can be configured through the &amp;quot;$HOME/.condarc&amp;quot; file, described [https://docs.conda.io/projects/conda/en/latest/configuration.html here]. Some interesting parameters:&lt;br /&gt;
&lt;br /&gt;
* envs_dirs: The list of directories to search for named environments. E.g.: different locations where you created environments&lt;br /&gt;
&lt;br /&gt;
  envs_dirs:&lt;br /&gt;
    - /data/pic/scratch/torradeflot/envs&lt;br /&gt;
    - /data/astro/scratch/torradeflot/envs&lt;br /&gt;
    - /data/aai/scratch/torradeflot/envs&lt;br /&gt;
&lt;br /&gt;
* pkgs_dirs: Folder where to store conda packages&lt;br /&gt;
&lt;br /&gt;
  pkgs_dirs:&lt;br /&gt;
  - /data/aai/scratch_ssd/torradeflot/pkgs&lt;br /&gt;
  - /data/aai/scratch/torradeflot/pkgs&lt;br /&gt;
  - /data/pic/scratch/torradeflot/pkgs&lt;br /&gt;
&lt;br /&gt;
If `pkgs_dirs` and `envs_dirs` are on the same storage, conda will use hard links, thus saving disk space.&lt;br /&gt;
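The hard-link behaviour can be verified directly with the standard library (a toy sketch: one inode, two directory entries, data stored once):&lt;br /&gt;
&lt;br /&gt;
```python
import os
import tempfile

# Mimic conda linking a file from pkgs_dirs into an environment on the
# same filesystem: os.link creates a second name, not a second copy.
with tempfile.TemporaryDirectory() as d:
    pkg_file = os.path.join(d, "pkgs_copy")
    env_file = os.path.join(d, "envs_copy")
    with open(pkg_file, "w") as f:
        f.write("x" * 1024)
    os.link(pkg_file, env_file)
    same_inode = os.stat(pkg_file).st_ino == os.stat(env_file).st_ino
    link_count = os.stat(pkg_file).st_nlink
```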
&lt;br /&gt;
= Proper usage of X509 based proxies =&lt;br /&gt;
&lt;br /&gt;
We recently found that the usage of proxies within a Jupyter session might cause problems, because the environment changes certain standard locations such as '''/tmp'''.&lt;br /&gt;
&lt;br /&gt;
For correct functioning, please create the proxy in the following way (example for Virgo):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ /bin/voms-proxy-init --voms virgo:/virgo/ligo --out ./x509up_u$(id -u)&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ export X509_USER_PROXY=./x509up_u$(id -u)&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ ls /cvmfs/ligo.osgstorage.org&lt;br /&gt;
ls: cannot access /cvmfs/ligo.osgstorage.org: Permission denied&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here the proxy cannot be located properly because the path is relative. Therefore we have to put the complete path into the variable:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ pwd&lt;br /&gt;
/nfs/pic.es/user/&amp;lt;letter&amp;gt;/&amp;lt;user&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ export X509_USER_PROXY=/nfs/pic.es/user/&amp;lt;letter&amp;gt;/&amp;lt;user&amp;gt;/x509up_u$(id -u)&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ ls /cvmfs/ligo.osgstorage.org&lt;br /&gt;
frames  powerflux  pycbc  test_access&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Software of particular interest =&lt;br /&gt;
== SageMath ==&lt;br /&gt;
&lt;br /&gt;
[https://www.sagemath.org/ SageMath] is particularly interesting for Cosmology because it allows symbolic calculations, e.g. deriving the equations of motions for the scale factor starting from a customised space-time metric. &lt;br /&gt;
&lt;br /&gt;
=== Standard cosmology examples ===&lt;br /&gt;
&lt;br /&gt;
* The Friedmann equations for the FLRW solution of the Einstein equations.&lt;br /&gt;
You can find the corresponding Notebook in any PIC terminal at '''/data/astro/software/notebooks/FLRW_cosmology.ipynb'''&lt;br /&gt;
&lt;br /&gt;
* The notebook you can find at '''/data/astro/software/notebooks/FLRW_cosmology_solutions.ipynb''' uses known analytical solutions of the FLRW cosmology and produces this image for the evolution of the scale factor:&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot_Sage05.png|300px]]&lt;br /&gt;
&lt;br /&gt;
* The notebook you can find at '''/data/astro/software/notebooks/Interior_Schwarzschild.ipynb''' shows the formalism for the interior Schwarzschild metric and displays the solutions for density and pressure of a static celestial object that is sufficiently larger than its Schwarzschild radius. The pressure for an object with constant density is shown in the image:&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot_Sage06.png|300px]]&lt;br /&gt;
&lt;br /&gt;
=== Enabling SageMath environment in Jupyter ===&lt;br /&gt;
&lt;br /&gt;
If you have never initialized mamba, run:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ /data/astro/software/centos7/conda/mambaforge_4.14.0/bin/mamba init&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ conda config --set auto_activate_base false&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After that you can enable SageMath for its use in a Jupyter notebook session: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~] mamba activate /data/astro/software/envs/sage&lt;br /&gt;
(/data/astro/software/envs/sage) [&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ python -m  ipykernel install --user --name=sage&lt;br /&gt;
....&lt;br /&gt;
(/data/astro/software/envs/sage) [&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ mamba deactivate&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This creates a file in your home '''~/.local/share/jupyter/kernels/sage/kernel.json'''&lt;br /&gt;
which has to be modified to look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
 &amp;quot;argv&amp;quot;: [&lt;br /&gt;
  &amp;quot;/data/astro/software/envs/sage/bin/sage&amp;quot;,&lt;br /&gt;
  &amp;quot;--python&amp;quot;,&lt;br /&gt;
  &amp;quot;-m&amp;quot;,&lt;br /&gt;
  &amp;quot;sage.repl.ipython_kernel&amp;quot;,&lt;br /&gt;
  &amp;quot;-f&amp;quot;,&lt;br /&gt;
  &amp;quot;{connection_file}&amp;quot;&lt;br /&gt;
 ],&lt;br /&gt;
 &amp;quot;display_name&amp;quot;: &amp;quot;sage&amp;quot;,&lt;br /&gt;
 &amp;quot;language&amp;quot;: &amp;quot;sage&amp;quot;,&lt;br /&gt;
 &amp;quot;metadata&amp;quot;: {&lt;br /&gt;
  &amp;quot;debugger&amp;quot;: true&lt;br /&gt;
 }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Next time you go to your Jupyter dashboard you will find the sage environment listed there.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Dask =&lt;br /&gt;
Dask supports parallel computations in Python. The PIC Jupyterlab has an extension for launching&lt;br /&gt;
your own Dask clusters. For more information, see [[Dask|Dask documentation]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Using a singularity image as a jupyter kernel =&lt;br /&gt;
&lt;br /&gt;
Sometimes the software stack of a project is provided in the shape of a singularity image. It is then convenient to use this image as a kernel for the notebooks in jupyter.pic.es.&lt;br /&gt;
&lt;br /&gt;
The singularity image to be used as a kernel needs to fulfill some requirements, which depend on the programming language.&lt;br /&gt;
&lt;br /&gt;
== python jupyter kernel in a singularity image ==&lt;br /&gt;
&lt;br /&gt;
The singularity image needs to have '''python''' and the '''ipykernel''' module installed.&lt;br /&gt;
&lt;br /&gt;
* Create the folder that will host the kernel definition&lt;br /&gt;
&lt;br /&gt;
  mkdir -p $HOME/.local/share/jupyter/kernels/singularity&lt;br /&gt;
&lt;br /&gt;
* Create the '''kernel.json''' file inside it with the following content:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 {&lt;br /&gt;
   &amp;quot;argv&amp;quot;: [&lt;br /&gt;
     &amp;quot;singularity&amp;quot;,&lt;br /&gt;
     &amp;quot;exec&amp;quot;,&lt;br /&gt;
     &amp;quot;--cleanenv&amp;quot;,&lt;br /&gt;
     &amp;quot;/path/to/the/singularity/image.sif&amp;quot;,&lt;br /&gt;
     &amp;quot;python&amp;quot;,&lt;br /&gt;
     &amp;quot;-m&amp;quot;,&lt;br /&gt;
     &amp;quot;ipykernel&amp;quot;,&lt;br /&gt;
     &amp;quot;-f&amp;quot;,&lt;br /&gt;
     &amp;quot;{connection_file}&amp;quot;&lt;br /&gt;
   ],&lt;br /&gt;
   &amp;quot;language&amp;quot;: &amp;quot;python&amp;quot;,&lt;br /&gt;
   &amp;quot;display_name&amp;quot;: &amp;quot;singularity-kernel&amp;quot;&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Refresh or start the jupyterlab interface and the singularity kernel should appear in the launcher tab.&lt;br /&gt;
&lt;br /&gt;
= GPUs =&lt;br /&gt;
&lt;br /&gt;
To identify the GPUs that are assigned to your job:&lt;br /&gt;
* Check the environment variable CUDA_VISIBLE_DEVICES: in a terminal, run &amp;quot;echo $CUDA_VISIBLE_DEVICES&amp;quot;. The variable contains a comma-separated list of GPU ids, which also tells you how many GPUs are assigned to your job. If the variable is not set, no GPUs are assigned to the job.&lt;br /&gt;
&lt;br /&gt;
* List the GPUs with nvidia-smi: in a terminal, run &amp;quot;nvidia-smi -L&amp;quot; and look for the GPUs you have been assigned. Remember their indexes (integers from 0 to 7).&lt;br /&gt;
&lt;br /&gt;
* If you have a single assigned GPU, the two steps above can be combined into the following command:&lt;br /&gt;
&lt;br /&gt;
nvidia-smi -L | grep $CUDA_VISIBLE_DEVICES&lt;br /&gt;
&lt;br /&gt;
[[File:check_gpu_id_highlighted.png]]&lt;br /&gt;
&lt;br /&gt;
* In the GPU dashboard, the GPUs are identified by their index.&lt;br /&gt;
&lt;br /&gt;
[[File:check_gpu_resources_highlighted.png]]&lt;br /&gt;
&lt;br /&gt;
= Jupyterlab user guide =&lt;br /&gt;
&lt;br /&gt;
You can find the official documentation of the currently installed version of jupyterlab [https://jupyterlab.readthedocs.io/en/4.2.x/ here]. There you will find instructions on how to&lt;br /&gt;
* [https://jupyterlab.readthedocs.io/en/4.2.x/user/commands.html Access the command palette]&lt;br /&gt;
* [https://jupyterlab.readthedocs.io/en/4.2.x/user/toc.html Build a Table Of Contents]&lt;br /&gt;
* [https://jupyterlab.readthedocs.io/en/4.2.x/user/debugger.html Debug your code]&lt;br /&gt;
&lt;br /&gt;
A set of non-official jupyterlab extensions is installed to provide additional functionality.&lt;br /&gt;
&lt;br /&gt;
== jupytext ==&lt;br /&gt;
Pair your notebooks with text files to enhance version tracking.&lt;br /&gt;
https://jupytext.readthedocs.io&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
&lt;br /&gt;
If you had a notebook (.ipynb file) containing only the cell below, tracked in a git repository&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%matplotlib inline&lt;br /&gt;
import numpy as np&lt;br /&gt;
import matplotlib.pyplot as plt&lt;br /&gt;
plt.imshow(np.random.random([10, 10]))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Different executions of the cell produce different images, and the images are embedded in a pseudo-binary format inside the notebook file. Doing a '''git diff''' of the .ipynb file would therefore produce a huge output (because the image changed) even if the code did not change at all. It is thus convenient to sync the notebook with a text file (e.g. a .py script) using the jupytext extension and track that file with git. Outputs, including images, as well as some additional metadata, are not written to the synced text file, so re-running the same notebook always yields an empty diff.&lt;br /&gt;
&lt;br /&gt;
== git ==&lt;br /&gt;
Sidebar GUI for git repo management&lt;br /&gt;
https://github.com/jupyterlab/jupyterlab-git&lt;br /&gt;
&lt;br /&gt;
== variable inspector ==&lt;br /&gt;
Variable inspection à la Matlab&lt;br /&gt;
https://github.com/jupyterlab-contrib/jupyterlab-variableInspector&lt;br /&gt;
&lt;br /&gt;
== jupyter server proxy ==&lt;br /&gt;
&lt;br /&gt;
This extension is installed in PIC's jupyter environment. It lets you reach network/web services running on the same host as the jupyterlab server from outside, through the &amp;quot;https://jupyter.pic.es/user/{username}/proxy/{port}&amp;quot; URL.&lt;br /&gt;
&lt;br /&gt;
Full documentation here: https://jupyter-server-proxy.readthedocs.io/en/latest/index.html&lt;br /&gt;
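As a minimal sketch, the proxy URL for a service can be built like this; the username and port below are hypothetical placeholders:&lt;br /&gt;

```python
# Sketch: build the jupyter-server-proxy URL for a service listening
# on a local port. "jdoe" and 8050 are hypothetical placeholder values.
username = "jdoe"
port = 8050

proxy_url = f"https://jupyter.pic.es/user/{username}/proxy/{port}/"
print(proxy_url)  # → https://jupyter.pic.es/user/jdoe/proxy/8050/
```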
...&lt;br /&gt;
&lt;br /&gt;
= Code samples =&lt;br /&gt;
&lt;br /&gt;
A repository with sample code can be found here: https://gitlab.pic.es/services/code-samples/&lt;br /&gt;
&lt;br /&gt;
= Running notebooks through HTCondor =&lt;br /&gt;
After developing a notebook, you might want to run it with different configurations. The&lt;br /&gt;
following documentation explains  &lt;br /&gt;
&lt;br /&gt;
[[notebook_htcondor|how to run a notebook through HTCondor]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Troubleshooting =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Logs ==&lt;br /&gt;
&lt;br /&gt;
The log files for the jupyterlab server are stored in &amp;quot;~/.jupyter&amp;quot;. They are created once the jupyterlab server job has finished.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Clean workspaces ==&lt;br /&gt;
&lt;br /&gt;
Jupyterlab stores the workspace status in the &amp;quot;~/.jupyter/lab/workspaces&amp;quot; folder. If you want to start with a fresh (empty) workspace, delete all the content of this folder before launching the notebook.&lt;br /&gt;
&lt;br /&gt;
    cd ~/.jupyter/lab/workspaces&lt;br /&gt;
    rm *&lt;br /&gt;
&lt;br /&gt;
= Known errors =&lt;br /&gt;
&lt;br /&gt;
== Error 500: Internal Server Error ==&lt;br /&gt;
&lt;br /&gt;
This is a generic error meaning that the jupyterlab server failed. Possible reasons:&lt;br /&gt;
&lt;br /&gt;
* Your HOME folder is full. Log in to &amp;quot;ui.pic.es&amp;quot; and run &amp;quot;quota&amp;quot; to check the usage vs quota. If it is full you'll have to free up space.&lt;br /&gt;
&lt;br /&gt;
== 504 Gateway timeout ==&lt;br /&gt;
&lt;br /&gt;
The notebook job is running in HTCondor but the user cannot access the notebook server; ultimately a 504 error is received.&lt;br /&gt;
&lt;br /&gt;
This is probably because there is some error when starting the jupyterlab server. First of all, shut down the notebook server and [[#Logs|check the logs]] to better identify the problem. If you don't see the source of the error, try to [[#Clean_workspaces|clean the workspaces]] and launch a notebook again.&lt;br /&gt;
&lt;br /&gt;
== Install ROOT ==&lt;br /&gt;
&lt;br /&gt;
Environment creation and kernel installation&lt;br /&gt;
&lt;br /&gt;
    $ micromamba create -p /data/pic/scratch/torradeflot/envs/mcdata root ipykernel&lt;br /&gt;
    $ micromamba activate mcdata&lt;br /&gt;
    $ python -m ipykernel install --user --name mcdata&lt;br /&gt;
&lt;br /&gt;
After doing this ROOT can be imported from a python shell, but it does not work from a notebook.&lt;br /&gt;
&lt;br /&gt;
* ROOT uses JIT compilation with Cling: https://root.cern/cling/&lt;br /&gt;
* conda provides its own set of compilation tools: gcc, g++, gfortran&lt;br /&gt;
* Some environment variables are necessary to ensure that these two pieces work well together:&lt;br /&gt;
** PATH: to be able to find the compiler tools&lt;br /&gt;
** CONDA_BUILD_SYSROOT: needed to configure the compiler call&lt;br /&gt;
&lt;br /&gt;
These environment variables are not propagated to the notebook, because they are set by the shell (.bashrc, conda activate, ...), so we need to set them explicitly in the notebook:&lt;br /&gt;
&lt;br /&gt;
     import os&lt;br /&gt;
     import sys&lt;br /&gt;
     from pathlib import Path&lt;br /&gt;
     bin_dir = Path(sys.executable).parent&lt;br /&gt;
     os.environ['PATH'] = str(bin_dir) + ':' + os.environ['PATH']&lt;br /&gt;
     os.environ['CONDA_BUILD_SYSROOT'] = str(bin_dir.parent / 'x86_64-conda-linux-gnu/sysroot')&lt;br /&gt;
&lt;br /&gt;
== Loading libraries is very slow ==&lt;br /&gt;
Conda environments stored on hard-drive based storage can be extremely slow to load. If you encounter this problem, please ask &lt;br /&gt;
your support contact for a &amp;quot;SSD scratch&amp;quot; location to store environments. Currently this service is being tested and&lt;br /&gt;
deployed as needed.&lt;br /&gt;
&lt;br /&gt;
== Spawn failed: The 'ip' trait of a PICCondorSpawner instance expected a unicode string, not the NoneType None ==&lt;br /&gt;
&lt;br /&gt;
Jupyterhub could not get the host name from HTCondor's stdout, because it didn't match the expected regular expression.&lt;br /&gt;
&lt;br /&gt;
This error happens randomly from time to time; it does not indicate any major problem in any of the services.&lt;br /&gt;
&lt;br /&gt;
Try to request a new notebook server.&lt;br /&gt;
&lt;br /&gt;
== 403 : Forbidden. XSRF cookie does not match POST argument ==&lt;br /&gt;
&lt;br /&gt;
The value of the &amp;quot;_xsrf&amp;quot; cookie sent by the browser does not match the expected value. This can happen for many reasons: temporary high load on the server, race conditions, temporary network instability, many open tabs in the browser, etc.&lt;br /&gt;
&lt;br /&gt;
In general it can be solved by closing all tabs pointing to &amp;quot;jupyter.pic.es&amp;quot;, clearing the cookies and connecting to &amp;quot;jupyter.pic.es&amp;quot; again.&lt;/div&gt;</summary>
		<author><name>Eriksen</name></author>
	</entry>
	<entry>
		<id>https://pwiki.pic.es/index.php?title=JupyterHub&amp;diff=1276</id>
		<title>JupyterHub</title>
		<link rel="alternate" type="text/html" href="https://pwiki.pic.es/index.php?title=JupyterHub&amp;diff=1276"/>
		<updated>2025-12-20T13:04:46Z</updated>

		<summary type="html">&lt;p&gt;Eriksen: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
= Introduction =&lt;br /&gt;
&lt;br /&gt;
PIC offers a service for running Jupyter notebooks on CPU or GPU resources. This service is primarily intended for code development and prototyping rather than data processing. The usage is similar to running notebooks on your personal computer, but it offers the advantage of developing and testing your code on different hardware configurations, as well as easier scaling, since the code is tested in the same environment in which it would run at scale. &lt;br /&gt;
&lt;br /&gt;
Since the service is strictly intended for development and small-scale testing tasks, a shutdown policy for the sessions has been put in place:&lt;br /&gt;
&lt;br /&gt;
# The maximum duration for a session is 48h.&lt;br /&gt;
# After an idle period of 2 hours, the session will be closed. &lt;br /&gt;
&lt;br /&gt;
In practice this means that the test data volume you work with during a session should be small enough to be processed in less than 48 hours.&lt;br /&gt;
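As a rough sizing sketch, with assumed illustrative numbers (the throughput and data volume below are hypothetical), you can check whether a dataset fits in one session:&lt;br /&gt;

```python
# Back-of-the-envelope check with assumed numbers: does the test
# dataset fit in a single 48-hour session at a given throughput?
session_hours = 48
throughput_gb_per_hour = 10   # hypothetical measured processing rate
data_volume_gb = 300          # hypothetical test dataset size

hours_needed = data_volume_gb / throughput_gb_per_hour
print(hours_needed, hours_needed <= session_hours)  # → 30.0 True
```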
&lt;br /&gt;
== How to connect to the service ==&lt;br /&gt;
&lt;br /&gt;
Go to [https://jupyter.pic.es jupyter.pic.es] to see your login screen.&lt;br /&gt;
&lt;br /&gt;
[[File:login.png|700px|Login screen]]&lt;br /&gt;
&lt;br /&gt;
Sign in with your PIC user credentials. This will take you to the following screen.&lt;br /&gt;
&lt;br /&gt;
[[File:ScreenshotJupyterSpawn.png|700px|current]]&lt;br /&gt;
&lt;br /&gt;
Here you can choose the hardware configuration for your Jupyter session. You also have to choose the experiment (project) you are working on during the session. After choosing a configuration and pressing start, the next screen will show the progress of the initialisation process. Keep in mind that a job containing your Jupyter session is actually sent to the HTCondor queuing system and waits for available resources before being started. This usually takes less than a minute but can take up to a few minutes depending on our resource usage.&lt;br /&gt;
&lt;br /&gt;
[[File:screen02.png|900px]]&lt;br /&gt;
&lt;br /&gt;
In the next screen you can choose the tool that you want to use for your work: a Python notebook, a Python console or a plain bash terminal.&lt;br /&gt;
For the Python environment (either notebook or console) you have two default options:&lt;br /&gt;
* the ipykernel version of Python 3&lt;br /&gt;
* the XPython version of Python 3.9, which allows you to use the integrated debugging module.&lt;br /&gt;
&lt;br /&gt;
== Virtual desktop ==&lt;br /&gt;
You will also see an icon with a &amp;quot;D&amp;quot; (desktop); it starts a VNC session that allows the use of programs with graphical user interfaces.&lt;br /&gt;
&lt;br /&gt;
More recently, you can also find the icon of Visual Studio, an integrated development environment.&lt;br /&gt;
&lt;br /&gt;
[[File:ScreenshotJupyterlab20231103.png|700px]]&lt;br /&gt;
&lt;br /&gt;
Your Python environments should appear under the Notebook and Console headers. In a later section we will show you how to create a new environment and how to remove an existing one.&lt;br /&gt;
&lt;br /&gt;
== Terminate your session and logout ==&lt;br /&gt;
&lt;br /&gt;
It is important that you terminate your session before you log out. In order to do so, go to the top page menu &amp;quot;'''File''' -&amp;gt; '''Hub Control Panel'''&amp;quot; and you will see the following screen.&lt;br /&gt;
&lt;br /&gt;
[[File:screen04.png|600px]]&lt;br /&gt;
&lt;br /&gt;
Here, click on the '''Stop My Server''' button. After that you can log out by clicking the '''Logout''' button in the right upper corner.&lt;br /&gt;
&lt;br /&gt;
=  Python virtual environments =&lt;br /&gt;
&lt;br /&gt;
This section covers the use of Python virtual environments with Jupyter.&lt;br /&gt;
&lt;br /&gt;
== Prebuilt environments == &lt;br /&gt;
&lt;br /&gt;
PIC's jupyterhub service comes with a collection of prebuilt environments located at '''/data/jupyter/software/envs'''.&lt;br /&gt;
&lt;br /&gt;
The master environment located at '''/data/jupyter/software/envs/master''' is the one used to start the jupyterlab service and the default for new notebooks.&lt;br /&gt;
&lt;br /&gt;
This is a non-exhaustive list of the packages included:&lt;br /&gt;
  - astropy=6.1.0&lt;br /&gt;
  - bokeh=3.4.1&lt;br /&gt;
  - dash=2.17.0&lt;br /&gt;
  - dask=2024.5.1&lt;br /&gt;
  - findspark=2.0.1&lt;br /&gt;
  - matplotlib=3.8.4&lt;br /&gt;
  - numpy=1.26.4&lt;br /&gt;
  - pandas=2.2.2&lt;br /&gt;
  - pillow=10.3.0&lt;br /&gt;
  - plotly=5.22.0&lt;br /&gt;
  - pyhive=0.7.0&lt;br /&gt;
  - python=3.12&lt;br /&gt;
  - pywavelets=1.4.1&lt;br /&gt;
  - scikit-image=0.22.0&lt;br /&gt;
  - scikit-learn=1.5.0&lt;br /&gt;
  - scipy=1.13.1&lt;br /&gt;
  - seaborn=0.13.2&lt;br /&gt;
  - statsmodels=0.14.2&lt;br /&gt;
&lt;br /&gt;
== Initialize conda (we highly recommend the use of mamba/micromamba) ==&lt;br /&gt;
&lt;br /&gt;
Before using conda/mamba in your bash session, you have to initialize it.&lt;br /&gt;
* For access to an available conda/mamba installation, please get in contact with your project liaison at PIC, who will give you the actual value for the '''/path/to/conda/mamba''' placeholder.&lt;br /&gt;
* If you want to use your own conda/mamba/micromamba installation, there are two recommended options&lt;br /&gt;
** '''miniforge''': a distribution with conda and mamba executables in a minimal base environment, instructions [https://github.com/conda-forge/miniforge here]&lt;br /&gt;
** '''micromamba''': a self-contained executable (micromamba) with no base environment, instructions [https://mamba.readthedocs.io/en/latest/installation/micromamba-installation.html here]&lt;br /&gt;
&lt;br /&gt;
Log onto Jupyter and start a session. On the homepage of your Jupyter session, click on the terminal button on the session dashboard on the right to open a bash terminal.&lt;br /&gt;
&lt;br /&gt;
In order to use conda/mamba/micromamba you need to initialize the shell. The initialization can be made persistent, which makes some changes to your '''~/.bashrc''' file, or you can do it every time you want to use it.&lt;br /&gt;
&lt;br /&gt;
Run the following command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
eval &amp;quot;$(/data/astro/software/miniforge3/bin/conda shell.bash hook)&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, if you are using miniforge and you want to persist the initialization:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/data/astro/software/miniforge3/bin/mamba init&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This actually changes the .bashrc file in your home directory so that the base environment is activated on login.&lt;br /&gt;
To avoid activating the base environment every time you log on to a node, run:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ conda config --set auto_activate_base false&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Link an existing environment to Jupyter ==&lt;br /&gt;
&lt;br /&gt;
You can find instructions on how to create your own environments, e.g. [[#Create_virtual_environments_with_venv_or_conda | here]].&lt;br /&gt;
&lt;br /&gt;
Log into Jupyter, start a session. From the session dashboard choose the bash terminal.&lt;br /&gt;
&lt;br /&gt;
Inside the terminal, activate your environment.&lt;br /&gt;
&lt;br /&gt;
For '''conda/mamba''' environments:&lt;br /&gt;
* if you created the environment without a prefix:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba activate environment&lt;br /&gt;
(...) [neissner@td110 ~]$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The name of your environment appears in parentheses (...) in front of your bash prompt. &lt;br /&gt;
* if you created the environment with a prefix:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba activate /path/to/environment&lt;br /&gt;
(...) [neissner@td110 ~]$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The absolute path of your environment appears in parentheses (...) in front of your bash prompt. &lt;br /&gt;
&lt;br /&gt;
For '''venv''' environments:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ source /path/to/environment/bin/activate&lt;br /&gt;
(...) [neissner@td110 ~]$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Link the environment to a Jupyter kernel. For both, '''conda/mamba''' and '''venv''':&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(...) [neissner@td110 ~]$ python -m  ipykernel install --user --name=whatever_kernel_name&lt;br /&gt;
Installed kernelspec whatever_kernel_name in &lt;br /&gt;
                         /nfs/pic.es/user/n/neissner/.local/share/jupyter/kernels/whatever_kernel_name&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you don't have the '''ipykernel''' module installed in your environment you may receive an error message like the one below when trying to run the previous command.&lt;br /&gt;
&amp;lt;pre&amp;gt;No module named ipykernel&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If this is the case, you need to install it by running: '''pip install ipykernel'''&lt;br /&gt;
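From a Python session inside the activated environment, you can also check beforehand whether ipykernel is importable; a small sketch, not part of the official workflow:&lt;br /&gt;

```python
import importlib.util

# Sketch: check from inside the activated environment whether the
# ipykernel module is importable before installing the kernelspec.
have_ipykernel = importlib.util.find_spec("ipykernel") is not None
print(have_ipykernel)  # True if ipykernel is installed, False otherwise
```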
&lt;br /&gt;
Deactivate your environment. &lt;br /&gt;
&lt;br /&gt;
For conda:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(...) [neissner@td110 ~]$ mamba deactivate&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For venv:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(...) [neissner@td110 ~]$ deactivate&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you can exit the terminal. After refreshing the Jupyter page, whatever_kernel_name appears in the dashboard. In this example '''test''' has been used as whatever_kernel_name.&lt;br /&gt;
&lt;br /&gt;
[[File:screen05.png|700px]]&lt;br /&gt;
&lt;br /&gt;
== Unlink an environment from Jupyter ==&lt;br /&gt;
Log onto Jupyter, start a session, and choose the bash terminal from the session dashboard. To remove your environment/kernel from Jupyter, run:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ jupyter kernelspec uninstall whatever_kernel_name&lt;br /&gt;
Kernel specs to remove:&lt;br /&gt;
  whatever_kernel_name     /nfs/pic.es/user/n/neissner/.local/share/jupyter/kernels/whatever_kernel_name&lt;br /&gt;
Remove 1 kernel specs [y/N]: y&lt;br /&gt;
[RemoveKernelSpec] Removed /nfs/pic.es/user/n/neissner/.local/share/jupyter/kernels/whatever_kernel_name&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Keep in mind that, although not available in Jupyter anymore, the environment still exists. Whenever you need it, you can link it again.&lt;br /&gt;
&lt;br /&gt;
== Create virtual environments with venv or conda ==&lt;br /&gt;
&lt;br /&gt;
Before creating a new environment, please get in contact with your project liaison at PIC as there may be already a suitable environment for your needs in place.&lt;br /&gt;
&lt;br /&gt;
If none of the existing environments suits your needs, you can create a new environment.&lt;br /&gt;
First, create a directory in a suitable place to store the environment. For single-user environments, place them in your home under ~/env. For environments that will be shared with other project users, contact your project liaison and ask for a path on a shared storage volume that is visible to all of them.&lt;br /&gt;
&lt;br /&gt;
Once you have the location (i.e. /path/to/env/folder), create the environment with the following commands:&lt;br /&gt;
&lt;br /&gt;
For '''venv''' environments '''(recommended)'''&lt;br /&gt;
&lt;br /&gt;
For example, to install your_env at /path/to/env/your_env:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ cd /path/to/env&lt;br /&gt;
[neissner@td110 ~]$ python3 -m venv your_env&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Now you should be able to activate your environment and install additional modules&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ cd /path/to/env&lt;br /&gt;
[neissner@td110 ~]$ source your_env/bin/activate&lt;br /&gt;
(...)[neissner@td110 ~]$ pip install additional_module1 additional_module2 ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For '''conda/mamba''' environments&lt;br /&gt;
 &amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba create --prefix /path/to/env/your_env module1 module2 ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The list of modules (module1, module2, ...) is optional. For instance, for a python3 environment with scipy you would specify: ''python=3 scipy'' &lt;br /&gt;
&lt;br /&gt;
Now you should be able to activate your environment and install additional modules&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba activate /path/to/env/your_env&lt;br /&gt;
(...)[neissner@td110 ~]$ mamba install additional_module1 additional_module2 ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You can use pip install inside a mamba environment; however, resolving dependencies might require installing additional packages manually.&lt;br /&gt;
&lt;br /&gt;
== Conda / mamba configuration ==&lt;br /&gt;
&lt;br /&gt;
The behaviour of conda/mamba can be configured through the &amp;quot;$HOME/.condarc&amp;quot; file, described [https://docs.conda.io/projects/conda/en/latest/configuration.html here]. Some interesting parameters:&lt;br /&gt;
&lt;br /&gt;
* envs_dirs: The list of directories to search for named environments. E.g.: different locations where you created environments&lt;br /&gt;
&lt;br /&gt;
  envs_dirs:&lt;br /&gt;
    - /data/pic/scratch/torradeflot/envs&lt;br /&gt;
    - /data/astro/scratch/torradeflot/envs&lt;br /&gt;
    - /data/aai/scratch/torradeflot/envs&lt;br /&gt;
&lt;br /&gt;
* pkgs_dirs: Folder where to store conda packages&lt;br /&gt;
&lt;br /&gt;
  pkgs_dirs:&lt;br /&gt;
  - /data/aai/scratch_ssd/torradeflot/pkgs&lt;br /&gt;
  - /data/aai/scratch/torradeflot/pkgs&lt;br /&gt;
  - /data/pic/scratch/torradeflot/pkgs&lt;br /&gt;
&lt;br /&gt;
If `pkgs_dirs` and `envs_dirs` are in the same storage, conda will use hard links, thus saving disk space.&lt;br /&gt;
&lt;br /&gt;
= Proper usage of X509 based proxies =&lt;br /&gt;
&lt;br /&gt;
We found recently that the usage of proxies within a Jupyter session might cause problems, because the environment changes certain standard locations such as '''/tmp'''.&lt;br /&gt;
&lt;br /&gt;
For correct functioning, please create the proxy the following way (example for Virgo):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ /bin/voms-proxy-init --voms virgo:/virgo/ligo --out ./x509up_u$(id -u)&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ export X509_USER_PROXY=./x509up_u$(id -u)&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ ls /cvmfs/ligo.osgstorage.org&lt;br /&gt;
ls: cannot access /cvmfs/ligo.osgstorage.org: Permission denied&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here the proxy cannot be located properly, because a relative path was used. Therefore we have to put the complete path into the variable:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ pwd&lt;br /&gt;
/nfs/pic.es/user/&amp;lt;letter&amp;gt;/&amp;lt;user&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ export X509_USER_PROXY=/nfs/pic.es/user/&amp;lt;letter&amp;gt;/&amp;lt;user&amp;gt;/x509up_u$(id -u)&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ ls /cvmfs/ligo.osgstorage.org&lt;br /&gt;
frames  powerflux  pycbc  test_access&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
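The same fix can be expressed in Python; a minimal sketch, where the file name with uid 12345 is hypothetical:&lt;br /&gt;

```python
import os
from pathlib import Path

# Sketch: resolve the proxy file to an absolute path before exporting
# X509_USER_PROXY, so the variable survives working-directory changes.
# "x509up_u12345" is a hypothetical proxy file name (12345 = your uid).
proxy = Path("./x509up_u12345").resolve()
os.environ["X509_USER_PROXY"] = str(proxy)

print(os.path.isabs(os.environ["X509_USER_PROXY"]))  # → True
```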
&lt;br /&gt;
= Software of particular interest =&lt;br /&gt;
== SageMath ==&lt;br /&gt;
&lt;br /&gt;
[https://www.sagemath.org/ SageMath] is particularly interesting for Cosmology because it allows symbolic calculations, e.g. deriving the equations of motions for the scale factor starting from a customised space-time metric. &lt;br /&gt;
&lt;br /&gt;
=== Standard cosmology examples ===&lt;br /&gt;
&lt;br /&gt;
* The Friedmann equations for the FLRW solution of the Einstein equations.&lt;br /&gt;
You can find the corresponding Notebook in any PIC terminal at '''/data/astro/software/notebooks/FLRW_cosmology.ipynb'''&lt;br /&gt;
&lt;br /&gt;
* The notebook you can find at '''/data/astro/software/notebooks/FLRW_cosmology_solutions.ipynb''' uses known analytical solutions of the FLRW cosmology and produces this image for the evolution of the scale factor:&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot_Sage05.png|300px]]&lt;br /&gt;
&lt;br /&gt;
* The notebook you can find at '''/data/astro/software/notebooks/Interior_Schwarzschild.ipynb''' shows the formalism for the interior Schwarzschild metric and displays the solutions for density and pressure of a static celestial object that is sufficiently larger than its Schwarzschild radius. The pressure for an object with constant density is shown in the image:&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot_Sage06.png|300px]]&lt;br /&gt;
&lt;br /&gt;
=== Enabling SageMath environment in Jupyter ===&lt;br /&gt;
&lt;br /&gt;
If you have never initialized mamba, run:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ /data/astro/software/centos7/conda/mambaforge_4.14.0/bin/mamba init&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ conda config --set auto_activate_base false&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After that you can enable SageMath for its use in a Jupyter notebook session: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ mamba activate /data/astro/software/envs/sage&lt;br /&gt;
(/data/astro/software/envs/sage) [&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ python -m  ipykernel install --user --name=sage&lt;br /&gt;
....&lt;br /&gt;
(/data/astro/software/envs/sage) [&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ mamba deactivate&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This creates a file in your home, '''~/.local/share/jupyter/kernels/sage/kernel.json''',&lt;br /&gt;
which has to be modified to look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
 &amp;quot;argv&amp;quot;: [&lt;br /&gt;
  &amp;quot;/data/astro/software/envs/sage/bin/sage&amp;quot;,&lt;br /&gt;
  &amp;quot;--python&amp;quot;,&lt;br /&gt;
  &amp;quot;-m&amp;quot;,&lt;br /&gt;
  &amp;quot;sage.repl.ipython_kernel&amp;quot;,&lt;br /&gt;
  &amp;quot;-f&amp;quot;,&lt;br /&gt;
  &amp;quot;{connection_file}&amp;quot;&lt;br /&gt;
 ],&lt;br /&gt;
 &amp;quot;display_name&amp;quot;: &amp;quot;sage&amp;quot;,&lt;br /&gt;
 &amp;quot;language&amp;quot;: &amp;quot;sage&amp;quot;,&lt;br /&gt;
 &amp;quot;metadata&amp;quot;: {&lt;br /&gt;
  &amp;quot;debugger&amp;quot;: true&lt;br /&gt;
 }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Next time you go to your Jupyter dashboard you will find the sage environment listed there.&lt;br /&gt;
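The edit to kernel.json can also be scripted; a sketch under the assumption that the paths match the example above:&lt;br /&gt;

```python
import json
from pathlib import Path

# Sketch: write the kernelspec shown above programmatically. The sage
# path follows the example and may differ on your installation.
kernel_dir = Path.home() / ".local/share/jupyter/kernels/sage"
kernel_dir.mkdir(parents=True, exist_ok=True)
spec = {
    "argv": [
        "/data/astro/software/envs/sage/bin/sage",
        "--python", "-m", "sage.repl.ipython_kernel",
        "-f", "{connection_file}",
    ],
    "display_name": "sage",
    "language": "sage",
    "metadata": {"debugger": True},
}
(kernel_dir / "kernel.json").write_text(json.dumps(spec, indent=1))
```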
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Dask =&lt;br /&gt;
Dask supports parallel computations in Python. The PIC Jupyterlab has an extension for launching&lt;br /&gt;
your own Dask clusters. For more information, see [[Dask|Dask documentation]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Using a singularity image as a jupyter kernel =&lt;br /&gt;
&lt;br /&gt;
Sometimes the software stack of a project is provided in the shape of a singularity image. It is then convenient to use this image as a kernel for the notebooks in jupyter.pic.es.&lt;br /&gt;
&lt;br /&gt;
The singularity image to be used as a kernel needs to fulfill some requirements, which differ depending on the programming language.&lt;br /&gt;
&lt;br /&gt;
== python jupyter kernel in a singularity image ==&lt;br /&gt;
&lt;br /&gt;
The singularity image needs to have the '''python''' and the '''ipykernel''' module installed. &lt;br /&gt;
&lt;br /&gt;
* Create the folder that will host the kernel definition&lt;br /&gt;
&lt;br /&gt;
  mkdir -p $HOME/.local/share/jupyter/kernels/singularity&lt;br /&gt;
&lt;br /&gt;
* Create the '''kernel.json''' file inside it with the following content:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 {&lt;br /&gt;
   &amp;quot;argv&amp;quot;: [&lt;br /&gt;
     &amp;quot;singularity&amp;quot;,&lt;br /&gt;
     &amp;quot;exec&amp;quot;,&lt;br /&gt;
     &amp;quot;--cleanenv&amp;quot;,&lt;br /&gt;
     &amp;quot;/path/to/the/singularity/image.sif&amp;quot;,&lt;br /&gt;
     &amp;quot;python&amp;quot;,&lt;br /&gt;
     &amp;quot;-m&amp;quot;,&lt;br /&gt;
     &amp;quot;ipykernel&amp;quot;,&lt;br /&gt;
     &amp;quot;-f&amp;quot;,&lt;br /&gt;
     &amp;quot;{connection_file}&amp;quot;&lt;br /&gt;
   ],&lt;br /&gt;
   &amp;quot;language&amp;quot;: &amp;quot;python&amp;quot;,&lt;br /&gt;
   &amp;quot;display_name&amp;quot;: &amp;quot;singularity-kernel&amp;quot;&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Refresh or start the jupyterlab interface and the singularity kernel should appear in the launcher tab.&lt;br /&gt;
&lt;br /&gt;
= GPUs =&lt;br /&gt;
&lt;br /&gt;
The way to identify the GPUs that are assigned to your job is:&lt;br /&gt;
* Check the environment variable CUDA_VISIBLE_DEVICES. In a terminal, run &amp;quot;echo $CUDA_VISIBLE_DEVICES&amp;quot;. The variable contains a comma-separated list of GPU ids, which already tells you how many GPUs are assigned to your job. If the variable is not set, no GPUs are assigned to the job.&lt;br /&gt;
&lt;br /&gt;
* List the GPUs with nvidia-smi: in a terminal, run &amp;quot;nvidia-smi -L&amp;quot; and look for the GPUs you have been assigned. Remember their indexes (integers from 0 to 7).&lt;br /&gt;
&lt;br /&gt;
* If you have been assigned a single GPU, the two steps above can be combined into one command:&lt;br /&gt;
&lt;br /&gt;
nvidia-smi -L | grep $CUDA_VISIBLE_DEVICES&lt;br /&gt;
&lt;br /&gt;
[[File:check_gpu_id_highlighted.png]]&lt;br /&gt;
&lt;br /&gt;
* In the GPU dashboard, the GPUs are identified by their index.&lt;br /&gt;
&lt;br /&gt;
[[File:check_gpu_resources_highlighted.png]]&lt;br /&gt;
&lt;br /&gt;
= Jupyterlab user guide =&lt;br /&gt;
&lt;br /&gt;
You can find the official documentation of the currently installed version of jupyterlab (3.6) [https://jupyterlab.readthedocs.io/en/4.2.x/ here]. There you will find instructions on how to:&lt;br /&gt;
* [https://jupyterlab.readthedocs.io/en/4.2.x/user/commands.html Access the command palette]&lt;br /&gt;
* [https://jupyterlab.readthedocs.io/en/4.2.x/user/toc.html Build a Table Of Contents]&lt;br /&gt;
* [https://jupyterlab.readthedocs.io/en/4.2.x/user/debugger.html Debug your code]&lt;br /&gt;
&lt;br /&gt;
A set of unofficial jupyterlab extensions is installed to provide additional functionality.&lt;br /&gt;
&lt;br /&gt;
== jupytext ==&lt;br /&gt;
Pair your notebooks with text files to enhance version tracking.&lt;br /&gt;
https://jupytext.readthedocs.io&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
&lt;br /&gt;
Suppose you had a notebook (.ipynb file), tracked in a git repository, containing only the cell below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%matplotlib inline&lt;br /&gt;
import numpy as np&lt;br /&gt;
import matplotlib.pyplot as plt&lt;br /&gt;
plt.imshow(np.random.random([10, 10]))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Different executions of the cell produce different images, and the images are embedded in a pseudo-binary format inside the notebook file. Doing a '''git diff''' of the .ipynb file would therefore produce a huge output (because the image changed) even if the code did not change at all. It is thus convenient to sync the notebook with a text file (e.g. a .py script) using the jupytext extension and track that file with git. Outputs, including images, as well as some additional metadata, are not written to the synced text file, so re-running the same notebook always yields an empty diff.&lt;br /&gt;
&lt;br /&gt;
== git ==&lt;br /&gt;
Sidebar GUI for git repo management&lt;br /&gt;
https://github.com/jupyterlab/jupyterlab-git&lt;br /&gt;
&lt;br /&gt;
== variable inspector ==&lt;br /&gt;
Variable inspection à la Matlab&lt;br /&gt;
https://github.com/jupyterlab-contrib/jupyterlab-variableInspector&lt;br /&gt;
&lt;br /&gt;
== jupyter server proxy ==&lt;br /&gt;
&lt;br /&gt;
This extension is installed in PIC's jupyter environment. It lets you reach network/web services running on the same host as the jupyterlab server from outside, through the &amp;quot;https://jupyter.pic.es/user/{username}/proxy/{port}&amp;quot; URL.&lt;br /&gt;
&lt;br /&gt;
Full documentation here: https://jupyter-server-proxy.readthedocs.io/en/latest/index.html&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
= Code samples =&lt;br /&gt;
&lt;br /&gt;
A repository with sample code can be found here: https://gitlab.pic.es/services/code-samples/&lt;br /&gt;
&lt;br /&gt;
= Running notebooks through HTCondor =&lt;br /&gt;
After developing a notebook, you might want to run it with different configurations. The&lt;br /&gt;
following documentation explains&lt;br /&gt;
&lt;br /&gt;
[[notebook_htcondor|how to run a notebook through HTCondor]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Troubleshooting =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Logs ==&lt;br /&gt;
&lt;br /&gt;
The log files for the jupyterlab server are stored in &amp;quot;~/.jupyter&amp;quot;. The log files are created once the jupyterlab server job has finished.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Clean workspaces ==&lt;br /&gt;
&lt;br /&gt;
Jupyterlab stores the workspace status in the &amp;quot;~/.jupyter/lab/workspaces&amp;quot; folder. If you want to start with a fresh (empty) workspace, delete all the content of this folder before launching the notebook.&lt;br /&gt;
&lt;br /&gt;
    cd ~/.jupyter/lab/workspaces&lt;br /&gt;
    rm *&lt;br /&gt;
&lt;br /&gt;
= Known errors =&lt;br /&gt;
&lt;br /&gt;
== Error 500: Internal Server Error ==&lt;br /&gt;
&lt;br /&gt;
This is a generic error: it means that the jupyterlab server failed. This can happen for different reasons:&lt;br /&gt;
&lt;br /&gt;
* Your HOME folder is full. Log in to &amp;quot;ui.pic.es&amp;quot; and run &amp;quot;quota&amp;quot; to check the usage vs quota. If it is full you'll have to free up space.&lt;br /&gt;
&lt;br /&gt;
== 504 Gateway timeout ==&lt;br /&gt;
&lt;br /&gt;
The notebook job is running in HTCondor but the user can not access the notebook server. Ultimately a 504 error is received.&lt;br /&gt;
&lt;br /&gt;
This is probably because there's some error when starting the jupyterlab server. First of all, shutdown the notebook server and [[#Logs|check the logs]] to better identify the problem. If you don't see the source of the error, try to [[#Clean_workspaces|clean the workspaces]] and launch a notebook again.&lt;br /&gt;
&lt;br /&gt;
== Install ROOT ==&lt;br /&gt;
&lt;br /&gt;
Environment creation and kernel installation&lt;br /&gt;
&lt;br /&gt;
    $ micromamba create -p /data/pic/scratch/torradeflot/envs/mcdata root ipykernel&lt;br /&gt;
    $ micromamba activate mcdata&lt;br /&gt;
    $ python -m ipykernel install --user --name mcdata&lt;br /&gt;
&lt;br /&gt;
After doing this ROOT can be imported from a python shell, but it does not work from a notebook.&lt;br /&gt;
&lt;br /&gt;
* ROOT uses JIT compilation with Cling: https://root.cern/cling/&lt;br /&gt;
* conda provides its own set of compilation tools: gcc, gxx (g++), gfortran&lt;br /&gt;
* Some environment variables are necessary to ensure that these two pieces work well together:&lt;br /&gt;
** PATH: to be able to find the compiler tools&lt;br /&gt;
** CONDA_BUILD_SYSROOT: needed to configure the compiler call&lt;br /&gt;
&lt;br /&gt;
These environment variables are not propagated to the notebook, because they are set by .bashrc, conda activate, etc., so we need to set them explicitly in the notebook.&lt;br /&gt;
&lt;br /&gt;
     import os&lt;br /&gt;
     import sys&lt;br /&gt;
     from pathlib import Path&lt;br /&gt;
     bin_dir = Path(sys.executable).parent&lt;br /&gt;
     os.environ['PATH'] = f&amp;quot;{bin_dir}:{os.environ['PATH']}&amp;quot;&lt;br /&gt;
     os.environ['CONDA_BUILD_SYSROOT'] = str(bin_dir.parent / 'x86_64-conda-linux-gnu/sysroot')&lt;br /&gt;
&lt;br /&gt;
== Loading libraries is very slow ==&lt;br /&gt;
Conda environments on storage backed by hard drives can be extremely slow to load. If you encounter this problem, please ask&lt;br /&gt;
your support contact for a &amp;quot;SSD scratch&amp;quot; location to store environments. Currently this service is being tested and&lt;br /&gt;
deployed as needed.&lt;br /&gt;
&lt;br /&gt;
== Spawn failed: The 'ip' trait of a PICCondorSpawner instance expected a unicode string, not the NoneType None ==&lt;br /&gt;
&lt;br /&gt;
Jupyterhub could not get the host name from HTCondor's stdout, because it didn't match the expected regular expression.&lt;br /&gt;
&lt;br /&gt;
This error happens randomly from time to time; it does not imply any major problem in any of the services.&lt;br /&gt;
&lt;br /&gt;
Try to request a new notebook server.&lt;br /&gt;
&lt;br /&gt;
== 403 : Forbidden. XSRF cookie does not match POST argument ==&lt;br /&gt;
&lt;br /&gt;
The value of the &amp;quot;_xsrf&amp;quot; cookie sent by the browser does not match the expected value. This can happen for many reasons: temporary high load on the server, race conditions, temporary network instability, many open tabs in the browser, etc.&lt;br /&gt;
&lt;br /&gt;
In general it can be solved by closing all tabs pointing to &amp;quot;jupyter.pic.es&amp;quot;, clearing the cookies and connecting to &amp;quot;jupyter.pic.es&amp;quot; again.&lt;/div&gt;</summary>
		<author><name>Eriksen</name></author>
	</entry>
	<entry>
		<id>https://pwiki.pic.es/index.php?title=ICFO&amp;diff=1275</id>
		<title>ICFO</title>
		<link rel="alternate" type="text/html" href="https://pwiki.pic.es/index.php?title=ICFO&amp;diff=1275"/>
		<updated>2025-11-17T10:26:34Z</updated>

		<summary type="html">&lt;p&gt;Eriksen: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Image viewers ==&lt;br /&gt;
&lt;br /&gt;
There is some limited support for running graphical applications at&lt;br /&gt;
PIC. First you need to log into JupyterLab (jupyter.pic.es) and click&lt;br /&gt;
on the virtual desktop icon (orange D). This will open a graphical &lt;br /&gt;
desktop with a single terminal, from where you can launch the image&lt;br /&gt;
viewers.&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot_2025-02-05_at_10.48.54.png|700px|Desktop]]&lt;br /&gt;
&lt;br /&gt;
=== Fiji ===&lt;br /&gt;
&lt;br /&gt;
Log into the desktop (see above) and run the following command:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
/data/icfo/software/Fiji.app/ImageJ-linux64&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Napari ===&lt;br /&gt;
&lt;br /&gt;
Log into the desktop (see above) and run the following command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
/data/icfo/software/bin/run_napari&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Webdav door ==&lt;br /&gt;
For getting access to files in the archive: https://webdav-icfo.pic.es/&lt;/div&gt;</summary>
		<author><name>Eriksen</name></author>
	</entry>
	<entry>
		<id>https://pwiki.pic.es/index.php?title=Main_Page&amp;diff=1257</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://pwiki.pic.es/index.php?title=Main_Page&amp;diff=1257"/>
		<updated>2025-10-02T11:53:50Z</updated>

		<summary type="html">&lt;p&gt;Eriksen: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Getting started ==&lt;br /&gt;
* [[PIC description|Introduction to PIC]]&lt;br /&gt;
* [[PIC account|Get a PIC account]]&lt;br /&gt;
* [[PIC_User_Manual | User manual]]&lt;br /&gt;
* [[faq| Frequently asked questions]]&lt;br /&gt;
&lt;br /&gt;
== Services ==&lt;br /&gt;
=== Distributed computing ===&lt;br /&gt;
* [[HTCondor]]&lt;br /&gt;
* [[Dask]]&lt;br /&gt;
* Spark:&lt;br /&gt;
** [[Spark on Hadoop|on Hadoop]]&lt;br /&gt;
** [[Spark_on_farm|on HTCondor]]&lt;br /&gt;
&lt;br /&gt;
=== Storage ===&lt;br /&gt;
* [[Storage]]&lt;br /&gt;
* [[Hadoop Distributed File System (HDFS)]]&lt;br /&gt;
* [[HDFS Access via VOSpace]]&lt;br /&gt;
* [[Transferring data to/from PIC]]&lt;br /&gt;
&lt;br /&gt;
=== User interfaces ===&lt;br /&gt;
* [[Login machines]]&lt;br /&gt;
* [[JupyterHub]]&lt;br /&gt;
&lt;br /&gt;
=== Other services ===&lt;br /&gt;
* [[Gitlab]]&lt;br /&gt;
* [[CosmoHub]]&lt;br /&gt;
&lt;br /&gt;
== Experiments ==&lt;br /&gt;
&lt;br /&gt;
* [[Euclid]]&lt;br /&gt;
* [[AGN ICE]]&lt;br /&gt;
* [[ICFO]]&lt;/div&gt;</summary>
		<author><name>Eriksen</name></author>
	</entry>
	<entry>
		<id>https://pwiki.pic.es/index.php?title=Main_Page&amp;diff=1256</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://pwiki.pic.es/index.php?title=Main_Page&amp;diff=1256"/>
		<updated>2025-10-02T11:52:30Z</updated>

		<summary type="html">&lt;p&gt;Eriksen: /* Getting started */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Getting started ==&lt;br /&gt;
* [[PIC description|Introduction to PIC]]&lt;br /&gt;
* [[PIC account|Get a PIC account]]&lt;br /&gt;
* [[PIC_User_Manual | User manual]]&lt;br /&gt;
* [[faq| Frequently asked questions]]&lt;br /&gt;
&lt;br /&gt;
== Services ==&lt;br /&gt;
=== Distributed computing ===&lt;br /&gt;
* [[HTCondor]]&lt;br /&gt;
* [[Dask]]&lt;br /&gt;
* Spark:&lt;br /&gt;
** [[Spark on Hadoop|on Hadoop]]&lt;br /&gt;
** [[Spark_on_farm|on HTCondor]]&lt;br /&gt;
&lt;br /&gt;
=== Storage ===&lt;br /&gt;
* [[Storage]]&lt;br /&gt;
* [[Hadoop Distributed File System (HDFS)]]&lt;br /&gt;
* [[HDFS Access via VOSpace]]&lt;br /&gt;
* [[Transferring data to/from PIC]]&lt;br /&gt;
&lt;br /&gt;
=== User interfaces ===&lt;br /&gt;
* [[Login machines]]&lt;br /&gt;
* [[JupyterHub]]&lt;br /&gt;
&lt;br /&gt;
=== Other services ===&lt;br /&gt;
* [[Gitlab]]&lt;br /&gt;
* [[CosmoHub]]&lt;br /&gt;
&lt;br /&gt;
== Experiments ==&lt;br /&gt;
&lt;br /&gt;
* [[Euclid]]&lt;br /&gt;
* [[AGN ICE]]&lt;br /&gt;
* [[ICFO]]&lt;br /&gt;
&lt;br /&gt;
== More technical information ==&lt;br /&gt;
&lt;br /&gt;
* [[Storage Department]]&lt;/div&gt;</summary>
		<author><name>Eriksen</name></author>
	</entry>
	<entry>
		<id>https://pwiki.pic.es/index.php?title=JupyterHub&amp;diff=1255</id>
		<title>JupyterHub</title>
		<link rel="alternate" type="text/html" href="https://pwiki.pic.es/index.php?title=JupyterHub&amp;diff=1255"/>
		<updated>2025-10-02T11:50:37Z</updated>

		<summary type="html">&lt;p&gt;Eriksen: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
= Introduction =&lt;br /&gt;
&lt;br /&gt;
PIC offers a service for running Jupyter notebooks on CPU or GPU resources. This service is primarily thought for code developing or prototyping rather than data processing. The usage is similar to running notebooks on your personal computer but offers the advantage of developing and testing your code on different hardware configurations, as well as facilitating the scalability of the code since it is being tested in the same environment in which it would run on a mass scale. &lt;br /&gt;
&lt;br /&gt;
Since the service is strictly thought for development and small scale testing tasks, a shutdown policy for the sessions has been put in place:&lt;br /&gt;
&lt;br /&gt;
# The maximum duration for a session is 48h.&lt;br /&gt;
# After an idle period of 2 hours, the session will be closed. &lt;br /&gt;
&lt;br /&gt;
In practice that  means that you should estimate the test data volume that you work with during a session to be able to be processed in less than 48 hours.&lt;br /&gt;
&lt;br /&gt;
== How to connect to the service ==&lt;br /&gt;
&lt;br /&gt;
Go to [https://jupyter.pic.es jupyter.pic.es] to see the login screen.&lt;br /&gt;
&lt;br /&gt;
[[File:login.png|700px|Login screen]]&lt;br /&gt;
&lt;br /&gt;
Sign in with your PIC user credentials. This will take you to the following screen.&lt;br /&gt;
&lt;br /&gt;
[[File:ScreenshotJupyterSpawn.png|700px|current]]&lt;br /&gt;
&lt;br /&gt;
Here you can choose the hardware configuration for your Jupyter session. You also have to choose the experiment (project) you will be working on during the Jupyter session. After choosing a configuration and pressing start, the next screen will show the progress of the initialisation process. Keep in mind that a job containing your Jupyter session is actually sent to the HTCondor queuing system, where it waits for available resources before being started. This usually takes less than a minute but can take up to a few minutes depending on resource usage.&lt;br /&gt;
&lt;br /&gt;
[[File:screen02.png|900px]]&lt;br /&gt;
&lt;br /&gt;
In the next screen you can choose the tool that you want to use for your work: a Python notebook, a Python console or a plain bash terminal.&lt;br /&gt;
For the Python environment (either notebook or console) you have two default options:&lt;br /&gt;
* the ipykernel version of Python 3&lt;br /&gt;
* the XPython version of Python 3.9, which allows you to use the integrated debugging module.&lt;br /&gt;
&lt;br /&gt;
== Virtual desktop ==&lt;br /&gt;
Further down you will see an icon with a &amp;quot;D&amp;quot; (desktop), which starts a VNC session that allows the use of programs with graphical user interfaces.&lt;br /&gt;
&lt;br /&gt;
You will also find the icon of Visual Studio, an integrated development environment.&lt;br /&gt;
&lt;br /&gt;
[[File:ScreenshotJupyterlab20231103.png|700px]]&lt;br /&gt;
&lt;br /&gt;
Your Python environments should appear under the Notebook and Console headers. In a later section we show how to create a new environment and how to remove an existing one.&lt;br /&gt;
&lt;br /&gt;
== Terminate your session and logout ==&lt;br /&gt;
&lt;br /&gt;
It is important that you terminate your session before you log out. In order to do so, go to the top page menu &amp;quot;'''File''' -&amp;gt; '''Hub Control Panel'''&amp;quot; and you will see the following screen.&lt;br /&gt;
&lt;br /&gt;
[[File:screen04.png|600px]]&lt;br /&gt;
&lt;br /&gt;
Here, click on the '''Stop My Server''' button. After that you can log out by clicking the '''Logout''' button in the right upper corner.&lt;br /&gt;
&lt;br /&gt;
=  Python virtual environments =&lt;br /&gt;
&lt;br /&gt;
This section covers the use of Python virtual environments with Jupyter.&lt;br /&gt;
&lt;br /&gt;
== Prebuilt environments == &lt;br /&gt;
&lt;br /&gt;
PIC's jupyterhub service comes with a collection of prebuilt environments located at '''/data/jupyter/software/envs'''.&lt;br /&gt;
&lt;br /&gt;
The master environment located at '''/data/jupyter/software/envs/master''' is the one used to start the jupyterlab service and the default for new notebooks.&lt;br /&gt;
&lt;br /&gt;
This is a non-exhaustive list of the packages included:&lt;br /&gt;
  - astropy=6.1.0&lt;br /&gt;
  - bokeh=3.4.1&lt;br /&gt;
  - dash=2.17.0&lt;br /&gt;
  - dask=2024.5.1&lt;br /&gt;
  - findspark=2.0.1&lt;br /&gt;
  - matplotlib=3.8.4&lt;br /&gt;
  - numpy=1.26.4&lt;br /&gt;
  - pandas=2.2.2&lt;br /&gt;
  - pillow=10.3.0&lt;br /&gt;
  - plotly=5.22.0&lt;br /&gt;
  - pyhive=0.7.0&lt;br /&gt;
  - python=3.12&lt;br /&gt;
  - pywavelets=1.4.1&lt;br /&gt;
  - scikit-image=0.22.0&lt;br /&gt;
  - scikit-learn=1.5.0&lt;br /&gt;
  - scipy=1.13.1&lt;br /&gt;
  - seaborn=0.13.2&lt;br /&gt;
  - statsmodels=0.14.2&lt;br /&gt;
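If in doubt, you can check from a running kernel which of these packages are actually available and in which version; a minimal sketch using only the standard library (the package names passed in are just examples):&lt;br /&gt;

```python
from importlib import metadata

def installed_versions(names):
    """Map each distribution name to its installed version, or None if absent."""
    versions = {}
    for name in names:
        try:
            versions[name] = metadata.version(name)
        except metadata.PackageNotFoundError:
            # Package is not installed in this environment
            versions[name] = None
    return versions

print(installed_versions(["numpy", "no-such-package"]))
```

Packages missing from the environment are reported as None instead of raising an error, so the same call works in any kernel.&lt;br /&gt;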
&lt;br /&gt;
== Initialize conda (we highly recommend the use of mamba/micromamba) ==&lt;br /&gt;
&lt;br /&gt;
Before using conda/mamba in your bash session, you have to initialize it.&lt;br /&gt;
* For access to an available conda/mamba installation, please get in contact with your project liaison at PIC, who will give you the actual value for the '''/path/to/conda/mamba''' placeholder.&lt;br /&gt;
* If you want to use your own conda/mamba/micromamba installation, there are two recommended options&lt;br /&gt;
** '''miniforge''': a distribution with conda and mamba executables in a minimal base environment, instructions [https://github.com/conda-forge/miniforge here]&lt;br /&gt;
** '''micromamba''': a self-contained executable (micromamba) with no base environment, instructions [https://mamba.readthedocs.io/en/latest/installation/micromamba-installation.html here]&lt;br /&gt;
&lt;br /&gt;
Log onto Jupyter and start a session. On the homepage of your Jupyter session, click on the terminal button on the session dashboard on the right to open a bash terminal.&lt;br /&gt;
&lt;br /&gt;
In order to use conda/mamba/micromamba you need to initialize the shell. This initialization can be persistent, which will make some changes to your '''~/.bashrc''' file, or you can do it every time you want to use it.&lt;br /&gt;
&lt;br /&gt;
Run the following command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
eval &amp;quot;$(/data/astro/software/miniforge3/bin/conda shell.bash hook)&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, if you are using miniforge and you want to persist the initialization:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/data/astro/software/miniforge3/bin/mamba init&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This actually changes the .bashrc file in your home directory so that the base environment is activated on login.&lt;br /&gt;
To avoid the base environment being activated every time you log on to a node, run:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ conda config --set auto_activate_base false&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Link an existing environment to Jupyter ==&lt;br /&gt;
&lt;br /&gt;
You can find instructions on how to create your own environments, e.g. [[#Create_virtual_environments_with_venv_or_conda | here]].&lt;br /&gt;
&lt;br /&gt;
Log into Jupyter and start a session. From the session dashboard, choose the bash terminal.&lt;br /&gt;
&lt;br /&gt;
Inside the terminal, activate your environment.&lt;br /&gt;
&lt;br /&gt;
For '''conda/mamba''' environments:&lt;br /&gt;
* if you created the environment without a prefix:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba activate environment&lt;br /&gt;
(...) [neissner@td110 ~]$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The parentheses (...) in front of your bash prompt show the name of your environment.&lt;br /&gt;
* if you created the environment with a prefix:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba activate /path/to/environment&lt;br /&gt;
(...) [neissner@td110 ~]$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The parentheses (...) in front of your bash prompt show the absolute path of your environment.&lt;br /&gt;
&lt;br /&gt;
For '''venv''' environments:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ source /path/to/environment/bin/activate&lt;br /&gt;
(...) [neissner@td110 ~]$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Link the environment to a Jupyter kernel. For both, '''conda/mamba''' and '''venv''':&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(...) [neissner@td110 ~]$ python -m  ipykernel install --user --name=whatever_kernel_name&lt;br /&gt;
Installed kernelspec whatever_kernel_name in &lt;br /&gt;
                         /nfs/pic.es/user/n/neissner/.local/share/jupyter/kernels/whatever_kernel_name&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you don't have the '''ipykernel''' module installed in your environment you may receive an error message like the one below when trying to run the previous command.&lt;br /&gt;
&amp;lt;pre&amp;gt;No module named ipykernel&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If this is the case, you need to install it by running: '''pip install ipykernel'''&lt;br /&gt;
&lt;br /&gt;
Deactivate your environment. &lt;br /&gt;
&lt;br /&gt;
For conda:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(...) [neissner@td110 ~]$ mamba deactivate&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For venv:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(...) [neissner@td110 ~]$ deactivate&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you can exit the terminal. After refreshing the Jupyter page, your whatever_kernel_name kernel appears in the dashboard. In this example '''test''' has been used for whatever_kernel_name.&lt;br /&gt;
&lt;br /&gt;
[[File:screen05.png|700px]]&lt;br /&gt;
&lt;br /&gt;
== Unlink an environment from Jupyter ==&lt;br /&gt;
Log onto Jupyter, start a session and from the session dashboard choose the bash terminal. To remove your environment/kernel from Jupyter run:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ jupyter kernelspec uninstall whatever_kernel_name&lt;br /&gt;
Kernel specs to remove:&lt;br /&gt;
  whatever_kernel_name     /nfs/pic.es/user/n/neissner/.local/share/jupyter/kernels/whatever_kernel_name&lt;br /&gt;
Remove 1 kernel specs [y/N]: y&lt;br /&gt;
[RemoveKernelSpec] Removed /nfs/pic.es/user/n/neissner/.local/share/jupyter/kernels/whatever_kernel_name&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Keep in mind that, although not available in Jupyter anymore, the environment still exists. Whenever you need it, you can link it again.&lt;br /&gt;
&lt;br /&gt;
== Create virtual environments with venv or conda ==&lt;br /&gt;
&lt;br /&gt;
Before creating a new environment, please get in contact with your project liaison at PIC, as there may already be a suitable environment for your needs in place.&lt;br /&gt;
&lt;br /&gt;
If none of the existing environments suits your needs, you can create a new environment.&lt;br /&gt;
First, create a directory in a suitable place to store the environment. For single-user environments, place them in your home directory under ~/env. For environments that will be shared with other project users, contact your project liaison and ask for a path in a shared storage volume that is visible to all of them.&lt;br /&gt;
&lt;br /&gt;
Once you have the location (i.e. /path/to/env/folder), create the environment with the following commands:&lt;br /&gt;
&lt;br /&gt;
For '''venv''' environments '''(recommended)'''&lt;br /&gt;
&lt;br /&gt;
To create your_env at /path/to/env/your_env:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ cd /path/to/env&lt;br /&gt;
[neissner@td110 ~]$ python3 -m venv your_env&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Now you should be able to activate your environment and install additional modules&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ cd /path/to/env&lt;br /&gt;
[neissner@td110 ~]$ source your_env/bin/activate&lt;br /&gt;
(...)[neissner@td110 ~]$ pip install additional_module1 additional_module2 ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For '''conda/mamba''' environments&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba create --prefix /path/to/env/your_env module1 module2 ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The list of modules (module1, module2, ...) is optional. For instance, for a python3 environment with scipy you would specify: ''python=3 scipy'' &lt;br /&gt;
&lt;br /&gt;
Now you should be able to activate your environment and install additional modules&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba activate /path/to/env/your_env&lt;br /&gt;
(...)[neissner@td110 ~]$ mamba install additional_module1 additional_module2 ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You can use pip install inside a mamba environment; however, resolving dependencies might require installing additional packages manually.&lt;br /&gt;
&lt;br /&gt;
== Conda / mamba configuration ==&lt;br /&gt;
&lt;br /&gt;
The behaviour of conda/mamba can be configured through the &amp;quot;$HOME/.condarc&amp;quot; file, described [https://docs.conda.io/projects/conda/en/latest/configuration.html here]. Some interesting parameters:&lt;br /&gt;
&lt;br /&gt;
* envs_dirs: The list of directories to search for named environments. E.g.: different locations where you created environments&lt;br /&gt;
&lt;br /&gt;
  envs_dirs:&lt;br /&gt;
    - /data/pic/scratch/torradeflot/envs&lt;br /&gt;
    - /data/astro/scratch/torradeflot/envs&lt;br /&gt;
    - /data/aai/scratch/torradeflot/envs&lt;br /&gt;
&lt;br /&gt;
* pkgs_dirs: the folders where conda packages are stored&lt;br /&gt;
&lt;br /&gt;
  pkgs_dirs:&lt;br /&gt;
  - /data/aai/scratch_ssd/torradeflot/pkgs&lt;br /&gt;
  - /data/aai/scratch/torradeflot/pkgs&lt;br /&gt;
  - /data/pic/scratch/torradeflot/pkgs&lt;br /&gt;
&lt;br /&gt;
If `pkgs_dirs` and `envs_dirs` are on the same storage, conda will use hard links, thus saving disk space.&lt;br /&gt;
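The mechanism behind this saving is the hard link: the same file on disk is referenced both from the package cache and from the environment, so no data is duplicated. A self-contained sketch of the principle (using a temporary directory, not real conda paths):&lt;br /&gt;

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    pkg_file = os.path.join(tmp, "pkgs_copy")   # stands in for a file in pkgs_dirs
    env_file = os.path.join(tmp, "envs_copy")   # stands in for the same file in an env
    with open(pkg_file, "w") as f:
        f.write("x" * 1024)
    os.link(pkg_file, env_file)                 # hard link: no extra data on disk
    # Both names point at one and the same inode
    same_inode = os.stat(pkg_file).st_ino == os.stat(env_file).st_ino
    link_count = os.stat(pkg_file).st_nlink
    print(same_inode, link_count)
```

Hard links only work within one filesystem, which is why pkgs_dirs and envs_dirs must share the same storage for the saving to apply.&lt;br /&gt;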
&lt;br /&gt;
= Proper usage of X509 based proxies =&lt;br /&gt;
&lt;br /&gt;
We found recently that the usage of proxies within a Jupyter session might cause problems, because the environment changes certain standard locations such as '''/tmp'''.&lt;br /&gt;
&lt;br /&gt;
For correct functioning, please create the proxy in the following way (example for Virgo):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ /bin/voms-proxy-init --voms virgo:/virgo/ligo --out ./x509up_u$(id -u)&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ export X509_USER_PROXY=./x509up_u$(id -u)&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ ls /cvmfs/ligo.osgstorage.org&lt;br /&gt;
ls: cannot access /cvmfs/ligo.osgstorage.org: Permission denied&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here the proxy cannot be located properly. Therefore we have to put the complete path into the variable:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ pwd&lt;br /&gt;
/nfs/pic.es/user/&amp;lt;letter&amp;gt;/&amp;lt;user&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ export X509_USER_PROXY=/nfs/pic.es/user/&amp;lt;letter&amp;gt;/&amp;lt;user&amp;gt;/x509up_u$(id -u)&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ ls /cvmfs/ligo.osgstorage.org&lt;br /&gt;
frames  powerflux  pycbc  test_access&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
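The behaviour above comes down to how relative paths are resolved: a relative X509_USER_PROXY is interpreted against the current working directory, which the Jupyter environment may change. A minimal sketch of the principle (temporary files stand in for the real proxy):&lt;br /&gt;

```python
import os
import tempfile

start_dir = tempfile.mkdtemp()   # stands in for your home directory
other_dir = tempfile.mkdtemp()   # stands in for wherever the session moves you

os.chdir(start_dir)
with open("x509up_u1234", "w") as f:   # stands in for the real proxy file
    f.write("proxy")

relative = "./x509up_u1234"
absolute = os.path.abspath(relative)   # the full path, as recommended above

os.chdir(other_dir)                        # the session changes directory...
found_relative = os.path.exists(relative)  # ...so the relative path breaks
found_absolute = os.path.exists(absolute)  # the absolute path still resolves
print(found_relative, found_absolute)
```

This is why exporting X509_USER_PROXY with the complete /nfs/pic.es/user/... path works while the ./ form does not.&lt;br /&gt;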
&lt;br /&gt;
= Software of particular interest =&lt;br /&gt;
== SageMath ==&lt;br /&gt;
&lt;br /&gt;
[https://www.sagemath.org/ SageMath] is particularly interesting for Cosmology because it allows symbolic calculations, e.g. deriving the equations of motion for the scale factor starting from a customised space-time metric.&lt;br /&gt;
&lt;br /&gt;
=== Standard cosmology examples ===&lt;br /&gt;
&lt;br /&gt;
* The Friedmann equations for the FLRW solution of the Einstein equations.&lt;br /&gt;
You can find the corresponding Notebook in any PIC terminal at '''/data/astro/software/notebooks/FLRW_cosmology.ipynb'''&lt;br /&gt;
&lt;br /&gt;
* The notebook you can find at '''/data/astro/software/notebooks/FLRW_cosmology_solutions.ipynb''' uses known analytical solutions of the FLRW cosmology and produces this image for the evolution of the scale factor:&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot_Sage05.png|300px]]&lt;br /&gt;
&lt;br /&gt;
* The notebook you can find at '''/data/astro/software/notebooks/Interior_Schwarzschild.ipynb''' shows the formalism for the interior Schwarzschild metric and displays the solutions for density and pressure of a static celestial object that is sufficiently larger than its Schwarzschild radius. The pressure for an object with constant density is shown in the image:&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot_Sage06.png|300px]]&lt;br /&gt;
&lt;br /&gt;
=== Enabling SageMath environment in Jupyter ===&lt;br /&gt;
&lt;br /&gt;
If you have never initialized mamba, run:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ /data/astro/software/centos7/conda/mambaforge_4.14.0/bin/mamba init&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ conda config --set auto_activate_base false&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After that you can enable SageMath for its use in a Jupyter notebook session: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ mamba activate /data/astro/software/envs/sage&lt;br /&gt;
(/data/astro/software/envs/sage) [&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ python -m  ipykernel install --user --name=sage&lt;br /&gt;
....&lt;br /&gt;
(/data/astro/software/envs/sage) [&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ mamba deactivate&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This creates a file in your home directory, '''~/.local/share/jupyter/kernels/sage/kernel.json''',&lt;br /&gt;
which has to be modified to look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
 &amp;quot;argv&amp;quot;: [&lt;br /&gt;
  &amp;quot;/data/astro/software/envs/sage/bin/sage&amp;quot;,&lt;br /&gt;
  &amp;quot;--python&amp;quot;,&lt;br /&gt;
  &amp;quot;-m&amp;quot;,&lt;br /&gt;
  &amp;quot;sage.repl.ipython_kernel&amp;quot;,&lt;br /&gt;
  &amp;quot;-f&amp;quot;,&lt;br /&gt;
  &amp;quot;{connection_file}&amp;quot;&lt;br /&gt;
 ],&lt;br /&gt;
 &amp;quot;display_name&amp;quot;: &amp;quot;sage&amp;quot;,&lt;br /&gt;
 &amp;quot;language&amp;quot;: &amp;quot;sage&amp;quot;,&lt;br /&gt;
 &amp;quot;metadata&amp;quot;: {&lt;br /&gt;
  &amp;quot;debugger&amp;quot;: true&lt;br /&gt;
 }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Next time you go to your Jupyter dashboard you will find the sage environment listed there.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Dask =&lt;br /&gt;
Dask supports parallel computations in Python. The PIC Jupyterlab has an extension for launching&lt;br /&gt;
your own Dask clusters. For more information, see [[Dask|Dask documentation]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Using a singularity image as a jupyter kernel =&lt;br /&gt;
&lt;br /&gt;
Sometimes the software stack of a project is provided in the form of a singularity image; in that case it is convenient to use this image as a kernel for the notebooks in jupyter.pic.es.&lt;br /&gt;
&lt;br /&gt;
The singularity image to be used as a kernel needs to fulfill some requirements, which differ depending on the programming language.&lt;br /&gt;
&lt;br /&gt;
== python jupyter kernel in a singularity image ==&lt;br /&gt;
&lt;br /&gt;
The singularity image needs to have '''python''' and the '''ipykernel''' module installed.&lt;br /&gt;
&lt;br /&gt;
* Create the folder that will host the kernel definition&lt;br /&gt;
&lt;br /&gt;
  mkdir -p $HOME/.local/share/jupyter/kernels/singularity&lt;br /&gt;
&lt;br /&gt;
* Create the '''kernel.json''' file inside it with the following content:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 {&lt;br /&gt;
   &amp;quot;argv&amp;quot;: [&lt;br /&gt;
     &amp;quot;singularity&amp;quot;,&lt;br /&gt;
     &amp;quot;exec&amp;quot;,&lt;br /&gt;
     &amp;quot;--cleanenv&amp;quot;,&lt;br /&gt;
     &amp;quot;/path/to/the/singularity/image.sif&amp;quot;,&lt;br /&gt;
     &amp;quot;python&amp;quot;,&lt;br /&gt;
     &amp;quot;-m&amp;quot;,&lt;br /&gt;
     &amp;quot;ipykernel&amp;quot;,&lt;br /&gt;
     &amp;quot;-f&amp;quot;,&lt;br /&gt;
     &amp;quot;{connection_file}&amp;quot;&lt;br /&gt;
   ],&lt;br /&gt;
   &amp;quot;language&amp;quot;: &amp;quot;python&amp;quot;,&lt;br /&gt;
   &amp;quot;display_name&amp;quot;: &amp;quot;singularity-kernel&amp;quot;&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Refresh or restart the jupyterlab interface and the singularity kernel should appear in the launcher tab.&lt;br /&gt;
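If you prefer, the kernel.json above can be generated programmatically; a sketch where the image path is the placeholder from the example and the kernel directory is a parameter (for real use it would be ~/.local/share/jupyter/kernels/singularity):&lt;br /&gt;

```python
import json
import os
import tempfile

def write_singularity_kernel(kernel_dir, image_path, name="singularity-kernel"):
    """Write a kernel.json that launches ipykernel inside a singularity image."""
    spec = {
        "argv": [
            "singularity", "exec", "--cleanenv", image_path,
            "python", "-m", "ipykernel", "-f", "{connection_file}",
        ],
        "language": "python",
        "display_name": name,
    }
    os.makedirs(kernel_dir, exist_ok=True)
    path = os.path.join(kernel_dir, "kernel.json")
    with open(path, "w") as f:
        json.dump(spec, f, indent=1)
    return path

# Demo in a temporary directory; point kernel_dir at the jupyter kernels
# folder for real use.
demo = write_singularity_kernel(tempfile.mkdtemp(),
                                "/path/to/the/singularity/image.sif")
print(open(demo).read())
```

The "{connection_file}" placeholder is substituted by jupyter at kernel start, so it must be written literally into the file.&lt;br /&gt;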
&lt;br /&gt;
= GPUs =&lt;br /&gt;
&lt;br /&gt;
The way to identify the GPUs that are assigned to your job is:&lt;br /&gt;
* check the environment variable CUDA_VISIBLE_DEVICES: in a terminal run &amp;quot;echo $CUDA_VISIBLE_DEVICES&amp;quot;. The variable contains a comma-separated list of GPU ids, which tells you how many GPUs are assigned to your job. If the variable does not exist, no GPUs are assigned to the job.&lt;br /&gt;
&lt;br /&gt;
* list the GPUs with nvidia-smi: in a terminal run &amp;quot;nvidia-smi -L&amp;quot; and look for the GPUs you have been assigned. Remember their indexes (integers from 0 to 7).&lt;br /&gt;
&lt;br /&gt;
* The two steps above can be combined, if only a single GPU is assigned, into the following command:&lt;br /&gt;
&lt;br /&gt;
    nvidia-smi -L | grep $CUDA_VISIBLE_DEVICES&lt;br /&gt;
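From Python, e.g. inside a notebook, the same information can be read directly from the environment; a small sketch (the value is set here only for illustration, in a real job HTCondor sets it for you):&lt;br /&gt;

```python
import os

def assigned_gpus():
    """Return the list of GPU ids assigned to this job, empty if none."""
    value = os.environ.get("CUDA_VISIBLE_DEVICES", "")
    return [gpu.strip() for gpu in value.split(",") if gpu.strip()]

# Illustration only; in a real session this variable is already set (or absent)
os.environ["CUDA_VISIBLE_DEVICES"] = "3,5"
print(assigned_gpus())   # ['3', '5']
```

An empty list means no GPUs are assigned to the job, matching the terminal check above.&lt;br /&gt;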
&lt;br /&gt;
[[File:check_gpu_id_highlighted.png]]&lt;br /&gt;
&lt;br /&gt;
* in the GPU dashboard the GPUs are identified by their index&lt;br /&gt;
&lt;br /&gt;
[[File:check_gpu_resources_highlighted.png]]&lt;br /&gt;
&lt;br /&gt;
= Jupyterlab user guide =&lt;br /&gt;
&lt;br /&gt;
You can find the official documentation of the currently installed version of jupyterlab (4.2) [https://jupyterlab.readthedocs.io/en/4.2.x/ here]. There you will find instructions on how to:&lt;br /&gt;
* [https://jupyterlab.readthedocs.io/en/4.2.x/user/commands.html Access the command palette]&lt;br /&gt;
* [https://jupyterlab.readthedocs.io/en/4.2.x/user/toc.html Build a Table Of Contents]&lt;br /&gt;
* [https://jupyterlab.readthedocs.io/en/4.2.x/user/debugger.html Debug your code]&lt;br /&gt;
&lt;br /&gt;
A set of non-official jupyterlab extensions is installed to provide additional functionality:&lt;br /&gt;
&lt;br /&gt;
== jupytext ==&lt;br /&gt;
Pair your notebooks with text files to enhance version tracking.&lt;br /&gt;
https://jupytext.readthedocs.io&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
&lt;br /&gt;
If you had a notebook (.ipynb file) containing only the cell below, tracked in a git repository&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%matplotlib inline&lt;br /&gt;
import numpy as np&lt;br /&gt;
import matplotlib.pyplot as plt&lt;br /&gt;
plt.imshow(np.random.random([10, 10]))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Different executions of the cell would produce different images, and these images are embedded (base64-encoded) inside the notebook file. Doing a '''git diff''' of the .ipynb file would therefore produce a huge output (because the image changed), even if there wasn't any change in the code. It is thus convenient to sync the notebook with a text file (e.g. a .py script) using the jupytext extension and track that file with git. Outputs, including images, as well as some additional metadata, are not added to the synced text file, so for different executions of the same notebook the diff of the text file will always be empty.&lt;br /&gt;
&lt;br /&gt;
== git ==&lt;br /&gt;
Sidebar GUI for git repository management.&lt;br /&gt;
https://github.com/jupyterlab/jupyterlab-git&lt;br /&gt;
&lt;br /&gt;
== variable inspector ==&lt;br /&gt;
Variable inspection à la Matlab&lt;br /&gt;
https://github.com/jupyterlab-contrib/jupyterlab-variableInspector&lt;br /&gt;
&lt;br /&gt;
== jupyter server proxy ==&lt;br /&gt;
&lt;br /&gt;
This extension is installed in PIC's jupyter environment. It lets you access network/web services running on the same host as the jupyterlab server from outside, through the &amp;quot;https://jupyter.pic.es/user/{username}/proxy/{port}&amp;quot; URL.&lt;br /&gt;
&lt;br /&gt;
Full documentation here: https://jupyter-server-proxy.readthedocs.io/en/latest/index.html&lt;br /&gt;
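As a quick check, any web service bound to a port on the session node becomes reachable through that URL; a minimal sketch using python's built-in web server (port 8899 is an arbitrary choice):&lt;br /&gt;

```shell
# Serve the current directory on a local port; from a Jupyter session
# it would then be reachable at
# https://jupyter.pic.es/user/{username}/proxy/8899/
python3 -m http.server 8899 --bind 127.0.0.1 &
SERVER_PID=$!
sleep 1

# Verify locally that the service answers, then stop it.
curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:8899/
kill "$SERVER_PID"
```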
...&lt;br /&gt;
&lt;br /&gt;
= Code samples =&lt;br /&gt;
&lt;br /&gt;
A repository with sample code can be found here: https://gitlab.pic.es/services/code-samples/&lt;br /&gt;
&lt;br /&gt;
= Troubleshooting =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Logs ==&lt;br /&gt;
&lt;br /&gt;
The log files for the jupyterlab server are stored in &amp;quot;~/.jupyter&amp;quot;. The log files are created once the jupyterlab server job has finished.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Clean workspaces ==&lt;br /&gt;
&lt;br /&gt;
Jupyterlab stores the workspace status in the &amp;quot;~/.jupyter/lab/workspaces&amp;quot; folder. If you want to start with a fresh (empty) workspace, delete all the content of this folder before launching the notebook.&lt;br /&gt;
&lt;br /&gt;
    cd ~/.jupyter/lab/workspaces&lt;br /&gt;
    rm *&lt;br /&gt;
&lt;br /&gt;
= Known errors =&lt;br /&gt;
&lt;br /&gt;
== Error 500: Internal Server Error ==&lt;br /&gt;
&lt;br /&gt;
This is a generic error meaning that the jupyterlab server failed. It can happen for different reasons:&lt;br /&gt;
&lt;br /&gt;
* Your HOME folder is full. Log in to &amp;quot;ui.pic.es&amp;quot; and run &amp;quot;quota&amp;quot; to check the usage vs quota. If it is full you'll have to free up space.&lt;br /&gt;
&lt;br /&gt;
== 504 Gateway timeout ==&lt;br /&gt;
&lt;br /&gt;
The notebook job is running in HTCondor but the user cannot access the notebook server. Ultimately, a 504 error is received.&lt;br /&gt;
&lt;br /&gt;
This is probably because there was some error when starting the jupyterlab server. First of all, shut down the notebook server and [[#Logs|check the logs]] to better identify the problem. If you don't see the source of the error, try to [[#Clean_workspaces|clean the workspaces]] and launch a notebook again.&lt;br /&gt;
&lt;br /&gt;
== Install ROOT ==&lt;br /&gt;
&lt;br /&gt;
Environment creation and kernel installation&lt;br /&gt;
&lt;br /&gt;
    $ micromamba create -p /data/pic/scratch/torradeflot/envs/mcdata root ipykernel&lt;br /&gt;
    $ micromamba activate mcdata&lt;br /&gt;
    $ python -m ipykernel install --user --name mcdata&lt;br /&gt;
&lt;br /&gt;
After doing this, ROOT can be imported from a python shell, but it does not work from a notebook.&lt;br /&gt;
&lt;br /&gt;
* ROOT uses JIT compilation with Cling: https://root.cern/cling/&lt;br /&gt;
* conda provides its own set of compilation tools: gcc, gxx, gfortran&lt;br /&gt;
* Some environment variables are necessary to ensure that these two pieces work well together:&lt;br /&gt;
** PATH: to be able to find the compiler tools&lt;br /&gt;
** CONDA_BUILD_SYSROOT: needed to configure the compiler call&lt;br /&gt;
&lt;br /&gt;
These environment variables are not propagated to the notebook, because they are set by .bashrc, conda activate, etc., so we need to set them explicitly in the notebook:&lt;br /&gt;
&lt;br /&gt;
     import os&lt;br /&gt;
     import sys&lt;br /&gt;
     from pathlib import Path&lt;br /&gt;
     # bin directory of the environment's python interpreter&lt;br /&gt;
     bin_dir = Path(sys.executable).parent&lt;br /&gt;
     os.environ['PATH'] = f'{bin_dir}:{os.environ[&amp;quot;PATH&amp;quot;]}'&lt;br /&gt;
     os.environ['CONDA_BUILD_SYSROOT'] = str(bin_dir.parent / 'x86_64-conda-linux-gnu/sysroot')&lt;br /&gt;
&lt;br /&gt;
== Loading libraries is very slow ==&lt;br /&gt;
Conda environments on storage backed by hard drives can be extremely slow to load. If you encounter this problem, please ask&lt;br /&gt;
your support contact for an &amp;quot;SSD scratch&amp;quot; location to store environments. Currently this service is being tested and&lt;br /&gt;
deployed as needed.&lt;br /&gt;
&lt;br /&gt;
== Spawn failed: The 'ip' trait of a PICCondorSpawner instance expected a unicode string, not the NoneType None ==&lt;br /&gt;
&lt;br /&gt;
Jupyterhub could not get the host name from HTCondor's stdout, because it didn't match the expected regular expression.&lt;br /&gt;
&lt;br /&gt;
This error happens randomly from time to time; it does not imply any major problem in any of the services.&lt;br /&gt;
&lt;br /&gt;
Try to request a new notebook server.&lt;br /&gt;
&lt;br /&gt;
== 403 : Forbidden. XSRF cookie does not match POST argument ==&lt;br /&gt;
&lt;br /&gt;
The value of the &amp;quot;_xsrf&amp;quot; cookie sent by the browser does not match the expected value. This can happen for many reasons: temporary high load on the server, race conditions, temporary network instability, many open tabs in the browser, etc.&lt;br /&gt;
&lt;br /&gt;
In general, it can be solved by closing all tabs pointing to &amp;quot;jupyter.pic.es&amp;quot;, clearing the cookies and connecting again to &amp;quot;jupyter.pic.es&amp;quot;.&lt;/div&gt;</summary>
		<author><name>Eriksen</name></author>
	</entry>
	<entry>
		<id>https://pwiki.pic.es/index.php?title=JupyterHub&amp;diff=1254</id>
		<title>JupyterHub</title>
		<link rel="alternate" type="text/html" href="https://pwiki.pic.es/index.php?title=JupyterHub&amp;diff=1254"/>
		<updated>2025-10-02T11:48:08Z</updated>

		<summary type="html">&lt;p&gt;Eriksen: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
= Introduction =&lt;br /&gt;
&lt;br /&gt;
PIC offers a service for running Jupyter notebooks on CPU or GPU resources. This service is primarily thought for code developing or prototyping rather than data processing. The usage is similar to running notebooks on your personal computer but offers the advantage of developing and testing your code on different hardware configurations, as well as facilitating the scalability of the code since it is being tested in the same environment in which it would run on a mass scale. &lt;br /&gt;
&lt;br /&gt;
Since the service is strictly thought for development and small scale testing tasks, a shutdown policy for the sessions has been put in place:&lt;br /&gt;
&lt;br /&gt;
# The maximum duration for a session is 48h.&lt;br /&gt;
# After an idle period of 2 hours, the session will be closed. &lt;br /&gt;
&lt;br /&gt;
In practice that  means that you should estimate the test data volume that you work with during a session to be able to be processed in less than 48 hours.&lt;br /&gt;
&lt;br /&gt;
== How to connect to the service ==&lt;br /&gt;
&lt;br /&gt;
Go to [https://jupyter.pic.es jupyter.pic.es] to reach the login screen.&lt;br /&gt;
&lt;br /&gt;
[[File:login.png|700px|Login screen]]&lt;br /&gt;
&lt;br /&gt;
Sign in with your PIC user credentials. This will take you to the following screen.&lt;br /&gt;
&lt;br /&gt;
[[File:ScreenshotJupyterSpawn.png|700px|current]]&lt;br /&gt;
&lt;br /&gt;
Here you can choose the hardware configuration for your Jupyter session. You also have to choose the experiment (project) you will be working on during the session. After choosing a configuration and pressing start, the next screen will show you the progress of the initialisation process. Keep in mind that a job containing your Jupyter session is actually sent to the HTCondor queuing system, where it waits for available resources before being started. This usually takes less than a minute, but it can take up to a few minutes depending on resource usage.&lt;br /&gt;
&lt;br /&gt;
[[File:screen02.png|900px]]&lt;br /&gt;
&lt;br /&gt;
In the next screen you can choose the tool that you want to use for your work: a Python notebook, a Python console or a plain bash terminal.&lt;br /&gt;
For the Python environment (either notebook or console) you have two default options:&lt;br /&gt;
* the ipykernel version of Python 3&lt;br /&gt;
* the XPython version of Python 3.9, which allows you to use the integrated debugging module.&lt;br /&gt;
&lt;br /&gt;
== Virtual desktop ==&lt;br /&gt;
Further, you will see an icon with a &amp;quot;D&amp;quot; (desktop), which starts a VNC session that allows the use of programs with graphical user interfaces.&lt;br /&gt;
&lt;br /&gt;
Also, you can now find the icon of Visual Studio, an integrated development environment.&lt;br /&gt;
&lt;br /&gt;
[[File:ScreenshotJupyterlab20231103.png|700px]]&lt;br /&gt;
&lt;br /&gt;
Your python environments should appear under the Notebook and Console headers. In a later section we will show you how to create a new environment and how to remove an existing one.&lt;br /&gt;
&lt;br /&gt;
== Terminate your session and logout ==&lt;br /&gt;
&lt;br /&gt;
It is important that you terminate your session before you log out. In order to do so, go to the top page menu &amp;quot;'''File''' -&amp;gt; '''Hub Control Panel'''&amp;quot; and you will see the following screen.&lt;br /&gt;
&lt;br /&gt;
[[File:screen04.png|600px]]&lt;br /&gt;
&lt;br /&gt;
Here, click on the '''Stop My Server''' button. After that you can log out by clicking the '''Logout''' button in the right upper corner.&lt;br /&gt;
&lt;br /&gt;
=  Python virtual environments =&lt;br /&gt;
&lt;br /&gt;
This section covers the use of Python virtual environments with Jupyter.&lt;br /&gt;
&lt;br /&gt;
== Prebuilt environments == &lt;br /&gt;
&lt;br /&gt;
PIC's jupyterhub service comes with a collection of prebuilt environments located at '''/data/jupyter/software/envs'''.&lt;br /&gt;
&lt;br /&gt;
The master environment located at '''/data/jupyter/software/envs/master''' is the one used to start the jupyterlab service and the default for new notebooks.&lt;br /&gt;
&lt;br /&gt;
This is a non-exhaustive list of the packages included:&lt;br /&gt;
  - astropy=6.1.0&lt;br /&gt;
  - bokeh=3.4.1&lt;br /&gt;
  - dash=2.17.0&lt;br /&gt;
  - dask=2024.5.1&lt;br /&gt;
  - findspark=2.0.1&lt;br /&gt;
  - matplotlib=3.8.4&lt;br /&gt;
  - numpy=1.26.4&lt;br /&gt;
  - pandas=2.2.2&lt;br /&gt;
  - pillow=10.3.0&lt;br /&gt;
  - plotly=5.22.0&lt;br /&gt;
  - pyhive=0.7.0&lt;br /&gt;
  - python=3.12&lt;br /&gt;
  - pywavelets=1.4.1&lt;br /&gt;
  - scikit-image=0.22.0&lt;br /&gt;
  - scikit-learn=1.5.0&lt;br /&gt;
  - scipy=1.13.1&lt;br /&gt;
  - seaborn=0.13.2&lt;br /&gt;
  - statsmodels=0.14.2&lt;br /&gt;
&lt;br /&gt;
== Initialize conda (we highly recommend the use of mamba/micromamba) ==&lt;br /&gt;
&lt;br /&gt;
Before using conda/mamba in your bash session, you have to initialize it.&lt;br /&gt;
* For access to an available conda/mamba installation, please get in contact with your project liaison at PIC, who will give you the actual value for the '''/path/to/conda/mamba''' placeholder.&lt;br /&gt;
* If you want to use your own conda/mamba/micromamba installation, there are two recommended options&lt;br /&gt;
** '''miniforge''': a distribution with conda and mamba executables in a minimal base environment, instructions [https://github.com/conda-forge/miniforge here]&lt;br /&gt;
** '''micromamba''': a self-contained executable (micromamba) with no base environment, instructions [https://mamba.readthedocs.io/en/latest/installation/micromamba-installation.html here]&lt;br /&gt;
&lt;br /&gt;
Log onto Jupyter and start a session. On the homepage of your Jupyter session, click on the terminal button on the session dashboard on the right to open a bash terminal.&lt;br /&gt;
&lt;br /&gt;
In order to use conda/mamba/micromamba you need to initialize the shell. The initialization can be made persistent, which makes some changes to your '''~/.bashrc''' file, or you can do it every time you want to use it.&lt;br /&gt;
&lt;br /&gt;
Run the following command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
eval &amp;quot;$(/data/astro/software/miniforge3/bin/conda shell.bash hook)&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, if you are using miniforge and you want to persist the initialization:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/data/astro/software/miniforge3/bin/mamba init&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This actually changes the .bashrc file in your home directory so that the base environment is activated on login.&lt;br /&gt;
To prevent the base environment from being activated every time you log on to a node, run:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ conda config --set auto_activate_base false&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Link an existing environment to Jupyter ==&lt;br /&gt;
&lt;br /&gt;
You can find instructions on how to create your own environments, e.g. [[#Create_virtual_environments_with_venv_or_conda | here]].&lt;br /&gt;
&lt;br /&gt;
Log into Jupyter, start a session. From the session dashboard choose the bash terminal.&lt;br /&gt;
&lt;br /&gt;
Inside the terminal, activate your environment.&lt;br /&gt;
&lt;br /&gt;
For '''conda/mamba''' environments:&lt;br /&gt;
* if you created the environment without a prefix:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba activate environment&lt;br /&gt;
(...) [neissner@td110 ~]$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The parentheses (...) in front of your bash prompt show the name of your environment.&lt;br /&gt;
* if you created the environment with a prefix:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba activate /path/to/environment&lt;br /&gt;
(...) [neissner@td110 ~]$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The parentheses (...) in front of your bash prompt show the absolute path of your environment.&lt;br /&gt;
&lt;br /&gt;
For '''venv''' environments:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ source /path/to/environment/bin/activate&lt;br /&gt;
(...) [neissner@td110 ~]$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Link the environment to a Jupyter kernel. For both '''conda/mamba''' and '''venv''':&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(...) [neissner@td110 ~]$ python -m  ipykernel install --user --name=whatever_kernel_name&lt;br /&gt;
Installed kernelspec whatever_kernel_name in &lt;br /&gt;
                         /nfs/pic.es/user/n/neissner/.local/share/jupyter/kernels/whatever_kernel_name&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you don't have the '''ipykernel''' module installed in your environment you may receive an error message like the one below when trying to run the previous command.&lt;br /&gt;
&amp;lt;pre&amp;gt;No module named ipykernel&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If this is the case, you need to install it by running: '''pip install ipykernel'''&lt;br /&gt;
&lt;br /&gt;
Deactivate your environment. &lt;br /&gt;
&lt;br /&gt;
For conda:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(...) [neissner@td110 ~]$ mamba deactivate&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For venv:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(...) [neissner@td110 ~]$ deactivate&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you can exit the terminal. After refreshing the Jupyter page, your whatever_kernel_name kernel appears in the dashboard. In this example '''test''' has been used as whatever_kernel_name.&lt;br /&gt;
&lt;br /&gt;
[[File:screen05.png|700px]]&lt;br /&gt;
&lt;br /&gt;
== Unlink an environment from Jupyter ==&lt;br /&gt;
Log onto Jupyter, start a session and from the session dashboard choose the bash terminal. To remove your environment/kernel from Jupyter run:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ jupyter kernelspec uninstall whatever_kernel_name&lt;br /&gt;
Kernel specs to remove:&lt;br /&gt;
  whatever_kernel_name     /nfs/pic.es/user/n/neissner/.local/share/jupyter/kernels/whatever_kernel_name&lt;br /&gt;
Remove 1 kernel specs [y/N]: y&lt;br /&gt;
[RemoveKernelSpec] Removed /nfs/pic.es/user/n/neissner/.local/share/jupyter/kernels/whatever_kernel_name&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Keep in mind that, although not available in Jupyter anymore, the environment still exists. Whenever you need it, you can link it again.&lt;br /&gt;
&lt;br /&gt;
== Create virtual environments with venv or conda ==&lt;br /&gt;
&lt;br /&gt;
Before creating a new environment, please get in contact with your project liaison at PIC as there may be already a suitable environment for your needs in place.&lt;br /&gt;
&lt;br /&gt;
If none of the existing environments suits your needs, you can create a new environment.&lt;br /&gt;
First, create a directory in a suitable place to store the environment. For single-user environments, place them in your home under ~/env. For environments that will be shared with other project users, contact your project liaison and ask for a path on a shared storage volume that is visible to all of them.&lt;br /&gt;
&lt;br /&gt;
Once you have the location (e.g. /path/to/env), create the environment with the following commands:&lt;br /&gt;
&lt;br /&gt;
For '''venv''' environments '''(recommended)'''&lt;br /&gt;
&lt;br /&gt;
To create your_env at /path/to/env/your_env:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ cd /path/to/env&lt;br /&gt;
[neissner@td110 ~]$ python3 -m venv your_env&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Now you should be able to activate your environment and install additional modules&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ cd /path/to/env&lt;br /&gt;
[neissner@td110 ~]$ source your_env/bin/activate&lt;br /&gt;
(...)[neissner@td110 ~]$ pip install additional_module1 additional_module2 ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For '''conda/mamba''' environments&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba create --prefix /path/to/env/your_env module1 module2 ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The list of modules (module1, module2, ...) is optional. For instance, for a python3 environment with scipy you would specify: ''python=3 scipy''&lt;br /&gt;
&lt;br /&gt;
Now you should be able to activate your environment and install additional modules&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba activate /path/to/env/your_env&lt;br /&gt;
(...)[neissner@td110 ~]$ mamba install additional_module1 additional_module2 ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You can use pip install inside a mamba environment; however, resolving dependencies might require installing additional packages manually.&lt;br /&gt;
&lt;br /&gt;
== Conda / mamba configuration ==&lt;br /&gt;
&lt;br /&gt;
The behaviour of conda/mamba can be configured through the &amp;quot;$HOME/.condarc&amp;quot; file, described [https://docs.conda.io/projects/conda/en/latest/configuration.html here]. Some interesting parameters:&lt;br /&gt;
&lt;br /&gt;
* envs_dirs: The list of directories to search for named environments, e.g. the different locations where you created environments&lt;br /&gt;
&lt;br /&gt;
  envs_dirs:&lt;br /&gt;
    - /data/pic/scratch/torradeflot/envs&lt;br /&gt;
    - /data/astro/scratch/torradeflot/envs&lt;br /&gt;
    - /data/aai/scratch/torradeflot/envs&lt;br /&gt;
&lt;br /&gt;
* pkgs_dirs: The folders where conda stores downloaded packages&lt;br /&gt;
&lt;br /&gt;
  pkgs_dirs:&lt;br /&gt;
  - /data/aai/scratch_ssd/torradeflot/pkgs&lt;br /&gt;
  - /data/aai/scratch/torradeflot/pkgs&lt;br /&gt;
  - /data/pic/scratch/torradeflot/pkgs&lt;br /&gt;
&lt;br /&gt;
If `pkgs_dirs` and `envs_dirs` are on the same storage, conda will use hard links, thus optimizing disk space.&lt;br /&gt;
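Whether two paths actually share storage through hard links can be verified by comparing inode numbers; a small sketch (the same_inode helper and file names are ours; GNU stat is assumed):&lt;br /&gt;

```shell
# Files that are hard links of each other share the same inode number,
# so the data is stored only once on disk.
same_inode() {
    test "$(stat -c %i "$1")" = "$(stat -c %i "$2")"
}

# Demonstration with a freshly created hard link in a temporary folder.
tmpdir=$(mktemp -d)
echo data > "$tmpdir/a"
ln "$tmpdir/a" "$tmpdir/b"
same_inode "$tmpdir/a" "$tmpdir/b" && echo "hard linked"
```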
&lt;br /&gt;
= Proper usage of X509 based proxies =&lt;br /&gt;
&lt;br /&gt;
We found recently that the usage of proxies within a Jupyter session might cause problems, because the environment changes certain standard locations such as '''/tmp'''.&lt;br /&gt;
&lt;br /&gt;
For correct functioning, please create the proxy the following way (example for Virgo):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ /bin/voms-proxy-init --voms virgo:/virgo/ligo --out ./x509up_u$(id -u)&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ export X509_USER_PROXY=./x509up_u$(id -u)&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ ls /cvmfs/ligo.osgstorage.org&lt;br /&gt;
ls: cannot access /cvmfs/ligo.osgstorage.org: Permission denied&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here the proxy cannot be located properly, because the relative path stored in X509_USER_PROXY does not resolve from every working directory. Therefore we have to put the complete path into the variable:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ pwd&lt;br /&gt;
/nfs/pic.es/user/&amp;lt;letter&amp;gt;/&amp;lt;user&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ export X509_USER_PROXY=/nfs/pic.es/user/&amp;lt;letter&amp;gt;/&amp;lt;user&amp;gt;/x509up_u$(id -u)&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ ls /cvmfs/ligo.osgstorage.org&lt;br /&gt;
frames  powerflux  pycbc  test_access&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Software of particular interest =&lt;br /&gt;
== SageMath ==&lt;br /&gt;
&lt;br /&gt;
[https://www.sagemath.org/ SageMath] is particularly interesting for Cosmology because it allows symbolic calculations, e.g. deriving the equations of motion for the scale factor starting from a customised space-time metric.&lt;br /&gt;
&lt;br /&gt;
=== Standard cosmology examples ===&lt;br /&gt;
&lt;br /&gt;
* The Friedmann equations for the FLRW solution of the Einstein equations.&lt;br /&gt;
You can find the corresponding Notebook in any PIC terminal at '''/data/astro/software/notebooks/FLRW_cosmology.ipynb'''&lt;br /&gt;
&lt;br /&gt;
* The notebook you can find at '''/data/astro/software/notebooks/FLRW_cosmology_solutions.ipynb''' uses known analytical solutions of the FLRW cosmology and produces this image for the evolution of the scale factor:&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot_Sage05.png|300px]]&lt;br /&gt;
&lt;br /&gt;
* The notebook you can find at '''/data/astro/software/notebooks/Interior_Schwarzschild.ipynb''' shows the formalism for the interior Schwarzschild metric and displays the solutions for density and pressure of a static celestial object that is sufficiently larger than its Schwarzschild radius. The pressure for an object with constant density is shown in the image:&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot_Sage06.png|300px]]&lt;br /&gt;
&lt;br /&gt;
=== Enabling SageMath environment in Jupyter ===&lt;br /&gt;
&lt;br /&gt;
If you have never initialized mamba, run:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ /data/astro/software/centos7/conda/mambaforge_4.14.0/bin/mamba init&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ conda config --set auto_activate_base false&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After that you can enable SageMath for its use in a Jupyter notebook session: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ mamba activate /data/astro/software/envs/sage&lt;br /&gt;
(/data/astro/software/envs/sage) [&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ python -m  ipykernel install --user --name=sage&lt;br /&gt;
....&lt;br /&gt;
(/data/astro/software/envs/sage) [&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ mamba deactivate&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This creates a file in your home '''~/.local/share/jupyter/kernels/sage/kernel.json'''&lt;br /&gt;
which has to be modified to look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
 &amp;quot;argv&amp;quot;: [&lt;br /&gt;
  &amp;quot;/data/astro/software/envs/sage/bin/sage&amp;quot;,&lt;br /&gt;
  &amp;quot;--python&amp;quot;,&lt;br /&gt;
  &amp;quot;-m&amp;quot;,&lt;br /&gt;
  &amp;quot;sage.repl.ipython_kernel&amp;quot;,&lt;br /&gt;
  &amp;quot;-f&amp;quot;,&lt;br /&gt;
  &amp;quot;{connection_file}&amp;quot;&lt;br /&gt;
 ],&lt;br /&gt;
 &amp;quot;display_name&amp;quot;: &amp;quot;sage&amp;quot;,&lt;br /&gt;
 &amp;quot;language&amp;quot;: &amp;quot;sage&amp;quot;,&lt;br /&gt;
 &amp;quot;metadata&amp;quot;: {&lt;br /&gt;
  &amp;quot;debugger&amp;quot;: true&lt;br /&gt;
 }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Next time you go to your Jupyter dashboard you will find the sage environment listed there.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Dask =&lt;br /&gt;
Dask supports parallel computations in Python. The PIC Jupyterlab has an extension for launching&lt;br /&gt;
your own Dask clusters. For more information, see [[Dask|Dask documentation]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Using a singularity image as a jupyter kernel =&lt;br /&gt;
&lt;br /&gt;
Sometimes the software stack of a project is provided in the shape of a singularity image; it is then convenient to use this image as a kernel for the notebooks in jupyter.pic.es.&lt;br /&gt;
&lt;br /&gt;
The singularity image to be used as a kernel needs to fulfill some requirements, which differ depending on the programming language.&lt;br /&gt;
&lt;br /&gt;
== python jupyter kernel in a singularity image ==&lt;br /&gt;
&lt;br /&gt;
The singularity image needs to have '''python''' and the '''ipykernel''' module installed.&lt;br /&gt;
&lt;br /&gt;
* Create the folder that will host the kernel definition&lt;br /&gt;
&lt;br /&gt;
  mkdir -p $HOME/.local/share/jupyter/kernels/singularity&lt;br /&gt;
&lt;br /&gt;
* Create the '''kernel.json''' file inside it with the following content:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 {&lt;br /&gt;
   &amp;quot;argv&amp;quot;: [&lt;br /&gt;
     &amp;quot;singularity&amp;quot;,&lt;br /&gt;
     &amp;quot;exec&amp;quot;,&lt;br /&gt;
     &amp;quot;--cleanenv&amp;quot;,&lt;br /&gt;
     &amp;quot;/path/to/the/singularity/image.sif&amp;quot;,&lt;br /&gt;
     &amp;quot;python&amp;quot;,&lt;br /&gt;
     &amp;quot;-m&amp;quot;,&lt;br /&gt;
     &amp;quot;ipykernel&amp;quot;,&lt;br /&gt;
     &amp;quot;-f&amp;quot;,&lt;br /&gt;
     &amp;quot;{connection_file}&amp;quot;&lt;br /&gt;
   ],&lt;br /&gt;
   &amp;quot;language&amp;quot;: &amp;quot;python&amp;quot;,&lt;br /&gt;
   &amp;quot;display_name&amp;quot;: &amp;quot;singularity-kernel&amp;quot;&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
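Before refreshing jupyterlab, you can sanity-check the kernel definition; a small sketch, assuming the kernel folder created above:&lt;br /&gt;

```shell
# The kernel definition must be valid JSON, otherwise jupyterlab
# will silently ignore the kernel.
KERNEL="$HOME/.local/share/jupyter/kernels/singularity/kernel.json"
if [ -f "$KERNEL" ]; then
    python3 -m json.tool "$KERNEL" > /dev/null && echo "kernel.json is valid JSON"
else
    echo "kernel.json not found at $KERNEL"
fi
```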
&lt;br /&gt;
Refresh or start the jupyterlab interface, and the singularity kernel should appear in the launcher tab.&lt;br /&gt;
&lt;br /&gt;
= GPUs =&lt;br /&gt;
&lt;br /&gt;
There are several ways to identify the GPUs that are assigned to your job:&lt;br /&gt;
* Check the environment variable CUDA_VISIBLE_DEVICES: in a terminal, run &amp;quot;echo $CUDA_VISIBLE_DEVICES&amp;quot;. The variable contains a comma-separated list of GPU ids, which also tells you how many GPUs are assigned to your job. If the variable is not set, no GPUs are assigned to the job.&lt;br /&gt;
&lt;br /&gt;
* List the GPUs with nvidia-smi: in a terminal, run &amp;quot;nvidia-smi -L&amp;quot; and look for the GPUs you have been assigned. Remember their indexes (integers from 0 to 7).&lt;br /&gt;
&lt;br /&gt;
* If you have a single assigned GPU, the two steps above can be combined into one command:&lt;br /&gt;
&lt;br /&gt;
nvidia-smi -L | grep $CUDA_VISIBLE_DEVICES&lt;br /&gt;
&lt;br /&gt;
[[File:check_gpu_id_highlighted.png]]&lt;br /&gt;
&lt;br /&gt;
* In the GPU dashboard, the GPUs are identified by their index.&lt;br /&gt;
&lt;br /&gt;
[[File:check_gpu_resources_highlighted.png]]&lt;br /&gt;
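When more than one GPU is assigned, grepping the nvidia-smi output for the whole variable does not work, because CUDA_VISIBLE_DEVICES then contains commas. A small sketch that handles any number of GPUs (the helper name assigned_gpu_ids is ours; nvidia-smi is only available on GPU nodes):&lt;br /&gt;

```shell
# Print the ids in CUDA_VISIBLE_DEVICES one per line; these are the
# indexes to look for in the "nvidia-smi -L" output.
assigned_gpu_ids() {
    echo "${CUDA_VISIBLE_DEVICES:-}" | tr ',' '\n' | sed '/^$/d'
}

# Grep each assigned id in the nvidia-smi listing; the loop body is
# skipped entirely when no GPUs are assigned.
for id in $(assigned_gpu_ids); do
    nvidia-smi -L | grep "GPU ${id}:"
done
```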
&lt;br /&gt;
= Jupyterlab user guide =&lt;br /&gt;
&lt;br /&gt;
You can find the official documentation of the currently installed version of jupyterlab (3.6) [https://jupyterlab.readthedocs.io/en/4.2.x/ here], where you will find instructions on how to:&lt;br /&gt;
* [https://jupyterlab.readthedocs.io/en/4.2.x/user/commands.html Access the command palette]&lt;br /&gt;
* [https://jupyterlab.readthedocs.io/en/4.2.x/user/toc.html Build a Table Of Contents]&lt;br /&gt;
* [https://jupyterlab.readthedocs.io/en/4.2.x/user/debugger.html Debug your code]&lt;br /&gt;
&lt;br /&gt;
A set of non-official jupyterlab extensions is installed to provide additional functionality:&lt;br /&gt;
&lt;br /&gt;
== jupytext ==&lt;br /&gt;
Pair your notebooks with text files to enhance version tracking.&lt;br /&gt;
https://jupytext.readthedocs.io&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
&lt;br /&gt;
If you had a notebook (.ipynb file) containing only the cell below, tracked in a git repository&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%matplotlib inline&lt;br /&gt;
import numpy as np&lt;br /&gt;
import matplotlib.pyplot as plt&lt;br /&gt;
plt.imshow(np.random.random([10, 10]))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Different executions of the cell would produce different images, and these images are embedded (base64-encoded) inside the notebook file. Doing a '''git diff''' of the .ipynb file would therefore produce a huge output (because the image changed), even if there wasn't any change in the code. It is thus convenient to sync the notebook with a text file (e.g. a .py script) using the jupytext extension and track that file with git. Outputs, including images, as well as some additional metadata, are not added to the synced text file, so for different executions of the same notebook the diff of the text file will always be empty.&lt;br /&gt;
&lt;br /&gt;
== git ==&lt;br /&gt;
Sidebar GUI for git repository management.&lt;br /&gt;
https://github.com/jupyterlab/jupyterlab-git&lt;br /&gt;
&lt;br /&gt;
== variable inspector ==&lt;br /&gt;
Variable inspection à la Matlab&lt;br /&gt;
https://github.com/jupyterlab-contrib/jupyterlab-variableInspector&lt;br /&gt;
&lt;br /&gt;
== jupyter server proxy ==&lt;br /&gt;
&lt;br /&gt;
This extension is installed in PIC's jupyter environment. It lets you access network/web services running on the same host as the jupyterlab server from outside, through the &amp;quot;https://jupyter.pic.es/user/{username}/proxy/{port}&amp;quot; URL.&lt;br /&gt;
&lt;br /&gt;
Full documentation here: https://jupyter-server-proxy.readthedocs.io/en/latest/index.html&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
= Code samples =&lt;br /&gt;
&lt;br /&gt;
A repository with sample code can be found here: https://gitlab.pic.es/services/code-samples/&lt;br /&gt;
&lt;br /&gt;
= Troubleshooting =&lt;br /&gt;
&lt;br /&gt;
== Error 500: Internal Server Error ==&lt;br /&gt;
&lt;br /&gt;
This is a generic error meaning that the jupyterlab server failed. Possible reasons include:&lt;br /&gt;
&lt;br /&gt;
* Your HOME folder is full. Log in to &amp;quot;ui.pic.es&amp;quot; and run &amp;quot;quota&amp;quot; to check the usage vs quota. If it is full you'll have to free up space.&lt;br /&gt;
&lt;br /&gt;
== Logs ==&lt;br /&gt;
&lt;br /&gt;
The log files for the jupyterlab server are stored in &amp;quot;~/.jupyter&amp;quot;. They are created once the jupyterlab server job has finished.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Clean workspaces ==&lt;br /&gt;
&lt;br /&gt;
Jupyterlab stores the workspace status in the &amp;quot;~/.jupyter/lab/workspaces&amp;quot; folder. If you want to start with a fresh (empty) workspace, delete all the content of this folder before launching the notebook.&lt;br /&gt;
&lt;br /&gt;
    cd ~/.jupyter/lab/workspaces&lt;br /&gt;
    rm *&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== 504 Gateway timeout ==&lt;br /&gt;
&lt;br /&gt;
The notebook job is running in HTCondor but the user cannot access the notebook server; ultimately a 504 error is received.&lt;br /&gt;
&lt;br /&gt;
This is probably because there's some error when starting the jupyterlab server. First of all, shut down the notebook server and [[#Logs|check the logs]] to better identify the problem. If you don't see the source of the error, try to [[#Clean_workspaces|clean the workspaces]] and launch a notebook again.&lt;br /&gt;
&lt;br /&gt;
== Install ROOT ==&lt;br /&gt;
&lt;br /&gt;
Environment creation and kernel installation&lt;br /&gt;
&lt;br /&gt;
    $ micromamba create -p /data/pic/scratch/torradeflot/envs/mcdata root ipykernel&lt;br /&gt;
    $ micromamba activate mcdata&lt;br /&gt;
    $ python -m ipykernel install --user --name mcdata&lt;br /&gt;
&lt;br /&gt;
After doing this ROOT can be imported from a python shell, but it does not work from a notebook.&lt;br /&gt;
&lt;br /&gt;
* ROOT uses JIT compilation with Cling: https://root.cern/cling/&lt;br /&gt;
* conda provides its own set of compilation tools: gcc, gxx, gfortran&lt;br /&gt;
* Some environment variables are necessary to ensure that these two pieces work well together:&lt;br /&gt;
** PATH: to be able to find the compiler tools&lt;br /&gt;
** CONDA_BUILD_SYSROOT: needed to configure the compiler call&lt;br /&gt;
&lt;br /&gt;
These environment variables are not propagated to the notebook, because they are set by .bashrc, conda activate, etc., so we need to set them explicitly in the notebook.&lt;br /&gt;
&lt;br /&gt;
     import os&lt;br /&gt;
     import sys&lt;br /&gt;
     from pathlib import Path&lt;br /&gt;
     bin_dir = Path(sys.executable).parent&lt;br /&gt;
     os.environ['PATH'] = f"{bin_dir}:{os.environ['PATH']}"&lt;br /&gt;
     os.environ['CONDA_BUILD_SYSROOT'] = str(bin_dir.parent / 'x86_64-conda-linux-gnu/sysroot')&lt;br /&gt;
&lt;br /&gt;
== Loading libraries is very slow ==&lt;br /&gt;
Conda environments stored on hard-drive based storage can be extremely slow to load. If you encounter this problem, please ask&lt;br /&gt;
your support contact for an &amp;quot;SSD scratch&amp;quot; location to store environments. Currently this service is being tested and&lt;br /&gt;
deployed as needed.&lt;br /&gt;
&lt;br /&gt;
== Spawn failed: The 'ip' trait of a PICCondorSpawner instance expected a unicode string, not the NoneType None ==&lt;br /&gt;
&lt;br /&gt;
Jupyterhub could not get the host name from HTCondor's stdout, because it didn't match the expected regular expression.&lt;br /&gt;
&lt;br /&gt;
This error happens randomly from time to time and does not imply any major problem in any of the services.&lt;br /&gt;
&lt;br /&gt;
Try to request a new notebook server.&lt;br /&gt;
&lt;br /&gt;
== 403 : Forbidden. XSRF cookie does not match POST argument ==&lt;br /&gt;
&lt;br /&gt;
The value of the &amp;quot;_xsrf&amp;quot; cookie sent by the browser does not match the expected value. This can happen for many reasons: temporary high load on the server, race conditions, temporary network instability, many open tabs in the browser, etc.&lt;br /&gt;
&lt;br /&gt;
In general it can be solved by closing all tabs pointing to &amp;quot;jupyter.pic.es&amp;quot;, clearing the cookies and connecting to &amp;quot;jupyter.pic.es&amp;quot; again.&lt;/div&gt;</summary>
		<author><name>Eriksen</name></author>
	</entry>
	<entry>
		<id>https://pwiki.pic.es/index.php?title=Login_machines&amp;diff=1236</id>
		<title>Login machines</title>
		<link rel="alternate" type="text/html" href="https://pwiki.pic.es/index.php?title=Login_machines&amp;diff=1236"/>
		<updated>2025-07-14T11:44:24Z</updated>

		<summary type="html">&lt;p&gt;Eriksen: Created page with &amp;quot;The general login machines are ui.pic.es.&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The general login machines are ui.pic.es.&lt;/div&gt;</summary>
		<author><name>Eriksen</name></author>
	</entry>
	<entry>
		<id>https://pwiki.pic.es/index.php?title=Main_Page&amp;diff=1235</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://pwiki.pic.es/index.php?title=Main_Page&amp;diff=1235"/>
		<updated>2025-07-14T11:39:44Z</updated>

		<summary type="html">&lt;p&gt;Eriksen: /* Services */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Getting started ==&lt;br /&gt;
* [[PIC description|PIC in an image]]&lt;br /&gt;
* [[PIC account|Get a PIC account]]&lt;br /&gt;
* [[PIC_User_Manual | User manual]]&lt;br /&gt;
* [[faq| Frequently asked questions]]&lt;br /&gt;
&lt;br /&gt;
== Services ==&lt;br /&gt;
=== Distributed computing ===&lt;br /&gt;
* [[HTCondor]]&lt;br /&gt;
* [[Dask]]&lt;br /&gt;
* Spark:&lt;br /&gt;
** [[Spark on Hadoop|on Hadoop]]&lt;br /&gt;
** [[Spark_on_farm|on HTCondor]]&lt;br /&gt;
&lt;br /&gt;
=== Storage ===&lt;br /&gt;
* [[Storage]]&lt;br /&gt;
* [[Hadoop Distributed File System (HDFS)]]&lt;br /&gt;
* [[Transferring data to/from PIC]]&lt;br /&gt;
&lt;br /&gt;
=== User interfaces ===&lt;br /&gt;
* [[Login machines]]&lt;br /&gt;
* [[JupyterHub]]&lt;br /&gt;
&lt;br /&gt;
=== Other services ===&lt;br /&gt;
* [[Gitlab]]&lt;br /&gt;
* [[CosmoHub]]&lt;br /&gt;
&lt;br /&gt;
== Experiments ==&lt;br /&gt;
&lt;br /&gt;
* [[Euclid]]&lt;br /&gt;
* [[AGN ICE]]&lt;br /&gt;
* [[ICFO]]&lt;br /&gt;
&lt;br /&gt;
== More technical information ==&lt;br /&gt;
&lt;br /&gt;
* [[Storage Department]]&lt;/div&gt;</summary>
		<author><name>Eriksen</name></author>
	</entry>
	<entry>
		<id>https://pwiki.pic.es/index.php?title=JupyterHub&amp;diff=1234</id>
		<title>JupyterHub</title>
		<link rel="alternate" type="text/html" href="https://pwiki.pic.es/index.php?title=JupyterHub&amp;diff=1234"/>
		<updated>2025-07-14T11:37:51Z</updated>

		<summary type="html">&lt;p&gt;Eriksen: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
= Introduction =&lt;br /&gt;
&lt;br /&gt;
PIC offers a service for running Jupyter notebooks on CPU or GPU resources. This service is primarily intended for code development and prototyping rather than data processing. The usage is similar to running notebooks on your personal computer, but with the advantage of developing and testing your code on different hardware configurations, as well as easier scaling, since the code is tested in the same environment in which it would run at scale. &lt;br /&gt;
&lt;br /&gt;
Since the service is strictly intended for development and small-scale testing tasks, a shutdown policy for the sessions has been put in place:&lt;br /&gt;
&lt;br /&gt;
# The maximum duration for a session is 48h.&lt;br /&gt;
# After an idle period of 2 hours, the session will be closed. &lt;br /&gt;
&lt;br /&gt;
In practice this means that the test data volume you work with during a session should be small enough to be processed in less than 48 hours.&lt;br /&gt;
&lt;br /&gt;
== How to connect to the service ==&lt;br /&gt;
&lt;br /&gt;
Go to [https://jupyter.pic.es jupyter.pic.es] to see the login screen.&lt;br /&gt;
&lt;br /&gt;
[[File:login.png|700px|Login screen]]&lt;br /&gt;
&lt;br /&gt;
Sign in with your PIC user credentials. This will take you to the following screen.&lt;br /&gt;
&lt;br /&gt;
[[File:ScreenshotJupyterSpawn.png|700px|current]]&lt;br /&gt;
&lt;br /&gt;
Here you can choose the hardware configuration for your Jupyter session. You also have to choose the experiment (project) you will be working on during the Jupyter session. After choosing a configuration and pressing start, the next screen will show you the progress of the initialisation process. Keep in mind that a job containing your Jupyter session is actually sent to the HTCondor queuing system and waits for available resources before being started. This usually takes less than a minute but can take up to a few minutes depending on overall resource usage.&lt;br /&gt;
&lt;br /&gt;
[[File:screen02.png|900px]]&lt;br /&gt;
&lt;br /&gt;
In the next screen you can choose the tool that you want to use for your work: a Python notebook, a Python console or a plain bash terminal.&lt;br /&gt;
For the Python environment (either notebook or console) you have two default options:&lt;br /&gt;
* the ipykernel version of Python 3&lt;br /&gt;
* the XPython version of Python 3.9, which allows you to use the integrated debugging module.&lt;br /&gt;
&lt;br /&gt;
You will also see an icon with a &amp;quot;D&amp;quot; (Desktop), which starts a VNC session that allows the use of programs with graphical user interfaces.&lt;br /&gt;
&lt;br /&gt;
More recently, you can also find the icon of Visual Studio, an integrated development environment.&lt;br /&gt;
&lt;br /&gt;
[[File:ScreenshotJupyterlab20231103.png|700px]]&lt;br /&gt;
&lt;br /&gt;
Your python environments should appear under the Notebook and Console headers. A later section shows how to create a new environment and how to remove an existing one.&lt;br /&gt;
&lt;br /&gt;
== Terminate your session and logout ==&lt;br /&gt;
&lt;br /&gt;
It is important to terminate your session before you log out. To do so, go to the top page menu &amp;quot;'''File''' -&amp;gt; '''Hub Control Panel'''&amp;quot; and you will see the following screen.&lt;br /&gt;
&lt;br /&gt;
[[File:screen04.png|600px]]&lt;br /&gt;
&lt;br /&gt;
Here, click on the '''Stop My Server''' button. After that you can log out by clicking the '''Logout''' button in the upper right corner.&lt;br /&gt;
&lt;br /&gt;
=  Python virtual environments =&lt;br /&gt;
&lt;br /&gt;
This section covers the use of Python virtual environments with Jupyter.&lt;br /&gt;
&lt;br /&gt;
== Initialize conda (we highly recommend the use of mamba/micromamba) ==&lt;br /&gt;
&lt;br /&gt;
Before using conda/mamba in your bash session, you have to initialize it.&lt;br /&gt;
* For access to an available conda/mamba installation, please get in contact with your project liaison at PIC. They will give you the actual value for the '''/path/to/conda/mamba''' placeholder.&lt;br /&gt;
* If you want to use your own conda/mamba/micromamba installation, there are two recommended options:&lt;br /&gt;
** '''miniforge''': a distribution with conda and mamba executables in a minimal base environment, instructions [https://github.com/conda-forge/miniforge here]&lt;br /&gt;
** '''micromamba''': a self-contained executable (micromamba) with no base environment, instructions [https://mamba.readthedocs.io/en/latest/installation/micromamba-installation.html here]&lt;br /&gt;
&lt;br /&gt;
Log onto Jupyter and start a session. On the homepage of your Jupyter session, click on the terminal button on the session dashboard on the right to open a bash terminal.&lt;br /&gt;
&lt;br /&gt;
In order to use conda/mamba/micromamba you need to initialize the shell. This initialization can be persistent, which makes some changes to your '''~/.bashrc''' file, or you can do it every time you want to use it.&lt;br /&gt;
&lt;br /&gt;
Run the following command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
eval &amp;quot;$(/data/astro/software/miniforge3/bin/conda shell.bash hook)&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, if you are using miniforge and you want to persist the initialization:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/data/astro/software/miniforge3/bin/mamba init&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This actually changes the .bashrc file in your home directory so that the base environment is activated on login.&lt;br /&gt;
To avoid activating the base environment every time you log on to a node, run:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ conda config --set auto_activate_base false&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Link an existing environment to Jupyter ==&lt;br /&gt;
&lt;br /&gt;
You can find instructions on how to create your own environments, e.g. [[#Create_virtual_environments_with_venv_or_conda | here]].&lt;br /&gt;
&lt;br /&gt;
Log into Jupyter, start a session. From the session dashboard choose the bash terminal.&lt;br /&gt;
&lt;br /&gt;
Inside the terminal, activate your environment.&lt;br /&gt;
&lt;br /&gt;
For '''conda/mamba''' environments:&lt;br /&gt;
* if you created the environment without a prefix:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba activate environment&lt;br /&gt;
(...) [neissner@td110 ~]$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The parentheses (...) in front of your bash prompt show the name of your environment. &lt;br /&gt;
* if you created the environment with a prefix:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba activate /path/to/environment&lt;br /&gt;
(...) [neissner@td110 ~]$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The parentheses (...) in front of your bash prompt show the absolute path of your environment. &lt;br /&gt;
&lt;br /&gt;
For '''venv''' environments:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ source /path/to/environment/bin/activate&lt;br /&gt;
(...) [neissner@td110 ~]$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Link the environment to a Jupyter kernel. For both '''conda/mamba''' and '''venv''' environments:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(...) [neissner@td110 ~]$ python -m  ipykernel install --user --name=whatever_kernel_name&lt;br /&gt;
Installed kernelspec whatever_kernel_name in &lt;br /&gt;
                         /nfs/pic.es/user/n/neissner/.local/share/jupyter/kernels/whatever_kernel_name&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you don't have the '''ipykernel''' module installed in your environment you may receive an error message like the one below when trying to run the previous command.&lt;br /&gt;
&amp;lt;pre&amp;gt;No module named ipykernel&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If this is the case, you need to install it by running: '''pip install ipykernel'''&lt;br /&gt;
&lt;br /&gt;
Deactivate your environment. &lt;br /&gt;
&lt;br /&gt;
For conda:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(...) [neissner@td110 ~]$ mamba deactivate&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For venv:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(...) [neissner@td110 ~]$ deactivate&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you can exit the terminal. After refreshing the Jupyter page, your whatever_kernel_name kernel appears in the dashboard. In this example, '''test''' has been used as the kernel name.&lt;br /&gt;
&lt;br /&gt;
[[File:screen05.png|700px]]&lt;br /&gt;
&lt;br /&gt;
== Unlink an environment from Jupyter ==&lt;br /&gt;
Log onto Jupyter, start a session and from the session dashboard choose the bash terminal. To remove your environment/kernel from Jupyter run:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ jupyter kernelspec uninstall whatever_kernel_name&lt;br /&gt;
Kernel specs to remove:&lt;br /&gt;
  whatever_kernel_name     /nfs/pic.es/user/n/neissner/.local/share/jupyter/kernels/whatever_kernel_name&lt;br /&gt;
Remove 1 kernel specs [y/N]: y&lt;br /&gt;
[RemoveKernelSpec] Removed /nfs/pic.es/user/n/neissner/.local/share/jupyter/kernels/whatever_kernel_name&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Keep in mind that, although not available in Jupyter anymore, the environment still exists. Whenever you need it, you can link it again.&lt;br /&gt;
&lt;br /&gt;
== Create virtual environments with venv or conda ==&lt;br /&gt;
&lt;br /&gt;
Before creating a new environment, please get in contact with your project liaison at PIC as there may be already a suitable environment for your needs in place.&lt;br /&gt;
&lt;br /&gt;
If none of the existing environments suits your needs, you can create a new environment.&lt;br /&gt;
First, create a directory in a suitable place to store the environment. For single-user environments, place them in your home under ~/env. For environments that will be shared with other project users, contact your project liaison and ask them for a path in a shared storage volume that is visible to all of them.&lt;br /&gt;
&lt;br /&gt;
Once you have the location (e.g. /path/to/env), create the environment with the following commands:&lt;br /&gt;
&lt;br /&gt;
For '''venv''' environments '''(recommended)'''&lt;br /&gt;
&lt;br /&gt;
To create your_env under /path/to/env:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ cd /path/to/env&lt;br /&gt;
[neissner@td110 ~]$ python3 -m venv your_env&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Now you should be able to activate your environment and install additional modules&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ cd /path/to/env&lt;br /&gt;
[neissner@td110 ~]$ source your_env/bin/activate&lt;br /&gt;
(...)[neissner@td110 ~]$ pip install additional_module1 additional_module2 ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For '''conda/mamba''' environments&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba create --prefix /path/to/env/your_env module1 module2 ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The list of modules (module1, module2, ...) is optional. For instance, for a python3 environment with scipy you would specify: ''python=3 scipy'' &lt;br /&gt;
&lt;br /&gt;
Now you should be able to activate your environment and install additional modules&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba activate /path/to/env/your_env&lt;br /&gt;
(...)[neissner@td110 ~]$ mamba install additional_module1 additional_module2 ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You can use pip install inside a mamba environment; however, resolving dependencies might require installing additional packages manually.&lt;br /&gt;
&lt;br /&gt;
== Conda / mamba configuration ==&lt;br /&gt;
&lt;br /&gt;
The behaviour of conda/mamba can be configured through the &amp;quot;$HOME/.condarc&amp;quot; file, described [https://docs.conda.io/projects/conda/en/latest/configuration.html here]. Some interesting parameters:&lt;br /&gt;
&lt;br /&gt;
* envs_dirs: The list of directories to search for named environments. E.g.: different locations where you created environments&lt;br /&gt;
&lt;br /&gt;
  envs_dirs:&lt;br /&gt;
    - /data/pic/scratch/torradeflot/envs&lt;br /&gt;
    - /data/astro/scratch/torradeflot/envs&lt;br /&gt;
    - /data/aai/scratch/torradeflot/envs&lt;br /&gt;
&lt;br /&gt;
* pkgs_dirs: The list of directories where conda packages are stored&lt;br /&gt;
&lt;br /&gt;
  pkgs_dirs:&lt;br /&gt;
    - /data/aai/scratch_ssd/torradeflot/pkgs&lt;br /&gt;
    - /data/aai/scratch/torradeflot/pkgs&lt;br /&gt;
    - /data/pic/scratch/torradeflot/pkgs&lt;br /&gt;
&lt;br /&gt;
If `pkgs_dirs` and `envs_dirs` are in the same storage, conda will use hard links, thus optimizing disk space.&lt;br /&gt;
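As a generic illustration of why this saves space (a stdlib sketch, not conda-specific), two hard-linked names share one inode, so the data is stored only once:&lt;br /&gt;

```python
import os
import tempfile

# Create a "package" file and hard-link it into an "environment", mimicking
# what conda does when pkgs_dirs and envs_dirs live on the same filesystem.
tmp = tempfile.mkdtemp()
pkg_file = os.path.join(tmp, "pkg_file")
env_file = os.path.join(tmp, "env_file")
with open(pkg_file, "w") as f:
    f.write("package payload")
os.link(pkg_file, env_file)  # hard link: a second name for the same data

# Both names point to the same inode, so the payload exists on disk only once.
same_inode = os.stat(pkg_file).st_ino == os.stat(env_file).st_ino
print("same inode:", same_inode, "- link count:", os.stat(pkg_file).st_nlink)
```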
&lt;br /&gt;
= Proper usage of X509 based proxies =&lt;br /&gt;
&lt;br /&gt;
We found recently that the usage of proxies within a Jupyter session might cause problems, because the environment changes certain standard locations such as '''/tmp'''.&lt;br /&gt;
&lt;br /&gt;
For correct functioning, please create the proxy the following way (example for Virgo):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ /bin/voms-proxy-init --voms virgo:/virgo/ligo --out ./x509up_u$(id -u)&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ export X509_USER_PROXY=./x509up_u$(id -u)&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ ls /cvmfs/ligo.osgstorage.org&lt;br /&gt;
ls: cannot access /cvmfs/ligo.osgstorage.org: Permission denied&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here the proxy cannot be located properly, because the relative path is resolved against the current working directory. Therefore we have to put the complete path into the variable:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ pwd&lt;br /&gt;
/nfs/pic.es/user/&amp;lt;letter&amp;gt;/&amp;lt;user&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ export X509_USER_PROXY=/nfs/pic.es/user/&amp;lt;letter&amp;gt;/&amp;lt;user&amp;gt;/x509up_u$(id -u)&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ ls /cvmfs/ligo.osgstorage.org&lt;br /&gt;
frames  powerflux  pycbc  test_access&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Software of particular interest =&lt;br /&gt;
== SageMath ==&lt;br /&gt;
&lt;br /&gt;
[https://www.sagemath.org/ SageMath] is particularly interesting for Cosmology because it allows symbolic calculations, e.g. deriving the equations of motions for the scale factor starting from a customised space-time metric. &lt;br /&gt;
&lt;br /&gt;
=== Standard cosmology examples ===&lt;br /&gt;
&lt;br /&gt;
* The Friedmann equations for the FLRW solution of the Einstein equations.&lt;br /&gt;
You can find the corresponding Notebook in any PIC terminal at '''/data/astro/software/notebooks/FLRW_cosmology.ipynb'''&lt;br /&gt;
&lt;br /&gt;
* The notebook you can find at '''/data/astro/software/notebooks/FLRW_cosmology_solutions.ipynb''' uses known analytical solutions of the FLRW cosmology and produces this image for the evolution of the scale factor:&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot_Sage05.png|300px]]&lt;br /&gt;
&lt;br /&gt;
* The notebook you can find at '''/data/astro/software/notebooks/Interior_Schwarzschild.ipynb''' shows the formalism for the interior Schwarzschild metric and displays the solutions for density and pressure of a static celestial object that is sufficiently larger than its Schwarzschild radius. The pressure for an object with constant density is shown in the image:&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot_Sage06.png|300px]]&lt;br /&gt;
&lt;br /&gt;
=== Enabling SageMath environment in Jupyter ===&lt;br /&gt;
&lt;br /&gt;
If you have never initialized mamba, run:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ /data/astro/software/centos7/conda/mambaforge_4.14.0/bin/mamba init&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ conda config --set auto_activate_base false&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After that you can enable SageMath for its use in a Jupyter notebook session: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ mamba activate /data/astro/software/envs/sage&lt;br /&gt;
(/data/astro/software/envs/sage) [&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ python -m  ipykernel install --user --name=sage&lt;br /&gt;
....&lt;br /&gt;
(/data/astro/software/envs/sage) [&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ mamba deactivate&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This creates a file in your home, '''~/.local/share/jupyter/kernels/sage/kernel.json''',&lt;br /&gt;
which has to be modified to look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
 &amp;quot;argv&amp;quot;: [&lt;br /&gt;
  &amp;quot;/data/astro/software/envs/sage/bin/sage&amp;quot;,&lt;br /&gt;
  &amp;quot;--python&amp;quot;,&lt;br /&gt;
  &amp;quot;-m&amp;quot;,&lt;br /&gt;
  &amp;quot;sage.repl.ipython_kernel&amp;quot;,&lt;br /&gt;
  &amp;quot;-f&amp;quot;,&lt;br /&gt;
  &amp;quot;{connection_file}&amp;quot;&lt;br /&gt;
 ],&lt;br /&gt;
 &amp;quot;display_name&amp;quot;: &amp;quot;sage&amp;quot;,&lt;br /&gt;
 &amp;quot;language&amp;quot;: &amp;quot;sage&amp;quot;,&lt;br /&gt;
 &amp;quot;metadata&amp;quot;: {&lt;br /&gt;
  &amp;quot;debugger&amp;quot;: true&lt;br /&gt;
 }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Next time you go to your Jupyter dashboard you will find the sage environment listed there.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Dask =&lt;br /&gt;
Dask supports parallel computations in Python. The PIC Jupyterlab has an extension for launching&lt;br /&gt;
your own Dask clusters. For more information, see [[Dask|Dask documentation]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Using a singularity image as a jupyter kernel =&lt;br /&gt;
&lt;br /&gt;
Sometimes the software stack of a project is provided in the shape of a singularity image. It is then convenient to use this image as a kernel for the notebooks in jupyter.pic.es.&lt;br /&gt;
&lt;br /&gt;
The singularity image to be used as a kernel needs to fulfill some requirements, which differ depending on the programming language.&lt;br /&gt;
&lt;br /&gt;
== python jupyter kernel in a singularity image ==&lt;br /&gt;
&lt;br /&gt;
The singularity image needs to have '''python''' and the '''ipykernel''' module installed. &lt;br /&gt;
&lt;br /&gt;
* Create the folder that will host the kernel definition&lt;br /&gt;
&lt;br /&gt;
  mkdir -p $HOME/.local/share/jupyter/kernels/singularity&lt;br /&gt;
&lt;br /&gt;
* Create the '''kernel.json''' file inside it with the following content:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 {&lt;br /&gt;
   &amp;quot;argv&amp;quot;: [&lt;br /&gt;
     &amp;quot;singularity&amp;quot;,&lt;br /&gt;
     &amp;quot;exec&amp;quot;,&lt;br /&gt;
     &amp;quot;--cleanenv&amp;quot;,&lt;br /&gt;
     &amp;quot;/path/to/the/singularity/image.sif&amp;quot;,&lt;br /&gt;
     &amp;quot;python&amp;quot;,&lt;br /&gt;
     &amp;quot;-m&amp;quot;,&lt;br /&gt;
     &amp;quot;ipykernel&amp;quot;,&lt;br /&gt;
     &amp;quot;-f&amp;quot;,&lt;br /&gt;
     &amp;quot;{connection_file}&amp;quot;&lt;br /&gt;
   ],&lt;br /&gt;
   &amp;quot;language&amp;quot;: &amp;quot;python&amp;quot;,&lt;br /&gt;
   &amp;quot;display_name&amp;quot;: &amp;quot;singularity-kernel&amp;quot;&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Refresh or restart the jupyterlab interface and the singularity kernel should appear in the launcher tab.&lt;br /&gt;
&lt;br /&gt;
= GPUs =&lt;br /&gt;
&lt;br /&gt;
The way to identify the GPUs that are assigned to your job is:&lt;br /&gt;
* Check the environment variable CUDA_VISIBLE_DEVICES: in a terminal, run &amp;quot;echo $CUDA_VISIBLE_DEVICES&amp;quot;. The variable contains a comma-separated list of GPU ids, which tells you how many GPUs are assigned to your job. If the variable does not exist, there are no GPUs assigned to the job.&lt;br /&gt;
&lt;br /&gt;
* List the GPUs with nvidia-smi: in a terminal, run &amp;quot;nvidia-smi -L&amp;quot; and look for the GPUs you have been assigned. Remember their indexes (integers from 0 to 7).&lt;br /&gt;
&lt;br /&gt;
* If you have a single assigned GPU, the two steps above can be combined into the following command:&lt;br /&gt;
&lt;br /&gt;
nvidia-smi -L | grep  $CUDA_VISIBLE_DEVICES&lt;br /&gt;
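The CUDA_VISIBLE_DEVICES check can also be done from Python inside a notebook; a minimal sketch (the variable is unset when no GPU was requested):&lt;br /&gt;

```python
import os

# CUDA_VISIBLE_DEVICES holds a comma-separated list of GPU ids, e.g. "2,3".
raw = os.environ.get("CUDA_VISIBLE_DEVICES", "")
gpu_ids = [gpu_id for gpu_id in raw.split(",") if gpu_id]

if gpu_ids:
    print(f"{len(gpu_ids)} GPU(s) assigned: {gpu_ids}")
else:
    print("no GPUs assigned to this job")
```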
&lt;br /&gt;
[[File:check_gpu_id_highlighted.png]]&lt;br /&gt;
&lt;br /&gt;
* in the GPU dashboard the gpus are identified with their index&lt;br /&gt;
&lt;br /&gt;
[[File:check_gpu_resources_highlighted.png]]&lt;br /&gt;
&lt;br /&gt;
= Jupyterlab user guide =&lt;br /&gt;
&lt;br /&gt;
You can find the official documentation of the currently installed version of jupyterlab [https://jupyterlab.readthedocs.io/en/4.2.x/ here], where you will find instructions on how to:&lt;br /&gt;
* [https://jupyterlab.readthedocs.io/en/4.2.x/user/commands.html Access the command palette]&lt;br /&gt;
* [https://jupyterlab.readthedocs.io/en/4.2.x/user/toc.html Build a Table Of Contents]&lt;br /&gt;
* [https://jupyterlab.readthedocs.io/en/4.2.x/user/debugger.html Debug your code]&lt;br /&gt;
&lt;br /&gt;
A set of non-official jupyterlab extensions is installed to provide additional functionality&lt;br /&gt;
&lt;br /&gt;
== jupytext ==&lt;br /&gt;
Pair your notebooks with text files to enhance version tracking.&lt;br /&gt;
https://jupytext.readthedocs.io&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
&lt;br /&gt;
If you had a notebook (.ipynb file) containing only the cell below, tracked in a git repository&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%matplotlib inline&lt;br /&gt;
import numpy as np&lt;br /&gt;
import matplotlib.pyplot as plt&lt;br /&gt;
plt.imshow(np.random.random([10, 10]))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Different executions of the cell produce different images, and the images are embedded in a base64-encoded format inside the notebook file. A '''git diff''' of the .ipynb file would therefore produce a huge output (because the image changed), even if the code itself did not change. It is thus convenient to sync the notebook with a text file (e.g. a .py script) using the jupytext extension and track that file with git. Outputs, including images, as well as some additional metadata, are not written to the synced text file, so different executions of the same notebook always produce an empty diff.&lt;br /&gt;
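To see why the paired file diffs cleanly, here is a rough stdlib sketch (not the actual jupytext implementation) of what pairing keeps: only the cell sources, never the outputs:&lt;br /&gt;

```python
# A minimal notebook structure with one code cell whose output embeds an image
# (in real .ipynb files the image is a long base64 string).
notebook = {
    "cells": [{
        "cell_type": "code",
        "source": ["import numpy as np\n", "np.random.random([10, 10])\n"],
        "outputs": [{"data": {"image/png": "iVBORw0KGgo...base64..."}}],
    }]
}

# Keep only the code, as a jupytext-style .py pairing would; outputs are dropped.
paired_py = "".join(line for cell in notebook["cells"]
                    if cell["cell_type"] == "code"
                    for line in cell["source"])

print(paired_py)
```

Since the volatile outputs never reach the paired file, re-running the notebook leaves it byte-identical and the git diff stays empty.&lt;br /&gt;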
&lt;br /&gt;
== git ==&lt;br /&gt;
Sidebar GUI for git repository management&lt;br /&gt;
https://github.com/jupyterlab/jupyterlab-git&lt;br /&gt;
&lt;br /&gt;
== variable inspector ==&lt;br /&gt;
Variable inspection à la Matlab&lt;br /&gt;
https://github.com/jupyterlab-contrib/jupyterlab-variableInspector&lt;br /&gt;
&lt;br /&gt;
== jupyter server proxy ==&lt;br /&gt;
&lt;br /&gt;
This extension is installed in PIC's jupyter environment. It lets you access network/web services running on the same host as the jupyterlab server from outside, through the &amp;quot;https://jupyter.pic.es/user/{username}/proxy/{port}&amp;quot; URL.&lt;br /&gt;
&lt;br /&gt;
Full documentation here: https://jupyter-server-proxy.readthedocs.io/en/latest/index.html&lt;br /&gt;
...&lt;br /&gt;
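For example, any HTTP service started inside the session becomes reachable through that URL; the sketch below starts a throwaway stdlib server and only checks it locally (the proxy URL in the comment uses a hypothetical username):&lt;br /&gt;

```python
import http.server
import socketserver
import threading
import urllib.request

# Start a throwaway HTTP service on the session host; port 0 picks a free port.
httpd = socketserver.TCPServer(("127.0.0.1", 0),
                               http.server.SimpleHTTPRequestHandler)
port = httpd.server_address[1]
threading.Thread(target=httpd.serve_forever, daemon=True).start()

# Locally the service answers on http://127.0.0.1:<port>/; from outside the
# session it would be reachable at
# https://jupyter.pic.es/user/jdoe/proxy/<port>/  (hypothetical user "jdoe").
status = urllib.request.urlopen(f"http://127.0.0.1:{port}/").status
print("local status:", status)
httpd.shutdown()
```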
&lt;br /&gt;
= Code samples =&lt;br /&gt;
&lt;br /&gt;
A repository with sample code can be found here: https://gitlab.pic.es/services/code-samples/&lt;br /&gt;
&lt;br /&gt;
= Troubleshooting =&lt;br /&gt;
&lt;br /&gt;
== Error 500: Internal Server Error ==&lt;br /&gt;
&lt;br /&gt;
This is a generic error meaning that the jupyterlab server failed. Possible reasons include:&lt;br /&gt;
&lt;br /&gt;
* Your HOME folder is full. Log in to &amp;quot;ui.pic.es&amp;quot; and run &amp;quot;quota&amp;quot; to check the usage vs quota. If it is full you'll have to free up space.&lt;br /&gt;
&lt;br /&gt;
== Logs ==&lt;br /&gt;
&lt;br /&gt;
The log files for the jupyterlab server are stored in &amp;quot;~/.jupyter&amp;quot;. They are created once the jupyterlab server job has finished.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Clean workspaces ==&lt;br /&gt;
&lt;br /&gt;
Jupyterlab stores the workspace status in the &amp;quot;~/.jupyter/lab/workspaces&amp;quot; folder. If you want to start with a fresh (empty) workspace, delete all the content of this folder before launching the notebook.&lt;br /&gt;
&lt;br /&gt;
    cd ~/.jupyter/lab/workspaces&lt;br /&gt;
    rm *&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== 504 Gateway timeout ==&lt;br /&gt;
&lt;br /&gt;
The notebook job is running in HTCondor but the user cannot access the notebook server. Ultimately a 504 error is received.&lt;br /&gt;
&lt;br /&gt;
This is probably because there's some error when starting the jupyterlab server. First of all, shut down the notebook server and [[#Logs|check the logs]] to better identify the problem. If you don't see the source of the error, try to [[#Clean_workspaces|clean the workspaces]] and launch a notebook again.&lt;br /&gt;
&lt;br /&gt;
== Install ROOT ==&lt;br /&gt;
&lt;br /&gt;
Environment creation and kernel installation&lt;br /&gt;
&lt;br /&gt;
    $ micromamba create -p /data/pic/scratch/torradeflot/envs/mcdata root ipykernel&lt;br /&gt;
    $ micromamba activate mcdata&lt;br /&gt;
    $ python -m ipykernel install --user --name mcdata&lt;br /&gt;
&lt;br /&gt;
After doing this ROOT can be imported from a python shell, but it does not work from a notebook.&lt;br /&gt;
&lt;br /&gt;
* ROOT uses JIT compilation with Cling: https://root.cern/cling/&lt;br /&gt;
* conda provides its own set of compilation tools: gcc, gxx, fortran&lt;br /&gt;
* Some environment variables are necessary to ensure that these two pieces work well together:&lt;br /&gt;
** PATH: to be able to find the compiler tools&lt;br /&gt;
** CONDA_BUILD_SYSROOT: needed to configure the compiler call&lt;br /&gt;
&lt;br /&gt;
These environment variables are not propagated to the notebook, because they are set by .bashrc, conda activate, etc., so we need to set them explicitly in the notebook.&lt;br /&gt;
&lt;br /&gt;
     import os&lt;br /&gt;
     import sys&lt;br /&gt;
     from pathlib import Path&lt;br /&gt;
     bin_dir = Path(sys.executable).parent&lt;br /&gt;
     os.environ['PATH'] = str(bin_dir) + ':' + os.environ['PATH']&lt;br /&gt;
     os.environ['CONDA_BUILD_SYSROOT'] = str(bin_dir.parent / 'x86_64-conda-linux-gnu/sysroot')&lt;br /&gt;
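The same setup can be wrapped in a small helper with a sanity check at the end; this is a sketch, and the sysroot path is the conda-forge default, which may differ in other installations:&lt;br /&gt;

```python
import os
import sys
from pathlib import Path

def configure_root_env():
    # Prepend the environment's bin directory to PATH and point
    # CONDA_BUILD_SYSROOT at the conda-forge sysroot location
    bin_dir = Path(sys.executable).parent
    os.environ['PATH'] = str(bin_dir) + ':' + os.environ['PATH']
    os.environ['CONDA_BUILD_SYSROOT'] = str(bin_dir.parent / 'x86_64-conda-linux-gnu/sysroot')
    return bin_dir

bin_dir = configure_root_env()
# Sanity check: the environment's bin directory now leads PATH
print(os.environ['PATH'].split(':')[0] == str(bin_dir))  # prints True
```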
&lt;br /&gt;
== Loading libraries is very slow ==&lt;br /&gt;
Conda environments stored on hard drives can be extremely slow to load. If you encounter this problem, please ask &lt;br /&gt;
your support contact for a &amp;quot;SSD scratch&amp;quot; location to store environments. Currently this service is being tested and&lt;br /&gt;
deployed as needed.&lt;br /&gt;
&lt;br /&gt;
== Spawn failed: The 'ip' trait of a PICCondorSpawner instance expected a unicode string, not the NoneType None ==&lt;br /&gt;
&lt;br /&gt;
Jupyterhub could not get the host name from HTCondor's stdout, because it didn't match the expected regular expression.&lt;br /&gt;
&lt;br /&gt;
This error happens randomly from time to time; it does not imply any major problem in any of the services.&lt;br /&gt;
&lt;br /&gt;
Try to request a new notebook server.&lt;br /&gt;
&lt;br /&gt;
== 403 : Forbidden. XSRF cookie does not match POST argument ==&lt;br /&gt;
&lt;br /&gt;
The value of the &amp;quot;_xsrf&amp;quot; cookie sent by the browser does not match the expected value. This can happen for many reasons: temporary high load on the server, race conditions, temporary network instability, many open tabs in the browser, etc.&lt;br /&gt;
&lt;br /&gt;
In general it can be solved by closing all tabs pointing to &amp;quot;jupyter.pic.es&amp;quot;, clearing the cookies and connecting to &amp;quot;jupyter.pic.es&amp;quot; again.&lt;/div&gt;</summary>
		<author><name>Eriksen</name></author>
	</entry>
	<entry>
		<id>https://pwiki.pic.es/index.php?title=Dask&amp;diff=1233</id>
		<title>Dask</title>
		<link rel="alternate" type="text/html" href="https://pwiki.pic.es/index.php?title=Dask&amp;diff=1233"/>
		<updated>2025-07-14T11:33:13Z</updated>

		<summary type="html">&lt;p&gt;Eriksen: Created page with &amp;quot;= Dask =  Dask is a system for scaling out computations in Python. It supports distributed  calculations with large Numpy arrays and Pandas data frames, but also custom comput...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Dask =&lt;br /&gt;
&lt;br /&gt;
Dask is a system for scaling out computations in Python. It supports distributed &lt;br /&gt;
calculations with large Numpy arrays and Pandas data frames, but also custom&lt;br /&gt;
computations and dependencies.&lt;br /&gt;
&lt;br /&gt;
A notebook with instructions on how to run Dask at PIC can be found [https://gitlab.pic.es/services/code-samples/-/blob/main/computing/dask/dask_htcondor.ipynb here]&lt;/div&gt;</summary>
		<author><name>Eriksen</name></author>
	</entry>
	<entry>
		<id>https://pwiki.pic.es/index.php?title=Main_Page&amp;diff=1232</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://pwiki.pic.es/index.php?title=Main_Page&amp;diff=1232"/>
		<updated>2025-07-14T11:31:27Z</updated>

		<summary type="html">&lt;p&gt;Eriksen: /* Distributed computing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Getting started ==&lt;br /&gt;
* [[PIC description|PIC in an image]]&lt;br /&gt;
* [[PIC account|Get a PIC account]]&lt;br /&gt;
* [[PIC_User_Manual | User manual]]&lt;br /&gt;
* [[faq| Frequently asked questions]]&lt;br /&gt;
&lt;br /&gt;
== Services ==&lt;br /&gt;
=== Distributed computing ===&lt;br /&gt;
* [[HTCondor]]&lt;br /&gt;
* [[Dask]]&lt;br /&gt;
* Spark:&lt;br /&gt;
** [[Spark on Hadoop|on Hadoop]]&lt;br /&gt;
** [[Spark_on_farm|on HTCondor]]&lt;br /&gt;
&lt;br /&gt;
=== Storage ===&lt;br /&gt;
* [[Storage]]&lt;br /&gt;
* [[Hadoop Distributed File System (HDFS)]]&lt;br /&gt;
&lt;br /&gt;
* [[JupyterHub]]&lt;br /&gt;
* [[Gitlab]]&lt;br /&gt;
* [[CosmoHub]]&lt;br /&gt;
&lt;br /&gt;
* [[Transferring data to/from PIC]]&lt;br /&gt;
&lt;br /&gt;
== Experiments ==&lt;br /&gt;
&lt;br /&gt;
* [[Euclid]]&lt;br /&gt;
* [[AGN ICE]]&lt;br /&gt;
* [[ICFO]]&lt;br /&gt;
&lt;br /&gt;
== More technical information ==&lt;br /&gt;
&lt;br /&gt;
* [[Storage Department]]&lt;/div&gt;</summary>
		<author><name>Eriksen</name></author>
	</entry>
	<entry>
		<id>https://pwiki.pic.es/index.php?title=Main_Page&amp;diff=1231</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://pwiki.pic.es/index.php?title=Main_Page&amp;diff=1231"/>
		<updated>2025-07-14T11:30:38Z</updated>

		<summary type="html">&lt;p&gt;Eriksen: /* Services */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Getting started ==&lt;br /&gt;
* [[PIC description|PIC in an image]]&lt;br /&gt;
* [[PIC account|Get a PIC account]]&lt;br /&gt;
* [[PIC_User_Manual | User manual]]&lt;br /&gt;
* [[faq| Frequently asked questions]]&lt;br /&gt;
&lt;br /&gt;
== Services ==&lt;br /&gt;
=== Distributed computing ===&lt;br /&gt;
* [[HTCondor]]&lt;br /&gt;
* [[Dask]]&lt;br /&gt;
* [[Hadoop Distributed File System (HDFS)]]&lt;br /&gt;
* Spark:&lt;br /&gt;
** [[Spark on Hadoop|on Hadoop]]&lt;br /&gt;
** [[Spark_on_farm|on HTCondor]]&lt;br /&gt;
&lt;br /&gt;
* [[Storage]]&lt;br /&gt;
* [[JupyterHub]]&lt;br /&gt;
* [[Gitlab]]&lt;br /&gt;
* [[CosmoHub]]&lt;br /&gt;
&lt;br /&gt;
* [[Transferring data to/from PIC]]&lt;br /&gt;
&lt;br /&gt;
== Experiments ==&lt;br /&gt;
&lt;br /&gt;
* [[Euclid]]&lt;br /&gt;
* [[AGN ICE]]&lt;br /&gt;
* [[ICFO]]&lt;br /&gt;
&lt;br /&gt;
== More technical information ==&lt;br /&gt;
&lt;br /&gt;
* [[Storage Department]]&lt;/div&gt;</summary>
		<author><name>Eriksen</name></author>
	</entry>
	<entry>
		<id>https://pwiki.pic.es/index.php?title=JupyterHub&amp;diff=1200</id>
		<title>JupyterHub</title>
		<link rel="alternate" type="text/html" href="https://pwiki.pic.es/index.php?title=JupyterHub&amp;diff=1200"/>
		<updated>2025-03-12T20:19:30Z</updated>

		<summary type="html">&lt;p&gt;Eriksen: /* conda / mamba configuration */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
= Introduction =&lt;br /&gt;
&lt;br /&gt;
PIC offers a service for running Jupyter notebooks on CPU or GPU resources. This service is primarily intended for code development and prototyping rather than data processing. The usage is similar to running notebooks on your personal computer, but it offers the advantage of developing and testing your code on different hardware configurations, and it eases scaling up, since the code is tested in the same environment in which it would run at scale.&lt;br /&gt;
&lt;br /&gt;
Since the service is strictly intended for development and small-scale testing tasks, a shutdown policy for the sessions has been put in place:&lt;br /&gt;
&lt;br /&gt;
# The maximum duration for a session is 48h.&lt;br /&gt;
# After an idle period of 2 hours, the session will be closed. &lt;br /&gt;
&lt;br /&gt;
In practice this means that you should size the test data volume you work with during a session so that it can be processed in less than 48 hours.&lt;br /&gt;
&lt;br /&gt;
== How to connect to the service ==&lt;br /&gt;
&lt;br /&gt;
Go to [https://jupyter.pic.es jupyter.pic.es] to reach the login screen.&lt;br /&gt;
&lt;br /&gt;
[[File:login.png|700px|Login screen]]&lt;br /&gt;
&lt;br /&gt;
Sign in with your PIC user credentials. This will take you to the following screen.&lt;br /&gt;
&lt;br /&gt;
[[File:ScreenshotJupyterSpawn.png|700px|current]]&lt;br /&gt;
&lt;br /&gt;
Here you can choose the hardware configuration for your Jupyter session, as well as the experiment (project) you will be working on during the session. After choosing a configuration and pressing start, the next screen will show the progress of the initialisation process. Keep in mind that a job containing your Jupyter session is actually sent to the HTCondor queuing system and waits for available resources before being started. This usually takes less than a minute but can take up to a few minutes depending on overall resource usage.&lt;br /&gt;
&lt;br /&gt;
[[File:screen02.png|900px]]&lt;br /&gt;
&lt;br /&gt;
In the next screen you can choose the tool that you want to use for your work: a Python notebook, a Python console or a plain bash terminal.&lt;br /&gt;
For the Python environment (either notebook or console) you have two default options:&lt;br /&gt;
* the ipykernel version of Python 3&lt;br /&gt;
* the XPython version of Python 3.9, which allows you to use the integrated debugging module.&lt;br /&gt;
&lt;br /&gt;
Furthermore, there is an icon with a &amp;quot;D&amp;quot; (desktop), which starts a VNC session that allows the use of programs with graphical user interfaces.&lt;br /&gt;
&lt;br /&gt;
There is also an icon for Visual Studio, an integrated development environment.&lt;br /&gt;
&lt;br /&gt;
[[File:ScreenshotJupyterlab20231103.png|700px]]&lt;br /&gt;
&lt;br /&gt;
Your python environments should appear under the Notebook and Console headers. A later section shows how to create a new environment and how to remove an existing one.&lt;br /&gt;
&lt;br /&gt;
== Terminate your session and logout ==&lt;br /&gt;
&lt;br /&gt;
It is important that you terminate your session before you log out. In order to do so, go to the top page menu &amp;quot;'''File''' -&amp;gt; '''Hub Control Panel'''&amp;quot; and you will see the following screen.&lt;br /&gt;
&lt;br /&gt;
[[File:screen04.png|600px]]&lt;br /&gt;
&lt;br /&gt;
Here, click on the '''Stop My Server''' button. After that you can log out by clicking the '''Logout''' button in the right upper corner.&lt;br /&gt;
&lt;br /&gt;
=  Python virtual environments =&lt;br /&gt;
&lt;br /&gt;
This section covers the use of Python virtual environments with Jupyter.&lt;br /&gt;
&lt;br /&gt;
== Initialize conda (we highly recommend the use of miniforge/mambaforge) ==&lt;br /&gt;
&lt;br /&gt;
Before using conda/mamba in your bash session, you have to initialize it.&lt;br /&gt;
* For access to an available conda/mamba installation, please get in contact with your project liaison at PIC. They will give you the actual value for the '''/path/to/mambaforge''' placeholder.&lt;br /&gt;
* If you want to use your own conda/mamba installation, we recommend installing the minimal '''miniforge''' distribution (instructions [https://github.com/conda-forge/miniforge here]) or [https://github.com/mamba-org/mamba mamba/micromamba].&lt;br /&gt;
&lt;br /&gt;
Log onto Jupyter and start a session. On the homepage of your Jupyter session, click on the terminal button on the session dashboard on the right to open a bash terminal. If no specific version is needed, you can use the path shown in the example below.&lt;br /&gt;
&lt;br /&gt;
First, let's initialize conda for our bash sessions:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ /data/astro/software/alma9/conda/miniforge-24.1.2/bin/mamba init&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This modifies the .bashrc file in your home directory so that the base environment is activated on login.&lt;br /&gt;
To prevent the base environment from being activated every time you log on to a node, run:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ conda config --set auto_activate_base false&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, in order to activate the base environment you will have to run this command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ eval &amp;quot;$(/data/astro/software/alma9/conda/miniforge-24.1.2/bin/conda shell.bash hook)&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For now you can exit the terminal.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ exit&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Link an existing environment to Jupyter ==&lt;br /&gt;
&lt;br /&gt;
You can find instructions on how to create your own environments, e.g. [[#Create_virtual_environments_with_venv_or_conda | here]].&lt;br /&gt;
&lt;br /&gt;
Log into Jupyter, start a session. From the session dashboard choose the bash terminal.&lt;br /&gt;
&lt;br /&gt;
Inside the terminal, activate your environment.&lt;br /&gt;
&lt;br /&gt;
For '''conda/mamba''' environments:&lt;br /&gt;
* if you created the environment without a prefix:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba activate environment&lt;br /&gt;
(...) [neissner@td110 ~]$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The parentheses (...) in front of your bash prompt show the name of your environment.&lt;br /&gt;
* if you created the environment with a prefix:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba activate /path/to/environment&lt;br /&gt;
(...) [neissner@td110 ~]$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The parentheses (...) in front of your bash prompt show the absolute path of your environment.&lt;br /&gt;
&lt;br /&gt;
For '''venv''' environments:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ source /path/to/environment/bin/activate&lt;br /&gt;
(...) [neissner@td110 ~]$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Link the environment to a Jupyter kernel. For both '''conda/mamba''' and '''venv''' environments:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(...) [neissner@td110 ~]$ python -m  ipykernel install --user --name=whatever_kernel_name&lt;br /&gt;
Installed kernelspec whatever_kernel_name in &lt;br /&gt;
                         /nfs/pic.es/user/n/neissner/.local/share/jupyter/kernels/whatever_kernel_name&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you don't have the '''ipykernel''' module installed in your environment you may receive an error message like the one below when trying to run the previous command.&lt;br /&gt;
&amp;lt;pre&amp;gt;No module named ipykernel&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If this is the case, you need to install it by running: '''pip install ipykernel'''&lt;br /&gt;
&lt;br /&gt;
Deactivate your environment. &lt;br /&gt;
&lt;br /&gt;
For conda:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(...) [neissner@td110 ~]$ mamba deactivate&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For venv:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(...) [neissner@td110 ~]$ deactivate&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you can exit the terminal. After refreshing the Jupyter page, whatever_kernel_name appears in the dashboard. In this example, '''test''' has been used for whatever_kernel_name.&lt;br /&gt;
&lt;br /&gt;
[[File:screen05.png|700px]]&lt;br /&gt;
&lt;br /&gt;
== Unlink an environment from Jupyter ==&lt;br /&gt;
Log onto Jupyter, start a session and from the session dashboard choose the bash terminal. To remove your environment/kernel from Jupyter run:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ jupyter kernelspec uninstall whatever_kernel_name&lt;br /&gt;
Kernel specs to remove:&lt;br /&gt;
  whatever_kernel_name     /nfs/pic.es/user/n/neissner/.local/share/jupyter/kernels/whatever_kernel_name&lt;br /&gt;
Remove 1 kernel specs [y/N]: y&lt;br /&gt;
[RemoveKernelSpec] Removed /nfs/pic.es/user/n/neissner/.local/share/jupyter/kernels/whatever_kernel_name&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Keep in mind that, although not available in Jupyter anymore, the environment still exists. Whenever you need it, you can link it again.&lt;br /&gt;
&lt;br /&gt;
== Create virtual environments with venv or conda ==&lt;br /&gt;
&lt;br /&gt;
Before creating a new environment, please get in contact with your project liaison at PIC, as there may already be a suitable environment for your needs in place.&lt;br /&gt;
&lt;br /&gt;
If none of the existing environments suits your needs, you can create a new environment.&lt;br /&gt;
First, create a directory in a suitable place to store the environment. For single-user environments, place them in your home under ~/env. For environments that will be shared with other project users, contact your project liaison and ask for a path in a shared storage volume that is visible to all of them.&lt;br /&gt;
&lt;br /&gt;
Once you have the location (i.e. /path/to/env/folder), create the environment with the following commands:&lt;br /&gt;
&lt;br /&gt;
For '''venv''' environments '''(recommended)'''&lt;br /&gt;
&lt;br /&gt;
To install your_env at /path/to/env/your_env:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ cd /path/to/env&lt;br /&gt;
[neissner@td110 ~]$ python3 -m venv your_env&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Now you should be able to activate your environment and install additional modules&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ cd /path/to/env&lt;br /&gt;
[neissner@td110 ~]$ source your_env/bin/activate&lt;br /&gt;
(...)[neissner@td110 ~]$ pip install additional_module1 additional_module2 ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
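The same creation step can also be done programmatically with Python's stdlib venv module; the target location below is a throwaway placeholder:&lt;br /&gt;

```python
import tempfile
import venv
from pathlib import Path

def create_env(path, with_pip=True):
    # EnvBuilder mirrors 'python3 -m venv'; with_pip also bootstraps pip
    venv.EnvBuilder(with_pip=with_pip).create(path)
    return Path(path)

# Demo in a throwaway location; use /path/to/env/your_env in practice
target = Path(tempfile.mkdtemp()) / 'your_env'
created = create_env(target, with_pip=False)  # with_pip=True to include pip
print((created / 'pyvenv.cfg').exists())  # prints True
```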
&lt;br /&gt;
For '''conda/mamba''' environments&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba create --prefix /path/to/env/your_env module1 module2 ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The list of modules (module1, module2, ...) is optional. For instance, for a python3 environment with scipy you would specify: ''python=3 scipy'' &lt;br /&gt;
&lt;br /&gt;
Now you should be able to activate your environment and install additional modules&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba activate /path/to/env/your_env&lt;br /&gt;
(...)[neissner@td110 ~]$ mamba install additional_module1 additional_module2 ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You can use pip install inside a mamba environment; however, resolving dependencies might require installing additional packages manually.&lt;br /&gt;
&lt;br /&gt;
== Conda / mamba configuration ==&lt;br /&gt;
&lt;br /&gt;
The behaviour of conda/mamba can be configured through the &amp;quot;$HOME/.condarc&amp;quot; file, described [https://docs.conda.io/projects/conda/en/latest/configuration.html here]. Some interesting parameters:&lt;br /&gt;
&lt;br /&gt;
* envs_dirs: The list of directories to search for named environments, e.g. the different locations where you have created environments:&lt;br /&gt;
&lt;br /&gt;
  envs_dirs:&lt;br /&gt;
    - /data/pic/scratch/torradeflot/envs&lt;br /&gt;
    - /data/astro/scratch/torradeflot/envs&lt;br /&gt;
    - /data/aai/scratch/torradeflot/envs&lt;br /&gt;
&lt;br /&gt;
* pkgs_dirs: The folder where conda packages are stored&lt;br /&gt;
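For example, a minimal ~/.condarc combining both options could look like this (the paths are illustrative placeholders):&lt;br /&gt;

```yaml
# Illustrative ~/.condarc: adjust the paths to your own locations
envs_dirs:
  - /path/to/your/envs
pkgs_dirs:
  - /path/to/your/pkgs
```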
&lt;br /&gt;
= Proper usage of X509 based proxies =&lt;br /&gt;
&lt;br /&gt;
We recently found that using proxies within a Jupyter session might cause problems, because the environment changes certain standard locations such as '''/tmp'''.&lt;br /&gt;
&lt;br /&gt;
For correct functioning, please create the proxy the following way (example for Virgo):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ /bin/voms-proxy-init --voms virgo:/virgo/ligo --out ./x509up_u$(id -u)&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ export X509_USER_PROXY=./x509up_u$(id -u)&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ ls /cvmfs/ligo.osgstorage.org&lt;br /&gt;
ls: cannot access /cvmfs/ligo.osgstorage.org: Permission denied&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here the proxy cannot be located properly, because the relative path is resolved against a different working directory. Therefore we have to put the complete path into the variable:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ pwd&lt;br /&gt;
/nfs/pic.es/user/&amp;lt;letter&amp;gt;/&amp;lt;user&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ export X509_USER_PROXY=/nfs/pic.es/user/&amp;lt;letter&amp;gt;/&amp;lt;user&amp;gt;/x509up_u$(id -u)&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ ls /cvmfs/ligo.osgstorage.org&lt;br /&gt;
frames  powerflux  pycbc  test_access&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
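The absolute proxy path can also be computed programmatically, which avoids the relative-path pitfall altogether; this sketch uses only the stdlib os module:&lt;br /&gt;

```python
import os

def proxy_path(directory=None):
    # Build the absolute path of the X509 proxy file for the current user id
    directory = directory or os.path.expanduser('~')
    return os.path.abspath(os.path.join(directory, 'x509up_u' + str(os.getuid())))

os.environ['X509_USER_PROXY'] = proxy_path()
print(os.environ['X509_USER_PROXY'])
```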
&lt;br /&gt;
= Software of particular interest =&lt;br /&gt;
== SageMath ==&lt;br /&gt;
&lt;br /&gt;
[https://www.sagemath.org/ SageMath] is particularly interesting for Cosmology because it allows symbolic calculations, e.g. deriving the equations of motion for the scale factor starting from a customised space-time metric.&lt;br /&gt;
&lt;br /&gt;
=== Standard cosmology examples ===&lt;br /&gt;
&lt;br /&gt;
* The Friedmann equations for the FLRW solution of the Einstein equations.&lt;br /&gt;
You can find the corresponding Notebook in any PIC terminal at '''/data/astro/software/notebooks/FLRW_cosmology.ipynb'''&lt;br /&gt;
&lt;br /&gt;
* The notebook you can find at '''/data/astro/software/notebooks/FLRW_cosmology_solutions.ipynb''' uses known analytical solutions of the FLRW cosmology and produces this image for the evolution of the scale factor:&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot_Sage05.png|300px]]&lt;br /&gt;
&lt;br /&gt;
* The notebook you can find at '''/data/astro/software/notebooks/Interior_Schwarzschild.ipynb''' shows the formalism for the interior Schwarzschild metric and displays the solutions for density and pressure of a static celestial object that is sufficiently larger than its Schwarzschild radius. The pressure for an object with constant density is shown in the image:&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot_Sage06.png|300px]]&lt;br /&gt;
&lt;br /&gt;
=== Enabling SageMath environment in Jupyter ===&lt;br /&gt;
&lt;br /&gt;
If you have never initialized mamba, run:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ /data/astro/software/centos7/conda/mambaforge_4.14.0/bin/mamba init&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ conda config --set auto_activate_base false&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After that you can enable SageMath for its use in a Jupyter notebook session: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~] mamba activate /data/astro/software/envs/sage&lt;br /&gt;
(/data/astro/software/envs/sage) [&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ python -m  ipykernel install --user --name=sage&lt;br /&gt;
....&lt;br /&gt;
(/data/astro/software/envs/sage) [&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ mamba deactivate&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This creates a file in your home, '''~/.local/share/jupyter/kernels/sage/kernel.json''',&lt;br /&gt;
which has to be modified to look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
 &amp;quot;argv&amp;quot;: [&lt;br /&gt;
  &amp;quot;/data/astro/software/envs/sage/bin/sage&amp;quot;,&lt;br /&gt;
  &amp;quot;--python&amp;quot;,&lt;br /&gt;
  &amp;quot;-m&amp;quot;,&lt;br /&gt;
  &amp;quot;sage.repl.ipython_kernel&amp;quot;,&lt;br /&gt;
  &amp;quot;-f&amp;quot;,&lt;br /&gt;
  &amp;quot;{connection_file}&amp;quot;&lt;br /&gt;
 ],&lt;br /&gt;
 &amp;quot;display_name&amp;quot;: &amp;quot;sage&amp;quot;,&lt;br /&gt;
 &amp;quot;language&amp;quot;: &amp;quot;sage&amp;quot;,&lt;br /&gt;
 &amp;quot;metadata&amp;quot;: {&lt;br /&gt;
  &amp;quot;debugger&amp;quot;: true&lt;br /&gt;
 }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Next time you go to your Jupyter dashboard you will find the sage environment listed there.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Dask =&lt;br /&gt;
&lt;br /&gt;
A notebook with instructions on how to run Dask at PIC can be found [https://gitlab.pic.es/services/code-samples/-/blob/main/computing/dask/dask_htcondor.ipynb here]&lt;br /&gt;
&lt;br /&gt;
= Using a singularity image as a jupyter kernel =&lt;br /&gt;
&lt;br /&gt;
Sometimes the software stack of a project is provided in the shape of a singularity image. It is then convenient to use this image as a kernel for the notebooks in jupyter.pic.es.&lt;br /&gt;
&lt;br /&gt;
The singularity image to be used as a kernel needs to fulfill some requirements. Different requirements apply depending on the programming language.&lt;br /&gt;
&lt;br /&gt;
== python jupyter kernel in a singularity image ==&lt;br /&gt;
&lt;br /&gt;
The singularity image needs to have '''python''' and the '''ipykernel''' module installed.&lt;br /&gt;
&lt;br /&gt;
* Create the folder that will host the kernel definition&lt;br /&gt;
&lt;br /&gt;
  mkdir -p $HOME/.local/share/jupyter/kernels/singularity&lt;br /&gt;
&lt;br /&gt;
* Create the '''kernel.json''' file inside it with the following content:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 {&lt;br /&gt;
   &amp;quot;argv&amp;quot;: [&lt;br /&gt;
     &amp;quot;singularity&amp;quot;,&lt;br /&gt;
     &amp;quot;exec&amp;quot;,&lt;br /&gt;
     &amp;quot;--cleanenv&amp;quot;,&lt;br /&gt;
     &amp;quot;/path/to/the/singularity/image.sif&amp;quot;,&lt;br /&gt;
     &amp;quot;python&amp;quot;,&lt;br /&gt;
     &amp;quot;-m&amp;quot;,&lt;br /&gt;
     &amp;quot;ipykernel&amp;quot;,&lt;br /&gt;
     &amp;quot;-f&amp;quot;,&lt;br /&gt;
     &amp;quot;{connection_file}&amp;quot;&lt;br /&gt;
   ],&lt;br /&gt;
   &amp;quot;language&amp;quot;: &amp;quot;python&amp;quot;,&lt;br /&gt;
   &amp;quot;display_name&amp;quot;: &amp;quot;singularity-kernel&amp;quot;&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Refresh or restart the jupyterlab interface and the singularity kernel should appear in the launcher tab.&lt;br /&gt;
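The two steps above can also be scripted; this sketch writes the same kernel spec with the stdlib json module (the image path and kernel name are placeholders):&lt;br /&gt;

```python
import json
from pathlib import Path

def install_singularity_kernel(image, name, kernels_dir=None):
    # Write a kernel.json that launches ipykernel inside the singularity image
    base = Path(kernels_dir) if kernels_dir else Path.home() / '.local/share/jupyter/kernels'
    kernel_dir = base / name
    kernel_dir.mkdir(parents=True, exist_ok=True)
    spec = {
        'argv': ['singularity', 'exec', '--cleanenv', image,
                 'python', '-m', 'ipykernel', '-f', '{connection_file}'],
        'language': 'python',
        'display_name': name,
    }
    (kernel_dir / 'kernel.json').write_text(json.dumps(spec, indent=1))
    return kernel_dir / 'kernel.json'

print(install_singularity_kernel('/path/to/the/singularity/image.sif', 'singularity-kernel'))
```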
&lt;br /&gt;
= GPUs =&lt;br /&gt;
&lt;br /&gt;
The way to identify the GPUs that are assigned to your job is:&lt;br /&gt;
* check the environment variable CUDA_VISIBLE_DEVICES. In a terminal run &amp;quot;echo $CUDA_VISIBLE_DEVICES&amp;quot;. The environment variable contains a list of comma-separated GPU ids, which tells you how many GPUs are assigned to your job. If the variable does not exist, no GPUs are assigned to the job.&lt;br /&gt;
&lt;br /&gt;
* list the GPUs with nvidia-smi: in a terminal run &amp;quot;nvidia-smi -L&amp;quot; and look for the GPUs you have been assigned. Remember their indexes (integers from 0 to 7).&lt;br /&gt;
&lt;br /&gt;
* The two steps above can be done with the following command:&lt;br /&gt;
&lt;br /&gt;
nvidia-smi -L | grep  $CUDA_VISIBLE_DEVICES&lt;br /&gt;
&lt;br /&gt;
This works only when a single GPU is assigned to the job.&lt;br /&gt;
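The same check can be done from a notebook cell with a few lines of stdlib code; an unset or empty variable means no GPUs are assigned:&lt;br /&gt;

```python
import os

def assigned_gpus():
    # CUDA_VISIBLE_DEVICES holds a comma-separated list of GPU ids;
    # if it is unset or empty, no GPUs are assigned to the job
    value = os.environ.get('CUDA_VISIBLE_DEVICES', '')
    return [int(i) for i in value.split(',') if i.strip()]

print(assigned_gpus())
```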
&lt;br /&gt;
[[File:check_gpu_id_highlighted.png]]&lt;br /&gt;
&lt;br /&gt;
* in the GPU dashboard the gpus are identified with their index&lt;br /&gt;
&lt;br /&gt;
[[File:check_gpu_resources_highlighted.png]]&lt;br /&gt;
&lt;br /&gt;
= Jupyterlab user guide =&lt;br /&gt;
&lt;br /&gt;
You can find the official documentation of the currently installed version of jupyterlab (3.6) [https://jupyterlab.readthedocs.io/en/4.2.x/ here], where there are instructions on how to:&lt;br /&gt;
* [https://jupyterlab.readthedocs.io/en/4.2.x/user/commands.html Access the command palette]&lt;br /&gt;
* [https://jupyterlab.readthedocs.io/en/4.2.x/user/toc.html Build a Table Of Contents]&lt;br /&gt;
* [https://jupyterlab.readthedocs.io/en/4.2.x/user/debugger.html Debug your code]&lt;br /&gt;
&lt;br /&gt;
A set of unofficial jupyterlab extensions is installed to provide additional functionality:&lt;br /&gt;
&lt;br /&gt;
== jupytext ==&lt;br /&gt;
Pair your notebooks with text files to enhance version tracking.&lt;br /&gt;
https://jupytext.readthedocs.io&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
&lt;br /&gt;
If you had a notebook (.ipynb file) containing only the cell below, tracked in a git repository&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%matplotlib inline&lt;br /&gt;
import numpy as np&lt;br /&gt;
import matplotlib.pyplot as plt&lt;br /&gt;
plt.imshow(np.random.random([10, 10]))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Different executions of the cell would produce different images, and the images are embedded in a pseudo-binary format inside the notebook file. In this case, doing a '''git diff''' of the .ipynb file would produce a huge output (because the image changed), even if there wasn't any change in the code. It is thus convenient to sync the notebook with a text file (e.g. a .py script) using the jupytext extension and track that file with git instead. The outputs, including images, as well as some additional metadata, are not written to the synced text file, so re-executing the same notebook always produces an empty diff.&lt;br /&gt;
&lt;br /&gt;
== git ==&lt;br /&gt;
Sidebar GUI for git repo management&lt;br /&gt;
https://github.com/jupyterlab/jupyterlab-git&lt;br /&gt;
&lt;br /&gt;
== variable inspector ==&lt;br /&gt;
Variable inspection à la Matlab&lt;br /&gt;
https://github.com/jupyterlab-contrib/jupyterlab-variableInspector&lt;br /&gt;
&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
= Code samples =&lt;br /&gt;
&lt;br /&gt;
A repository with sample code can be found here: https://gitlab.pic.es/services/code-samples/&lt;br /&gt;
&lt;br /&gt;
= Troubleshooting =&lt;br /&gt;
&lt;br /&gt;
== Logs ==&lt;br /&gt;
&lt;br /&gt;
The log files for the jupyterlab server are stored in &amp;quot;~/.jupyter&amp;quot;. The log files are created once the jupyterlab server job has finished.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Clean workspaces ==&lt;br /&gt;
&lt;br /&gt;
Jupyterlab stores the workspace status in the &amp;quot;~/.jupyter/lab/workspaces&amp;quot; folder. If you want to start with a fresh (empty) workspace, delete all the content of this folder before launching the notebook.&lt;br /&gt;
&lt;br /&gt;
    cd ~/.jupyter/lab/workspaces&lt;br /&gt;
    rm *&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== 504 Gateway timeout ==&lt;br /&gt;
&lt;br /&gt;
The notebook job is running in HTCondor but the user cannot access the notebook server. Ultimately a 504 error is received.&lt;br /&gt;
&lt;br /&gt;
This is probably because there's some error when starting the jupyterlab server. First of all, shut down the notebook server and [[#Logs|check the logs]] to better identify the problem. If you don't see the source of the error, try to [[#Clean_workspaces|clean the workspaces]] and launch a notebook again.&lt;br /&gt;
&lt;br /&gt;
== Install ROOT ==&lt;br /&gt;
&lt;br /&gt;
Environment creation and kernel installation&lt;br /&gt;
&lt;br /&gt;
    $ micromamba create -p /data/pic/scratch/torradeflot/envs/mcdata root ipykernel&lt;br /&gt;
    $ micromamba activate mcdata&lt;br /&gt;
    $ python -m ipykernel install --user --name mcdata&lt;br /&gt;
&lt;br /&gt;
After doing this, ROOT can be imported from a Python shell, but it does not work from a notebook.&lt;br /&gt;
&lt;br /&gt;
* ROOT uses JIT compilation with Cling: https://root.cern/cling/&lt;br /&gt;
* conda provides its own set of compilation tools: gcc, gxx, fortran&lt;br /&gt;
* Some environment variables are necessary to ensure that these two pieces work well together:&lt;br /&gt;
** PATH: to be able to find the compiler tools&lt;br /&gt;
** CONDA_BUILD_SYSROOT: needed to configure the compiler call&lt;br /&gt;
&lt;br /&gt;
These environment variables are not propagated to the notebook, because they are set by .bashrc, conda activate, etc., so we need to set them explicitly in the notebook:&lt;br /&gt;
&lt;br /&gt;
     import os&lt;br /&gt;
     import sys&lt;br /&gt;
     from pathlib import Path&lt;br /&gt;
     bin_dir = Path(sys.executable).parent&lt;br /&gt;
     os.environ['PATH'] = str(bin_dir) + ':' + os.environ['PATH']&lt;br /&gt;
     os.environ['CONDA_BUILD_SYSROOT'] = str(bin_dir.parent / 'x86_64-conda-linux-gnu/sysroot')&lt;br /&gt;
&lt;br /&gt;
== Loading libraries is very slow ==&lt;br /&gt;
Conda environments stored on hard-drive-based storage can be extremely slow to load. If you encounter this problem, please ask &lt;br /&gt;
your support contact for an &amp;quot;SSD scratch&amp;quot; location to store your environments. Currently this service is being tested and&lt;br /&gt;
deployed as needed.&lt;/div&gt;</summary>
		<author><name>Eriksen</name></author>
	</entry>
	<entry>
		<id>https://pwiki.pic.es/index.php?title=JupyterHub&amp;diff=1199</id>
		<title>JupyterHub</title>
		<link rel="alternate" type="text/html" href="https://pwiki.pic.es/index.php?title=JupyterHub&amp;diff=1199"/>
		<updated>2025-02-24T23:30:54Z</updated>

		<summary type="html">&lt;p&gt;Eriksen: /* Install ROOT */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
= Introduction =&lt;br /&gt;
&lt;br /&gt;
PIC offers a service for running Jupyter notebooks on CPU or GPU resources. It is primarily intended for code development and prototyping rather than data processing. The usage is similar to running notebooks on your personal computer, but with the advantage of developing and testing your code on different hardware configurations, and of easier scaling, since the code is tested in the same environment in which it would run at scale. &lt;br /&gt;
&lt;br /&gt;
Since the service is strictly intended for development and small-scale testing tasks, a shutdown policy for sessions has been put in place:&lt;br /&gt;
&lt;br /&gt;
# The maximum duration for a session is 48h.&lt;br /&gt;
# After an idle period of 2 hours, the session will be closed. &lt;br /&gt;
&lt;br /&gt;
In practice this means that the test data volume you work with during a session should be small enough to be processed in less than 48 hours.&lt;br /&gt;
&lt;br /&gt;
== How to connect to the service ==&lt;br /&gt;
&lt;br /&gt;
Go to [https://jupyter.pic.es jupyter.pic.es] to see your login screen.&lt;br /&gt;
&lt;br /&gt;
[[File:login.png|700px|Login screen]]&lt;br /&gt;
&lt;br /&gt;
Sign in with your PIC user credentials. This will take you to the following screen.&lt;br /&gt;
&lt;br /&gt;
[[File:ScreenshotJupyterSpawn.png|700px|current]]&lt;br /&gt;
&lt;br /&gt;
Here you can choose the hardware configuration for your Jupyter session. You also have to choose the experiment (project) you are working on during the session. After choosing a configuration and pressing start, the next screen shows the progress of the initialisation process. Keep in mind that a job containing your Jupyter session is actually sent to the HTCondor queuing system and waits for available resources before being started. This usually takes less than a minute but can take up to a few minutes depending on our resource usage.&lt;br /&gt;
&lt;br /&gt;
[[File:screen02.png|900px]]&lt;br /&gt;
&lt;br /&gt;
In the next screen you can choose the tool that you want to use for your work: a Python notebook, a Python console or a plain bash terminal.&lt;br /&gt;
For the Python environment (either notebook or console) you have two default options:&lt;br /&gt;
* the ipykernel version of Python 3&lt;br /&gt;
* the XPython version of Python 3.9, which allows you to use the integrated debugging module.&lt;br /&gt;
&lt;br /&gt;
Additionally, you will see an icon with a &amp;quot;D&amp;quot; (desktop), which starts a VNC session that allows the use of programs with graphical user interfaces.&lt;br /&gt;
&lt;br /&gt;
You can also find the icon of Visual Studio, an integrated development environment.&lt;br /&gt;
&lt;br /&gt;
[[File:ScreenshotJupyterlab20231103.png|700px]]&lt;br /&gt;
&lt;br /&gt;
Your Python environments should appear under the Notebook and Console headers. In a later section we will show you how to create a new environment and how to remove an existing one.&lt;br /&gt;
&lt;br /&gt;
== Terminate your session and logout ==&lt;br /&gt;
&lt;br /&gt;
It is important that you terminate your session before you log out. In order to do so, go to the top page menu &amp;quot;'''File''' -&amp;gt; '''Hub Control Panel'''&amp;quot; and you will see the following screen.&lt;br /&gt;
&lt;br /&gt;
[[File:screen04.png|600px]]&lt;br /&gt;
&lt;br /&gt;
Here, click on the '''Stop My Server''' button. After that you can log out by clicking the '''Logout''' button in the upper right corner.&lt;br /&gt;
&lt;br /&gt;
=  Python virtual environments =&lt;br /&gt;
&lt;br /&gt;
This section covers the use of Python virtual environments with Jupyter.&lt;br /&gt;
&lt;br /&gt;
== Initialize conda (we highly recommend the use of miniforge/mambaforge) ==&lt;br /&gt;
&lt;br /&gt;
Before using conda/mamba in your bash session, you have to initialize it.&lt;br /&gt;
* For access to an available conda/mamba installation, please get in contact with your project liaison at PIC. They will give you the actual value for the '''/path/to/mambaforge''' placeholder.&lt;br /&gt;
* If you want to use your own conda/mamba installation, we recommend installing the minimal '''miniforge''' distribution (instructions [https://github.com/conda-forge/miniforge here]) or [https://github.com/mamba-org/mamba mamba/micromamba].&lt;br /&gt;
&lt;br /&gt;
Log onto Jupyter and start a session. On the homepage of your Jupyter session, click on the terminal button on the session dashboard on the right to open a bash terminal. If you do not need a specific version, you can use the installation path shown in the example below.&lt;br /&gt;
&lt;br /&gt;
First, let's initialize conda for our bash sessions:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ /data/astro/software/alma9/conda/miniforge-24.1.2/bin/mamba init&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This actually changes the .bashrc file in your home directory in order to activate the base environment on login.&lt;br /&gt;
To prevent the base environment from being activated every time you log on to a node, run:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ conda config --set auto_activate_base false&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, in order to activate the base environment you will have to run this command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ eval &amp;quot;$(/data/astro/software/alma9/conda/miniforge-24.1.2/bin/conda shell.bash hook)&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For now you can exit the terminal.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ exit&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Link an existing environment to Jupyter ==&lt;br /&gt;
&lt;br /&gt;
You can find instructions on how to create your own environments, e.g. [[#Create_virtual_environments_with_venv_or_conda | here]].&lt;br /&gt;
&lt;br /&gt;
Log into Jupyter, start a session. From the session dashboard choose the bash terminal.&lt;br /&gt;
&lt;br /&gt;
Inside the terminal, activate your environment.&lt;br /&gt;
&lt;br /&gt;
For '''conda/mamba''' environments:&lt;br /&gt;
* if you created the environment without a prefix:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba activate environment&lt;br /&gt;
(...) [neissner@td110 ~]$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The parentheses (...) in front of your bash prompt show the name of your environment. &lt;br /&gt;
* if you created the environment with a prefix:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba activate /path/to/environment&lt;br /&gt;
(...) [neissner@td110 ~]$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The parentheses (...) in front of your bash prompt show the absolute path of your environment. &lt;br /&gt;
&lt;br /&gt;
For '''venv''' environments:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ source /path/to/environment/bin/activate&lt;br /&gt;
(...) [neissner@td110 ~]$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Link the environment to a Jupyter kernel. For both, '''conda/mamba''' and '''venv''':&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(...) [neissner@td110 ~]$ python -m  ipykernel install --user --name=whatever_kernel_name&lt;br /&gt;
Installed kernelspec whatever_kernel_name in &lt;br /&gt;
                         /nfs/pic.es/user/n/neissner/.local/share/jupyter/kernels/whatever_kernel_name&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you don't have the '''ipykernel''' module installed in your environment you may receive an error message like the one below when trying to run the previous command.&lt;br /&gt;
&amp;lt;pre&amp;gt;No module named ipykernel&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If this is the case, you need to install it by running: '''pip install ipykernel'''&lt;br /&gt;
&lt;br /&gt;
Deactivate your environment. &lt;br /&gt;
&lt;br /&gt;
For conda:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(...) [neissner@td110 ~]$ mamba deactivate&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For venv:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(...) [neissner@td110 ~]$ deactivate&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you can exit the terminal. After refreshing the Jupyter page, your whatever_kernel_name kernel appears in the dashboard. In this example, '''test''' has been used for whatever_kernel_name.&lt;br /&gt;
&lt;br /&gt;
[[File:screen05.png|700px]]&lt;br /&gt;
&lt;br /&gt;
== Unlink an environment from Jupyter ==&lt;br /&gt;
Log onto Jupyter, start a session and from the session dashboard choose the bash terminal. To remove your environment/kernel from Jupyter run:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ jupyter kernelspec uninstall whatever_kernel_name&lt;br /&gt;
Kernel specs to remove:&lt;br /&gt;
  whatever_kernel_name     /nfs/pic.es/user/n/neissner/.local/share/jupyter/kernels/whatever_kernel_name&lt;br /&gt;
Remove 1 kernel specs [y/N]: y&lt;br /&gt;
[RemoveKernelSpec] Removed /nfs/pic.es/user/n/neissner/.local/share/jupyter/kernels/whatever_kernel_name&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Keep in mind that, although not available in Jupyter anymore, the environment still exists. Whenever you need it, you can link it again.&lt;br /&gt;
&lt;br /&gt;
== Create virtual environments with venv or conda ==&lt;br /&gt;
&lt;br /&gt;
Before creating a new environment, please get in contact with your project liaison at PIC, as there may already be a suitable environment in place for your needs.&lt;br /&gt;
&lt;br /&gt;
If none of the existing environments suits your needs, you can create a new environment.&lt;br /&gt;
First, create a directory in a suitable place to store the environment. For single-user environments, place them in your home under ~/env. For environments that will be shared with other project users, contact your project liaison and ask them for a path in a shared storage volume that is visible to all of those users.&lt;br /&gt;
&lt;br /&gt;
Once you have the location (e.g. /path/to/env), create the environment with the following commands:&lt;br /&gt;
&lt;br /&gt;
For '''venv''' environments '''(recommended)'''&lt;br /&gt;
&lt;br /&gt;
To create your_env at /path/to/env/your_env:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ cd /path/to/env&lt;br /&gt;
[neissner@td110 ~]$ python3 -m venv your_env&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Now you should be able to activate your environment and install additional modules:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ cd /path/to/env&lt;br /&gt;
[neissner@td110 ~]$ source your_env/bin/activate&lt;br /&gt;
(...)[neissner@td110 ~]$ pip install additional_module1 additional_module2 ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For '''conda/mamba''' environments&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba create --prefix /path/to/env/your_env module1 module2 ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The list of modules (module1, module2, ...) is optional. For instance, for a python3 environment with scipy you would specify: ''python=3 scipy'' &lt;br /&gt;
&lt;br /&gt;
Now you should be able to activate your environment and install additional modules&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba activate /path/to/env/your_env&lt;br /&gt;
(...)[neissner@td110 ~]$ mamba install additional_module1 additional_module2 ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You can use pip install inside a mamba environment; however, resolving dependencies might require installing additional packages manually.&lt;br /&gt;
&lt;br /&gt;
== conda / mamba configuration ==&lt;br /&gt;
&lt;br /&gt;
The behaviour of conda/mamba can be configured through the &amp;quot;$HOME/.condarc&amp;quot; file, described [https://docs.conda.io/projects/conda/en/latest/configuration.html here]. Some interesting parameters:&lt;br /&gt;
&lt;br /&gt;
* envs_dirs: The list of directories to search for named environments, e.g. the different locations where you created environments:&lt;br /&gt;
&lt;br /&gt;
  envs_dirs:&lt;br /&gt;
    - /data/pic/scratch/torradeflot/envs&lt;br /&gt;
    - /data/astro/scratch/torradeflot/envs&lt;br /&gt;
    - /data/aai/scratch/torradeflot/envs&lt;br /&gt;
&lt;br /&gt;
* pkgs_dirs: The folder where conda packages are stored&lt;br /&gt;
&lt;br /&gt;
= Proper usage of X509 based proxies =&lt;br /&gt;
&lt;br /&gt;
We found recently that the usage of proxies within a Jupyter session might cause problems, because the environment changes certain standard locations such as '''/tmp'''.&lt;br /&gt;
&lt;br /&gt;
For correct functioning, please create the proxy in the following way (example for Virgo):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ /bin/voms-proxy-init --voms virgo:/virgo/ligo --out ./x509up_u$(id -u)&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ export X509_USER_PROXY=./x509up_u$(id -u)&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ ls /cvmfs/ligo.osgstorage.org&lt;br /&gt;
ls: cannot access /cvmfs/ligo.osgstorage.org: Permission denied&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here the proxy cannot be located properly, because the relative path is not resolved as expected. Therefore, we have to put the complete path into the variable:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ pwd&lt;br /&gt;
/nfs/pic.es/user/&amp;lt;letter&amp;gt;/&amp;lt;user&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ export X509_USER_PROXY=/nfs/pic.es/user/&amp;lt;letter&amp;gt;/&amp;lt;user&amp;gt;/x509up_u$(id -u)&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ ls /cvmfs/ligo.osgstorage.org&lt;br /&gt;
frames  powerflux  pycbc  test_access&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
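The same pitfall can be avoided programmatically by always expanding the proxy path to an absolute path before exporting it. A minimal Python sketch (the user id in the file name is illustrative):&lt;br /&gt;

```python
import os
from pathlib import Path

def absolute_proxy_path(proxy_path):
    '''Expand a (possibly relative) proxy path to an absolute one.

    A relative X509_USER_PROXY breaks as soon as the working directory
    or a standard location like /tmp changes, so we always resolve it.
    '''
    return str(Path(proxy_path).expanduser().resolve())

# Example: a proxy file created in the current working directory
os.environ['X509_USER_PROXY'] = absolute_proxy_path('./x509up_u12345')
print(os.environ['X509_USER_PROXY'])  # an absolute path, safe to use anywhere
```

This mirrors the shell fix above: the variable always holds the complete path, regardless of where the session later changes directory.&lt;br /&gt;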
&lt;br /&gt;
= Software of particular interest =&lt;br /&gt;
== SageMath ==&lt;br /&gt;
&lt;br /&gt;
[https://www.sagemath.org/ SageMath] is particularly interesting for cosmology because it allows symbolic calculations, e.g. deriving the equations of motion for the scale factor starting from a customised space-time metric. &lt;br /&gt;
&lt;br /&gt;
=== Standard cosmology examples ===&lt;br /&gt;
&lt;br /&gt;
* The Friedmann equations for the FLRW solution of the Einstein equations.&lt;br /&gt;
You can find the corresponding Notebook in any PIC terminal at '''/data/astro/software/notebooks/FLRW_cosmology.ipynb'''&lt;br /&gt;
&lt;br /&gt;
* The notebook you can find at '''/data/astro/software/notebooks/FLRW_cosmology_solutions.ipynb''' uses known analytical solutions of the FLRW cosmology and produces this image for the evolution of the scale factor:&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot_Sage05.png|300px]]&lt;br /&gt;
&lt;br /&gt;
* The notebook you can find at '''/data/astro/software/notebooks/Interior_Schwarzschild.ipynb''' shows the formalism for the interior Schwarzschild metric and displays the solutions for the density and pressure of a static celestial object that is sufficiently larger than its Schwarzschild radius. The pressure for an object with constant density is shown in the image:&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot_Sage06.png|300px]]&lt;br /&gt;
&lt;br /&gt;
=== Enabling SageMath environment in Jupyter ===&lt;br /&gt;
&lt;br /&gt;
If you have never initialized mamba, run:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ /data/astro/software/centos7/conda/mambaforge_4.14.0/bin/mamba init&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ conda config --set auto_activate_base false&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After that you can enable SageMath for its use in a Jupyter notebook session: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~] mamba activate /data/astro/software/envs/sage&lt;br /&gt;
(/data/astro/software/envs/sage) [&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ python -m  ipykernel install --user --name=sage&lt;br /&gt;
....&lt;br /&gt;
(/data/astro/software/envs/sage) [&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ mamba deactivate&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This creates a file in your home, '''~/.local/share/jupyter/kernels/sage/kernel.json''',&lt;br /&gt;
which has to be modified to look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
 &amp;quot;argv&amp;quot;: [&lt;br /&gt;
  &amp;quot;/data/astro/software/envs/sage/bin/sage&amp;quot;,&lt;br /&gt;
  &amp;quot;--python&amp;quot;,&lt;br /&gt;
  &amp;quot;-m&amp;quot;,&lt;br /&gt;
  &amp;quot;sage.repl.ipython_kernel&amp;quot;,&lt;br /&gt;
  &amp;quot;-f&amp;quot;,&lt;br /&gt;
  &amp;quot;{connection_file}&amp;quot;&lt;br /&gt;
 ],&lt;br /&gt;
 &amp;quot;display_name&amp;quot;: &amp;quot;sage&amp;quot;,&lt;br /&gt;
 &amp;quot;language&amp;quot;: &amp;quot;sage&amp;quot;,&lt;br /&gt;
 &amp;quot;metadata&amp;quot;: {&lt;br /&gt;
  &amp;quot;debugger&amp;quot;: true&lt;br /&gt;
 }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Next time you go to your Jupyter dashboard you will find the sage environment listed there.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Dask =&lt;br /&gt;
&lt;br /&gt;
A notebook with instructions on how to run Dask at PIC can be found [https://gitlab.pic.es/services/code-samples/-/blob/main/computing/dask/dask_htcondor.ipynb here]&lt;br /&gt;
&lt;br /&gt;
= Using a singularity image as a jupyter kernel =&lt;br /&gt;
&lt;br /&gt;
Sometimes the software stack of a project is provided in the shape of a Singularity image. It is then convenient to use this image as a kernel for the notebooks in jupyter.pic.es.&lt;br /&gt;
&lt;br /&gt;
The singularity image to be used as a kernel needs to fulfill some requirements, which depend on the programming language.&lt;br /&gt;
&lt;br /&gt;
== python jupyter kernel in a singularity image ==&lt;br /&gt;
&lt;br /&gt;
The singularity image needs to have '''python''' and the '''ipykernel''' module installed. &lt;br /&gt;
&lt;br /&gt;
* Create the folder that will host the kernel definition&lt;br /&gt;
&lt;br /&gt;
  mkdir -p $HOME/.local/share/jupyter/kernels/singularity&lt;br /&gt;
&lt;br /&gt;
* Create the '''kernel.json''' file inside it with the following content:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 {&lt;br /&gt;
   &amp;quot;argv&amp;quot;: [&lt;br /&gt;
     &amp;quot;singularity&amp;quot;,&lt;br /&gt;
     &amp;quot;exec&amp;quot;,&lt;br /&gt;
     &amp;quot;--cleanenv&amp;quot;,&lt;br /&gt;
     &amp;quot;/path/to/the/singularity/image.sif&amp;quot;,&lt;br /&gt;
     &amp;quot;python&amp;quot;,&lt;br /&gt;
     &amp;quot;-m&amp;quot;,&lt;br /&gt;
     &amp;quot;ipykernel&amp;quot;,&lt;br /&gt;
     &amp;quot;-f&amp;quot;,&lt;br /&gt;
     &amp;quot;{connection_file}&amp;quot;&lt;br /&gt;
   ],&lt;br /&gt;
   &amp;quot;language&amp;quot;: &amp;quot;python&amp;quot;,&lt;br /&gt;
   &amp;quot;display_name&amp;quot;: &amp;quot;singularity-kernel&amp;quot;&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Refresh or start the jupyterlab interface and the singularity kernel should appear in the launcher tab.&lt;br /&gt;
&lt;br /&gt;
= GPUs =&lt;br /&gt;
&lt;br /&gt;
The way to identify the GPUs that are assigned to your job is:&lt;br /&gt;
* check the environment variable CUDA_VISIBLE_DEVICES: in a terminal, run &amp;quot;echo $CUDA_VISIBLE_DEVICES&amp;quot;. The variable contains a comma-separated list of GPU ids, which tells you how many GPUs are assigned to your job. If the variable does not exist, no GPUs are assigned to the job.&lt;br /&gt;
&lt;br /&gt;
* list the GPUs with nvidia-smi: in a terminal, run &amp;quot;nvidia-smi -L&amp;quot; and look for the GPUs you have been assigned. Remember their indexes (integers from 0 to 7).&lt;br /&gt;
&lt;br /&gt;
* The two steps above can be combined into the following command (when only a single GPU is assigned):&lt;br /&gt;
&lt;br /&gt;
 nvidia-smi -L | grep $CUDA_VISIBLE_DEVICES&lt;br /&gt;
&lt;br /&gt;
[[File:check_gpu_id_highlighted.png]]&lt;br /&gt;
&lt;br /&gt;
* in the GPU dashboard, the GPUs are identified by their index&lt;br /&gt;
&lt;br /&gt;
[[File:check_gpu_resources_highlighted.png]]&lt;br /&gt;
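From inside a notebook, the first check can also be done in Python. A minimal sketch of parsing CUDA_VISIBLE_DEVICES (the example ids are illustrative):&lt;br /&gt;

```python
import os

def assigned_gpus():
    '''Return the list of GPU ids assigned to this job.

    CUDA_VISIBLE_DEVICES holds a comma-separated list of ids; if it is
    unset or empty, no GPUs were assigned.
    '''
    raw = os.environ.get('CUDA_VISIBLE_DEVICES', '')
    return [gpu.strip() for gpu in raw.split(',') if gpu.strip()]

# Simulate an assignment of two GPUs (illustrative ids)
os.environ['CUDA_VISIBLE_DEVICES'] = '0,3'
print(assigned_gpus())       # ['0', '3']
print(len(assigned_gpus()))  # 2
```
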
&lt;br /&gt;
= Jupyterlab user guide =&lt;br /&gt;
&lt;br /&gt;
You can find the official documentation of the currently installed version of jupyterlab (3.6) [https://jupyterlab.readthedocs.io/en/4.2.x/ here]; there you will find instructions on how to:&lt;br /&gt;
* [https://jupyterlab.readthedocs.io/en/4.2.x/user/commands.html Access the command palette]&lt;br /&gt;
* [https://jupyterlab.readthedocs.io/en/4.2.x/user/toc.html Build a Table Of Contents]&lt;br /&gt;
* [https://jupyterlab.readthedocs.io/en/4.2.x/user/debugger.html Debug your code]&lt;br /&gt;
&lt;br /&gt;
A set of non-official jupyterlab extensions is installed to provide additional functionality.&lt;br /&gt;
&lt;br /&gt;
== jupytext ==&lt;br /&gt;
Pair your notebooks with text files to enhance version tracking.&lt;br /&gt;
https://jupytext.readthedocs.io&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
&lt;br /&gt;
If you had a notebook (.ipynb file) containing only the cell below, tracked in a git repository:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%matplotlib inline&lt;br /&gt;
import numpy as np&lt;br /&gt;
import matplotlib.pyplot as plt&lt;br /&gt;
plt.imshow(np.random.random([10, 10]))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Different executions of the cell would produce different images, and the images are embedded in a pseudo-binary format inside the notebook file. In this case, doing a '''git diff''' of the .ipynb file would produce a huge output (because the image changed), even if there wasn't any change in the code. It is thus convenient to sync the notebook with a text file (e.g. a .py script) using the jupytext extension and track this one with git. The outputs, including images, as well as some additional metadata, won't be added to the synced text file. So in the case of different executions of the same notebook, the diff will always be empty.&lt;br /&gt;
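The effect can be illustrated with plain Python: two executions of the same notebook differ only in their outputs, so once outputs and execution counts are stripped (which is essentially what the synced text file gives you), the two revisions compare equal. A minimal sketch on hand-made notebook dictionaries (this is not the jupytext implementation, just the idea behind it):&lt;br /&gt;

```python
import json

def strip_outputs(nb):
    '''Return a copy of a notebook dict with outputs and counts removed.'''
    cleaned = json.loads(json.dumps(nb))  # deep copy via JSON round-trip
    for cell in cleaned.get('cells', []):
        cell.pop('outputs', None)          # embedded images live here
        cell.pop('execution_count', None)  # changes on every run
    return cleaned

# Two executions of the same cell: identical source, different image output
run1 = {'cells': [{'source': 'plt.imshow(img)', 'outputs': ['image A'], 'execution_count': 1}]}
run2 = {'cells': [{'source': 'plt.imshow(img)', 'outputs': ['image B'], 'execution_count': 7}]}

print(run1 == run2)                                # False: raw notebooks differ
print(strip_outputs(run1) == strip_outputs(run2))  # True: the diff would be empty
```
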
&lt;br /&gt;
== git ==&lt;br /&gt;
Sidebar GUI for git repository management&lt;br /&gt;
https://github.com/jupyterlab/jupyterlab-git&lt;br /&gt;
&lt;br /&gt;
== variable inspector ==&lt;br /&gt;
Variable inspection à la Matlab&lt;br /&gt;
https://github.com/jupyterlab-contrib/jupyterlab-variableInspector&lt;br /&gt;
&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
= Code samples =&lt;br /&gt;
&lt;br /&gt;
A repository with sample code can be found here: https://gitlab.pic.es/services/code-samples/&lt;br /&gt;
&lt;br /&gt;
= Troubleshooting =&lt;br /&gt;
&lt;br /&gt;
== Logs ==&lt;br /&gt;
&lt;br /&gt;
The log files for the JupyterLab server are stored in &amp;quot;~/.jupyter&amp;quot;. They are written once the JupyterLab server job has finished.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Clean workspaces ==&lt;br /&gt;
&lt;br /&gt;
Jupyterlab stores the workspace status in the &amp;quot;~/.jupyter/lab/workspaces&amp;quot; folder. If you want to start with a fresh (empty) workspace, delete all the content of this folder before launching the notebook.&lt;br /&gt;
&lt;br /&gt;
    cd ~/.jupyter/lab/workspaces&lt;br /&gt;
    rm *&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== 504 Gateway timeout ==&lt;br /&gt;
&lt;br /&gt;
The notebook job is running in HTCondor but the user cannot access the notebook server. Ultimately, a 504 error is received.&lt;br /&gt;
&lt;br /&gt;
This usually means something went wrong when starting the JupyterLab server. First of all, shut down the notebook server and [[#Logs|check the logs]] to better identify the problem. If you don't see the source of the error, try to [[#Clean_workspaces|clean the workspaces]] and launch a notebook again.&lt;br /&gt;
&lt;br /&gt;
== Install ROOT ==&lt;br /&gt;
&lt;br /&gt;
Environment creation and kernel installation&lt;br /&gt;
&lt;br /&gt;
    $ micromamba create -p /data/pic/scratch/torradeflot/envs/mcdata root ipykernel&lt;br /&gt;
    $ micromamba activate mcdata&lt;br /&gt;
    $ python -m ipykernel install --user --name mcdata&lt;br /&gt;
&lt;br /&gt;
After doing this, ROOT can be imported from a Python shell, but it does not work from a notebook.&lt;br /&gt;
&lt;br /&gt;
* ROOT uses JIT compilation with Cling: https://root.cern/cling/&lt;br /&gt;
* conda provides its own set of compilation tools: gcc, gxx, fortran&lt;br /&gt;
* Some environment variables are necessary to ensure that these two pieces work well together:&lt;br /&gt;
** PATH: to be able to find the compiler tools&lt;br /&gt;
** CONDA_BUILD_SYSROOT: needed to configure the compiler call&lt;br /&gt;
&lt;br /&gt;
These environment variables are not propagated to the notebook, because they are set by .bashrc, conda activate, etc., so we need to set them explicitly in the notebook:&lt;br /&gt;
&lt;br /&gt;
     import os&lt;br /&gt;
     import sys&lt;br /&gt;
     from pathlib import Path&lt;br /&gt;
     bin_dir = Path(sys.executable).parent&lt;br /&gt;
     os.environ['PATH'] = str(bin_dir) + ':' + os.environ['PATH']&lt;br /&gt;
     os.environ['CONDA_BUILD_SYSROOT'] = str(bin_dir.parent / 'x86_64-conda-linux-gnu/sysroot')&lt;br /&gt;
&lt;br /&gt;
== Loading libraries is very slow ==&lt;br /&gt;
Conda environments stored on hard-drive-based storage can be extremely slow to load. If you encounter this problem, please ask &lt;br /&gt;
your support contact for an &amp;quot;SSD scratch&amp;quot; location to store your environments. Currently this service is being tested and&lt;br /&gt;
deployed as needed.&lt;/div&gt;</summary>
		<author><name>Eriksen</name></author>
	</entry>
	<entry>
		<id>https://pwiki.pic.es/index.php?title=JupyterHub&amp;diff=1198</id>
		<title>JupyterHub</title>
		<link rel="alternate" type="text/html" href="https://pwiki.pic.es/index.php?title=JupyterHub&amp;diff=1198"/>
		<updated>2025-02-24T23:30:44Z</updated>

		<summary type="html">&lt;p&gt;Eriksen: /* Install ROOT */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
= Introduction =&lt;br /&gt;
&lt;br /&gt;
PIC offers a service for running Jupyter notebooks on CPU or GPU resources. It is primarily intended for code development and prototyping rather than data processing. The usage is similar to running notebooks on your personal computer, but with the advantage of developing and testing your code on different hardware configurations, and of easier scaling, since the code is tested in the same environment in which it would run at scale. &lt;br /&gt;
&lt;br /&gt;
Since the service is strictly intended for development and small-scale testing tasks, a shutdown policy for sessions has been put in place:&lt;br /&gt;
&lt;br /&gt;
# The maximum duration for a session is 48h.&lt;br /&gt;
# After an idle period of 2 hours, the session will be closed. &lt;br /&gt;
&lt;br /&gt;
In practice this means that the test data volume you work with during a session should be small enough to be processed in less than 48 hours.&lt;br /&gt;
&lt;br /&gt;
== How to connect to the service ==&lt;br /&gt;
&lt;br /&gt;
Go to [https://jupyter.pic.es jupyter.pic.es] to see your login screen.&lt;br /&gt;
&lt;br /&gt;
[[File:login.png|700px|Login screen]]&lt;br /&gt;
&lt;br /&gt;
Sign in with your PIC user credentials. This will take you to the following screen.&lt;br /&gt;
&lt;br /&gt;
[[File:ScreenshotJupyterSpawn.png|700px|current]]&lt;br /&gt;
&lt;br /&gt;
Here you can choose the hardware configuration for your Jupyter session. You also have to choose the experiment (project) you are working on during the session. After choosing a configuration and pressing start, the next screen shows the progress of the initialisation process. Keep in mind that a job containing your Jupyter session is actually sent to the HTCondor queuing system and waits for available resources before being started. This usually takes less than a minute but can take up to a few minutes depending on our resource usage.&lt;br /&gt;
&lt;br /&gt;
[[File:screen02.png|900px]]&lt;br /&gt;
&lt;br /&gt;
In the next screen you can choose the tool that you want to use for your work: a Python notebook, a Python console or a plain bash terminal.&lt;br /&gt;
For the Python environment (either notebook or console) you have two default options:&lt;br /&gt;
* the ipykernel version of Python 3&lt;br /&gt;
* the XPython version of Python 3.9, which allows you to use the integrated debugging module.&lt;br /&gt;
&lt;br /&gt;
Additionally, you will see an icon with a &amp;quot;D&amp;quot; (desktop), which starts a VNC session that allows the use of programs with graphical user interfaces.&lt;br /&gt;
&lt;br /&gt;
You can also find the icon of Visual Studio, an integrated development environment.&lt;br /&gt;
&lt;br /&gt;
[[File:ScreenshotJupyterlab20231103.png|700px]]&lt;br /&gt;
&lt;br /&gt;
Your Python environments should appear under the Notebook and Console headers. In a later section we will show you how to create a new environment and how to remove an existing one.&lt;br /&gt;
&lt;br /&gt;
== Terminate your session and logout ==&lt;br /&gt;
&lt;br /&gt;
It is important that you terminate your session before you log out. In order to do so, go to the top page menu &amp;quot;'''File''' -&amp;gt; '''Hub Control Panel'''&amp;quot; and you will see the following screen.&lt;br /&gt;
&lt;br /&gt;
[[File:screen04.png|600px]]&lt;br /&gt;
&lt;br /&gt;
Here, click on the '''Stop My Server''' button. After that you can log out by clicking the '''Logout''' button in the right upper corner.&lt;br /&gt;
&lt;br /&gt;
=  Python virtual environments =&lt;br /&gt;
&lt;br /&gt;
This section covers the use of Python virtual environments with Jupyter.&lt;br /&gt;
&lt;br /&gt;
== Initialize conda (we highly recommend the use of miniforge/mambaforge) ==&lt;br /&gt;
&lt;br /&gt;
Before using conda/mamba in your bash session, you have to initialize it.&lt;br /&gt;
* For access to an available conda/mamba installation, please get in contact with your project liaison at PIC, who will give you the actual value for the '''/path/to/mambaforge''' placeholder.&lt;br /&gt;
* If you want to use your own conda/mamba installation, we recommend installing the minimal '''miniforge''' distribution (instructions [https://github.com/conda-forge/miniforge here]) or [https://github.com/mamba-org/mamba mamba/micromamba].&lt;br /&gt;
&lt;br /&gt;
Log onto Jupyter and start a session. On the homepage of your Jupyter session, click on the terminal button on the session dashboard on the right to open a bash terminal. If no specific version is needed you can use the link provided in the example.&lt;br /&gt;
&lt;br /&gt;
First, let's initialize conda for our bash sessions:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ /data/astro/software/alma9/conda/miniforge-24.1.2/bin/mamba init&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This modifies the .bashrc file in your home directory so that the base environment is activated on login.&lt;br /&gt;
To prevent the base environment from being activated every time you log on to a node, run:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ conda config --set auto_activate_base false&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, in order to activate the base environment you will have to run this command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ eval &amp;quot;$(/data/astro/software/alma9/conda/miniforge-24.1.2/bin/conda shell.bash hook)&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For now you can exit the terminal.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ exit&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Link an existing environment to Jupyter ==&lt;br /&gt;
&lt;br /&gt;
You can find instructions on how to create your own environments, e.g. [[#Create_virtual_environments_with_venv_or_conda | here]].&lt;br /&gt;
&lt;br /&gt;
Log into Jupyter, start a session. From the session dashboard choose the bash terminal.&lt;br /&gt;
&lt;br /&gt;
Inside the terminal, activate your environment.&lt;br /&gt;
&lt;br /&gt;
For '''conda/mamba''' environments:&lt;br /&gt;
* if you created the environment without a prefix:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba activate environment&lt;br /&gt;
(...) [neissner@td110 ~]$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The parentheses (...) in front of your bash prompt show the name of your environment.&lt;br /&gt;
* if you created the environment with a prefix:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba activate /path/to/environment&lt;br /&gt;
(...) [neissner@td110 ~]$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The parentheses (...) in front of your bash prompt show the absolute path of your environment.&lt;br /&gt;
&lt;br /&gt;
For '''venv''' environments:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ source /path/to/environment/bin/activate&lt;br /&gt;
(...) [neissner@td110 ~]$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Link the environment to a Jupyter kernel. For both '''conda/mamba''' and '''venv''':&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(...) [neissner@td110 ~]$ python -m  ipykernel install --user --name=whatever_kernel_name&lt;br /&gt;
Installed kernelspec whatever_kernel_name in &lt;br /&gt;
                         /nfs/pic.es/user/n/neissner/.local/share/jupyter/kernels/whatever_kernel_name&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you don't have the '''ipykernel''' module installed in your environment you may receive an error message like the one below when trying to run the previous command.&lt;br /&gt;
&amp;lt;pre&amp;gt;No module named ipykernel&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If this is the case, you need to install it by running: '''pip install ipykernel'''&lt;br /&gt;
&lt;br /&gt;
Deactivate your environment. &lt;br /&gt;
&lt;br /&gt;
For conda:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(...) [neissner@td110 ~]$ mamba deactivate&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For venv:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(...) [neissner@td110 ~]$ deactivate&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you can exit the terminal. After refreshing the Jupyter page, your whatever_kernel_name kernel appears in the dashboard. In this example, '''test''' has been used for whatever_kernel_name.&lt;br /&gt;
&lt;br /&gt;
[[File:screen05.png|700px]]&lt;br /&gt;
&lt;br /&gt;
== Unlink an environment from Jupyter ==&lt;br /&gt;
Log onto Jupyter, start a session and from the session dashboard choose the bash terminal. To remove your environment/kernel from Jupyter run:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ jupyter kernelspec uninstall whatever_kernel_name&lt;br /&gt;
Kernel specs to remove:&lt;br /&gt;
  whatever_kernel_name     /nfs/pic.es/user/n/neissner/.local/share/jupyter/kernels/whatever_kernel_name&lt;br /&gt;
Remove 1 kernel specs [y/N]: y&lt;br /&gt;
[RemoveKernelSpec] Removed /nfs/pic.es/user/n/neissner/.local/share/jupyter/kernels/whatever_kernel_name&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Keep in mind that, although not available in Jupyter anymore, the environment still exists. Whenever you need it, you can link it again.&lt;br /&gt;
&lt;br /&gt;
== Create virtual environments with venv or conda ==&lt;br /&gt;
&lt;br /&gt;
Before creating a new environment, please get in contact with your project liaison at PIC, as there may already be a suitable environment for your needs in place.&lt;br /&gt;
&lt;br /&gt;
If none of the existing environments suits your needs, you can create a new environment.&lt;br /&gt;
First, create a directory in a suitable place to store the environment. For single-user environments, place them in your home under ~/env. For environments that will be shared with other project users, contact your project liaison and ask for a path in a shared storage volume that is visible to all of them.&lt;br /&gt;
&lt;br /&gt;
Once you have the location (e.g. /path/to/env/folder), create the environment with the following commands:&lt;br /&gt;
&lt;br /&gt;
For '''venv''' environments '''(recommended)'''&lt;br /&gt;
&lt;br /&gt;
To install your_env at /path/to/env/your_env:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ cd /path/to/env&lt;br /&gt;
[neissner@td110 ~]$ python3 -m venv your_env&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Now you should be able to activate your environment and install additional modules&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ cd /path/to/env&lt;br /&gt;
[neissner@td110 ~]$ source your_env/bin/activate&lt;br /&gt;
(...)[neissner@td110 ~]$ pip install additional_module1 additional_module2 ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For '''conda/mamba''' environments&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba create --prefix /path/to/env/your_env module1 module2 ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The list of modules (module1, module2, ...) is optional. For instance, for a python3 environment with scipy you would specify: ''python=3 scipy''&lt;br /&gt;
&lt;br /&gt;
Now you should be able to activate your environment and install additional modules&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba activate /path/to/env/your_env&lt;br /&gt;
(...)[neissner@td110 ~]$ mamba install additional_module1 additional_module2 ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You can use pip install inside a mamba environment; however, resolving dependencies might require installing additional packages manually.&lt;br /&gt;
&lt;br /&gt;
== conda / mamba configuration ==&lt;br /&gt;
&lt;br /&gt;
The behaviour of conda/mamba can be configured through the &amp;quot;$HOME/.condarc&amp;quot; file, described [https://docs.conda.io/projects/conda/en/latest/configuration.html here]. Some interesting parameters:&lt;br /&gt;
&lt;br /&gt;
* envs_dirs: the list of directories to search for named environments, e.g. the different locations where you created environments:&lt;br /&gt;
&lt;br /&gt;
  envs_dirs:&lt;br /&gt;
    - /data/pic/scratch/torradeflot/envs&lt;br /&gt;
    - /data/astro/scratch/torradeflot/envs&lt;br /&gt;
    - /data/aai/scratch/torradeflot/envs&lt;br /&gt;
&lt;br /&gt;
* pkgs_dirs: the list of directories where downloaded conda packages are cached&lt;br /&gt;
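For illustration, a minimal &amp;quot;$HOME/.condarc&amp;quot; combining both parameters could look like the sketch below; the paths are hypothetical examples, not official PIC locations.&lt;br /&gt;

```yaml
# Illustrative ~/.condarc sketch; replace the paths with your own locations
envs_dirs:
  - /data/astro/scratch/USERNAME/envs   # where named environments are searched for
pkgs_dirs:
  - /data/astro/scratch/USERNAME/pkgs   # where downloaded packages are cached
```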
&lt;br /&gt;
= Proper usage of X509 based proxies =&lt;br /&gt;
&lt;br /&gt;
We recently found that using proxies within a Jupyter session might cause problems, because the environment changes certain standard locations such as '''/tmp'''.&lt;br /&gt;
&lt;br /&gt;
For correct functioning, please create the proxy in the following way (example for Virgo):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ /bin/voms-proxy-init --voms virgo:/virgo/ligo --out ./x509up_u$(id -u)&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ export X509_USER_PROXY=./x509up_u$(id -u)&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ ls /cvmfs/ligo.osgstorage.org&lt;br /&gt;
ls: cannot access /cvmfs/ligo.osgstorage.org: Permission denied&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here the proxy cannot be located properly, because the relative path stops being valid once the working directory changes. Therefore we have to put the complete path into the variable:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ pwd&lt;br /&gt;
/nfs/pic.es/user/&amp;lt;letter&amp;gt;/&amp;lt;user&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ export X509_USER_PROXY=/nfs/pic.es/user/&amp;lt;letter&amp;gt;/&amp;lt;user&amp;gt;/x509up_u$(id -u)&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ ls /cvmfs/ligo.osgstorage.org&lt;br /&gt;
frames  powerflux  pycbc  test_access&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Software of particular interest =&lt;br /&gt;
== SageMath ==&lt;br /&gt;
&lt;br /&gt;
[https://www.sagemath.org/ SageMath] is particularly interesting for Cosmology because it allows symbolic calculations, e.g. deriving the equations of motion for the scale factor starting from a customised space-time metric.&lt;br /&gt;
&lt;br /&gt;
=== Standard cosmology examples ===&lt;br /&gt;
&lt;br /&gt;
* The Friedmann equations for the FLRW solution of the Einstein equations.&lt;br /&gt;
You can find the corresponding Notebook in any PIC terminal at '''/data/astro/software/notebooks/FLRW_cosmology.ipynb'''&lt;br /&gt;
&lt;br /&gt;
* The notebook you can find at '''/data/astro/software/notebooks/FLRW_cosmology_solutions.ipynb''' uses known analytical solutions of the FLRW cosmology and produces this image for the evolution of the scale factor:&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot_Sage05.png|300px]]&lt;br /&gt;
&lt;br /&gt;
* The notebook you can find at '''/data/astro/software/notebooks/Interior_Schwarzschild.ipynb''' shows the formalism for the interior Schwarzschild metric and displays the solutions for density and pressure of a static celestial object that is sufficiently larger than its Schwarzschild radius. The pressure for an object with constant density is shown in the image:&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot_Sage06.png|300px]]&lt;br /&gt;
&lt;br /&gt;
=== Enabling SageMath environment in Jupyter ===&lt;br /&gt;
&lt;br /&gt;
If you have never initialized mamba, run:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ /data/astro/software/centos7/conda/mambaforge_4.14.0/bin/mamba init&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ conda config --set auto_activate_base false&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After that you can enable SageMath for its use in a Jupyter notebook session: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ mamba activate /data/astro/software/envs/sage&lt;br /&gt;
(/data/astro/software/envs/sage) [&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ python -m  ipykernel install --user --name=sage&lt;br /&gt;
....&lt;br /&gt;
(/data/astro/software/envs/sage) [&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ mamba deactivate&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This creates a file in your home, '''~/.local/share/jupyter/kernels/sage/kernel.json''',&lt;br /&gt;
which has to be modified to look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
 &amp;quot;argv&amp;quot;: [&lt;br /&gt;
  &amp;quot;/data/astro/software/envs/sage/bin/sage&amp;quot;,&lt;br /&gt;
  &amp;quot;--python&amp;quot;,&lt;br /&gt;
  &amp;quot;-m&amp;quot;,&lt;br /&gt;
  &amp;quot;sage.repl.ipython_kernel&amp;quot;,&lt;br /&gt;
  &amp;quot;-f&amp;quot;,&lt;br /&gt;
  &amp;quot;{connection_file}&amp;quot;&lt;br /&gt;
 ],&lt;br /&gt;
 &amp;quot;display_name&amp;quot;: &amp;quot;sage&amp;quot;,&lt;br /&gt;
 &amp;quot;language&amp;quot;: &amp;quot;sage&amp;quot;,&lt;br /&gt;
 &amp;quot;metadata&amp;quot;: {&lt;br /&gt;
  &amp;quot;debugger&amp;quot;: true&lt;br /&gt;
 }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Next time you go to your Jupyter dashboard you will find the sage environment listed there.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Dask =&lt;br /&gt;
&lt;br /&gt;
A notebook with instructions on how to run Dask at PIC can be found [https://gitlab.pic.es/services/code-samples/-/blob/main/computing/dask/dask_htcondor.ipynb here]&lt;br /&gt;
&lt;br /&gt;
= Using a singularity image as a jupyter kernel =&lt;br /&gt;
&lt;br /&gt;
Sometimes the software stack of a project may be provided in the form of a singularity image. In that case it is convenient to use this image as a kernel for the notebooks in jupyter.pic.es.&lt;br /&gt;
&lt;br /&gt;
The singularity image to be used as a kernel needs to fulfil some requirements. Different requirements apply depending on the programming language.&lt;br /&gt;
&lt;br /&gt;
== python jupyter kernel in a singularity image ==&lt;br /&gt;
&lt;br /&gt;
The singularity image needs to have '''python''' and the '''ipykernel''' module installed.&lt;br /&gt;
&lt;br /&gt;
* Create the folder that will host the kernel definition&lt;br /&gt;
&lt;br /&gt;
  mkdir -p $HOME/.local/share/jupyter/kernels/singularity&lt;br /&gt;
&lt;br /&gt;
* Create the '''kernel.json''' file inside it with the following content:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 {&lt;br /&gt;
   &amp;quot;argv&amp;quot;: [&lt;br /&gt;
     &amp;quot;singularity&amp;quot;,&lt;br /&gt;
     &amp;quot;exec&amp;quot;,&lt;br /&gt;
     &amp;quot;--cleanenv&amp;quot;,&lt;br /&gt;
     &amp;quot;/path/to/the/singularity/image.sif&amp;quot;,&lt;br /&gt;
     &amp;quot;python&amp;quot;,&lt;br /&gt;
     &amp;quot;-m&amp;quot;,&lt;br /&gt;
     &amp;quot;ipykernel&amp;quot;,&lt;br /&gt;
     &amp;quot;-f&amp;quot;,&lt;br /&gt;
     &amp;quot;{connection_file}&amp;quot;&lt;br /&gt;
   ],&lt;br /&gt;
   &amp;quot;language&amp;quot;: &amp;quot;python&amp;quot;,&lt;br /&gt;
   &amp;quot;display_name&amp;quot;: &amp;quot;singularity-kernel&amp;quot;&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Refresh or restart the jupyterlab interface and the singularity kernel should appear in the launcher tab.&lt;br /&gt;
&lt;br /&gt;
= GPUs =&lt;br /&gt;
&lt;br /&gt;
The way to identify the GPUs that are assigned to your job is:&lt;br /&gt;
* Check the environment variable CUDA_VISIBLE_DEVICES: in a terminal, run &amp;quot;echo $CUDA_VISIBLE_DEVICES&amp;quot;. The variable contains a list of comma-separated GPU ids, which already tells you how many GPUs are assigned to your job. If the variable does not exist, there are no GPUs assigned to the job.&lt;br /&gt;
&lt;br /&gt;
* List the GPUs with nvidia-smi: in a terminal, run &amp;quot;nvidia-smi -L&amp;quot; and look for the GPUs you have been assigned. Remember their indexes (integers from 0 to 7).&lt;br /&gt;
&lt;br /&gt;
* If you have a single assigned GPU, the two steps above can be combined into the following command:&lt;br /&gt;
&lt;br /&gt;
 nvidia-smi -L | grep $CUDA_VISIBLE_DEVICES&lt;br /&gt;
&lt;br /&gt;
[[File:check_gpu_id_highlighted.png]]&lt;br /&gt;
&lt;br /&gt;
* In the GPU dashboard the GPUs are identified by their index.&lt;br /&gt;
&lt;br /&gt;
[[File:check_gpu_resources_highlighted.png]]&lt;br /&gt;
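As a small illustrative sketch (not an official PIC tool), the comma-separated contents of CUDA_VISIBLE_DEVICES can be turned into a list of GPU indexes in Python:&lt;br /&gt;

```python
import os

def assigned_gpu_ids(env=None):
    """Return the GPU ids assigned to the job.

    CUDA_VISIBLE_DEVICES holds comma-separated ids; if it is unset or
    empty, no GPUs are assigned and an empty list is returned.
    """
    env = os.environ if env is None else env
    value = env.get("CUDA_VISIBLE_DEVICES", "")
    return [int(part) for part in value.split(",") if part.strip()]

# With CUDA_VISIBLE_DEVICES="1,3" this returns [1, 3]; when unset, [].
```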
&lt;br /&gt;
= Jupyterlab user guide =&lt;br /&gt;
&lt;br /&gt;
You can find the official documentation of the currently installed version of jupyterlab [https://jupyterlab.readthedocs.io/en/4.2.x/ here]. There you will find instructions on how to&lt;br /&gt;
* [https://jupyterlab.readthedocs.io/en/4.2.x/user/commands.html Access the command palette]&lt;br /&gt;
* [https://jupyterlab.readthedocs.io/en/4.2.x/user/toc.html Build a Table Of Contents]&lt;br /&gt;
* [https://jupyterlab.readthedocs.io/en/4.2.x/user/debugger.html Debug your code]&lt;br /&gt;
&lt;br /&gt;
A set of non-official jupyterlab extensions is installed to provide additional functionality:&lt;br /&gt;
&lt;br /&gt;
== jupytext ==&lt;br /&gt;
Pair your notebooks with text files to enhance version tracking.&lt;br /&gt;
https://jupytext.readthedocs.io&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
&lt;br /&gt;
Suppose you had a notebook (.ipynb file) containing only the cell below, tracked in a git repository:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%matplotlib inline&lt;br /&gt;
import numpy as np&lt;br /&gt;
import matplotlib.pyplot as plt&lt;br /&gt;
plt.imshow(np.random.random([10, 10]))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Different executions of the cell produce different images, and the images are embedded in a pseudo-binary format inside the notebook file. Therefore, a '''git diff''' of the .ipynb file would produce a huge output (because the image changed) even if there was no change in the code. It is thus convenient to sync the notebook with a text file (e.g. a .py script) using the jupytext extension and to track that file with git. The outputs, including images, as well as some additional metadata, are not added to the synced text file, so for different executions of the same notebook the diff will always be empty.&lt;br /&gt;
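For illustration, the text file that jupytext would pair with the notebook above (using its py:percent format) looks roughly like the sketch below; the exact header metadata and the handling of magics depend on the jupytext version:&lt;br /&gt;

```python
# %%
# %matplotlib inline   <- jupytext writes notebook magics as comments
import numpy as np
import matplotlib.pyplot as plt

plt.imshow(np.random.random([10, 10]))
```

The image output is not stored here, so re-running the notebook leaves this file unchanged and the git diff stays empty.&lt;br /&gt;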
&lt;br /&gt;
== git ==&lt;br /&gt;
Sidebar GUI for git repository management:&lt;br /&gt;
https://github.com/jupyterlab/jupyterlab-git&lt;br /&gt;
&lt;br /&gt;
== variable inspector ==&lt;br /&gt;
Variable inspection à la Matlab&lt;br /&gt;
https://github.com/jupyterlab-contrib/jupyterlab-variableInspector&lt;br /&gt;
&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
= Code samples =&lt;br /&gt;
&lt;br /&gt;
A repository with sample code can be found here: https://gitlab.pic.es/services/code-samples/&lt;br /&gt;
&lt;br /&gt;
= Troubleshooting =&lt;br /&gt;
&lt;br /&gt;
== Logs ==&lt;br /&gt;
&lt;br /&gt;
The log files for the jupyterlab server are stored in &amp;quot;~/.jupyter&amp;quot;. The log files are created once the jupyterlab server job has finished.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Clean workspaces ==&lt;br /&gt;
&lt;br /&gt;
Jupyterlab stores the workspace status in the &amp;quot;~/.jupyter/lab/workspaces&amp;quot; folder. If you want to start with a fresh (empty) workspace, delete all the content of this folder before launching the notebook.&lt;br /&gt;
&lt;br /&gt;
    cd ~/.jupyter/lab/workspaces&lt;br /&gt;
    rm *&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== 504 Gateway timeout ==&lt;br /&gt;
&lt;br /&gt;
The notebook job is running in HTCondor but the user can not access the notebook server. Ultimately a 504 error is received.&lt;br /&gt;
&lt;br /&gt;
This probably means there was an error when starting the jupyterlab server. First of all, shut down the notebook server and [[#Logs|check the logs]] to better identify the problem. If you don't see the source of the error, try to [[#Clean_workspaces|clean the workspaces]] and launch a notebook again.&lt;br /&gt;
&lt;br /&gt;
== Install ROOT ==&lt;br /&gt;
&lt;br /&gt;
Environment creation and kernel installation&lt;br /&gt;
&lt;br /&gt;
    $ micromamba create -p /data/pic/scratch/torradeflot/envs/mcdata root ipykernel&lt;br /&gt;
    $ micromamba activate mcdata&lt;br /&gt;
    $ python -m ipykernel install --user --name mcdata&lt;br /&gt;
&lt;br /&gt;
After doing this, ROOT can be imported from a python shell, but it does not work from a notebook.&lt;br /&gt;
&lt;br /&gt;
* ROOT uses JIT compilation with Cling: https://root.cern/cling/&lt;br /&gt;
* conda provides its own set of compilation tools: gcc, gxx, fortran&lt;br /&gt;
* Some environment variables are necessary to ensure that these two pieces work well together:&lt;br /&gt;
** PATH: to be able to find the compiler tools&lt;br /&gt;
** CONDA_BUILD_SYSROOT: needed to configure the compiler call&lt;br /&gt;
&lt;br /&gt;
These environment variables are not propagated to the notebook, because they are set by .bashrc, conda activate, etc., so we need to set them explicitly in the notebook:&lt;br /&gt;
&lt;br /&gt;
     import os&lt;br /&gt;
     import sys&lt;br /&gt;
     from pathlib import Path&lt;br /&gt;
&lt;br /&gt;
     bin_dir = Path(sys.executable).parent&lt;br /&gt;
     os.environ['PATH'] = f&amp;quot;{bin_dir}:{os.environ['PATH']}&amp;quot;&lt;br /&gt;
     os.environ['CONDA_BUILD_SYSROOT'] = str(bin_dir.parent / 'x86_64-conda-linux-gnu/sysroot')&lt;br /&gt;
&lt;br /&gt;
== Loading libraries is very slow ==&lt;br /&gt;
Conda environments stored on hard-drive-based storage can be extremely slow to load. If you encounter this problem, please ask&lt;br /&gt;
your support contact for a &amp;quot;SSD scratch&amp;quot; location to store environments. Currently this service is being tested and&lt;br /&gt;
deployed as needed.&lt;/div&gt;</summary>
		<author><name>Eriksen</name></author>
	</entry>
	<entry>
		<id>https://pwiki.pic.es/index.php?title=JupyterHub&amp;diff=1194</id>
		<title>JupyterHub</title>
		<link rel="alternate" type="text/html" href="https://pwiki.pic.es/index.php?title=JupyterHub&amp;diff=1194"/>
		<updated>2025-02-20T19:36:56Z</updated>

		<summary type="html">&lt;p&gt;Eriksen: /* Troubleshooting */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
= Introduction =&lt;br /&gt;
&lt;br /&gt;
PIC offers a service for running Jupyter notebooks on CPU or GPU resources. This service is primarily thought for code developing or prototyping rather than data processing. The usage is similar to running notebooks on your personal computer but offers the advantage of developing and testing your code on different hardware configurations, as well as facilitating the scalability of the code since it is being tested in the same environment in which it would run on a mass scale. &lt;br /&gt;
&lt;br /&gt;
Since the service is strictly thought for development and small scale testing tasks, a shutdown policy for the sessions has been put in place:&lt;br /&gt;
&lt;br /&gt;
# The maximum duration for a session is 48h.&lt;br /&gt;
# After an idle period of 2 hours, the session will be closed. &lt;br /&gt;
&lt;br /&gt;
In practice that  means that you should estimate the test data volume that you work with during a session to be able to be processed in less than 48 hours.&lt;br /&gt;
&lt;br /&gt;
== How to connect to the service ==&lt;br /&gt;
&lt;br /&gt;
Got to [https://jupyter.pic.es jupyter.pic.es] to see your login screen.&lt;br /&gt;
&lt;br /&gt;
[[File:login.png|700px|Login screen]]&lt;br /&gt;
&lt;br /&gt;
Sign in with your PIC user credentials. This will prompt you to the following screen.&lt;br /&gt;
&lt;br /&gt;
[[File:ScreenshotJupyterSpawn.png|700px|current]]&lt;br /&gt;
&lt;br /&gt;
Here you can choose the hardware configuration for your Jupyter session. You also have to choose the experiment (project) you will be working on during the session. After choosing a configuration and pressing start, the next screen will show you the progress of the initialisation process. Keep in mind that a job containing your Jupyter session is actually sent to the HTCondor queuing system and waits for available resources before being started. This usually takes less than a minute but can take up to a few minutes depending on overall resource usage.&lt;br /&gt;
&lt;br /&gt;
[[File:screen02.png|900px]]&lt;br /&gt;
&lt;br /&gt;
In the next screen you can choose the tool that you want to use for your work: a Python notebook, a Python console or a plain bash terminal.&lt;br /&gt;
For the Python environment (either notebook or console) you have two default options:&lt;br /&gt;
* the ipykernel version of Python 3&lt;br /&gt;
* the XPython version of Python 3.9, which allows you to use the integrated debugging module.&lt;br /&gt;
&lt;br /&gt;
Furthermore, you will see an icon with a &amp;quot;D&amp;quot; (desktop), which starts a VNC session that allows the use of programs with graphical user interfaces.&lt;br /&gt;
&lt;br /&gt;
More recently, an icon for Visual Studio, an integrated development environment, has been added as well.&lt;br /&gt;
&lt;br /&gt;
[[File:ScreenshotJupyterlab20231103.png|700px]]&lt;br /&gt;
&lt;br /&gt;
Your Python environments should appear under the Notebook and Console headers. In a later section we will show you how to create a new environment and how to remove an existing one.&lt;br /&gt;
&lt;br /&gt;
== Terminate your session and logout ==&lt;br /&gt;
&lt;br /&gt;
It is important that you terminate your session before you log out. In order to do so, go to the top page menu &amp;quot;'''File''' -&amp;gt; '''Hub Control Panel'''&amp;quot; and you will see the following screen.&lt;br /&gt;
&lt;br /&gt;
[[File:screen04.png|600px]]&lt;br /&gt;
&lt;br /&gt;
Here, click on the '''Stop My Server''' button. After that you can log out by clicking the '''Logout''' button in the right upper corner.&lt;br /&gt;
&lt;br /&gt;
=  Python virtual environments =&lt;br /&gt;
&lt;br /&gt;
This section covers the use of Python virtual environments with Jupyter.&lt;br /&gt;
&lt;br /&gt;
== Initialize conda (we highly recommend the use of miniforge/mambaforge) ==&lt;br /&gt;
&lt;br /&gt;
Before using conda/mamba in your bash session, you have to initialize it.&lt;br /&gt;
* For access to an available conda/mamba installation, please get in contact with your project liaison at PIC, who will give you the actual value for the '''/path/to/mambaforge''' placeholder.&lt;br /&gt;
* If you want to use your own conda/mamba installation, we recommend installing the minimal '''miniforge''' distribution (instructions [https://github.com/conda-forge/miniforge here]) or [https://github.com/mamba-org/mamba mamba/micromamba].&lt;br /&gt;
&lt;br /&gt;
Log onto Jupyter and start a session. On the homepage of your Jupyter session, click on the terminal button on the session dashboard on the right to open a bash terminal. If no specific version is needed you can use the link provided in the example.&lt;br /&gt;
&lt;br /&gt;
First, let's initialize conda for our bash sessions:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ /data/astro/software/alma9/conda/miniforge-24.1.2/bin/mamba init&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This modifies the .bashrc file in your home directory so that the base environment is activated on login.&lt;br /&gt;
To prevent the base environment from being activated every time you log on to a node, run:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ conda config --set auto_activate_base false&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, in order to activate the base environment you will have to run this command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ eval &amp;quot;$(/data/astro/software/alma9/conda/miniforge-24.1.2/bin/conda shell.bash hook)&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For now you can exit the terminal.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ exit&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Link an existing environment to Jupyter ==&lt;br /&gt;
&lt;br /&gt;
You can find instructions on how to create your own environments, e.g. [[#Create_virtual_environments_with_venv_or_conda | here]].&lt;br /&gt;
&lt;br /&gt;
Log into Jupyter, start a session. From the session dashboard choose the bash terminal.&lt;br /&gt;
&lt;br /&gt;
Inside the terminal, activate your environment.&lt;br /&gt;
&lt;br /&gt;
For '''conda/mamba''' environments:&lt;br /&gt;
* if you created the environment without a prefix:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba activate environment&lt;br /&gt;
(...) [neissner@td110 ~]$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The parentheses (...) in front of your bash prompt show the name of your environment.&lt;br /&gt;
* if you created the environment with a prefix:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba activate /path/to/environment&lt;br /&gt;
(...) [neissner@td110 ~]$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The parentheses (...) in front of your bash prompt show the absolute path of your environment.&lt;br /&gt;
&lt;br /&gt;
For '''venv''' environments:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ source /path/to/environment/bin/activate&lt;br /&gt;
(...) [neissner@td110 ~]$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Link the environment to a Jupyter kernel. For both '''conda/mamba''' and '''venv''':&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(...) [neissner@td110 ~]$ python -m  ipykernel install --user --name=whatever_kernel_name&lt;br /&gt;
Installed kernelspec whatever_kernel_name in &lt;br /&gt;
                         /nfs/pic.es/user/n/neissner/.local/share/jupyter/kernels/whatever_kernel_name&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you don't have the '''ipykernel''' module installed in your environment you may receive an error message like the one below when trying to run the previous command.&lt;br /&gt;
&amp;lt;pre&amp;gt;No module named ipykernel&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If this is the case, you need to install it by running: '''pip install ipykernel'''&lt;br /&gt;
&lt;br /&gt;
Deactivate your environment. &lt;br /&gt;
&lt;br /&gt;
For conda:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(...) [neissner@td110 ~]$ mamba deactivate&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For venv:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(...) [neissner@td110 ~]$ deactivate&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you can exit the terminal. After refreshing the Jupyter page, your whatever_kernel_name kernel appears in the dashboard. In this example, '''test''' has been used for whatever_kernel_name.&lt;br /&gt;
&lt;br /&gt;
[[File:screen05.png|700px]]&lt;br /&gt;
&lt;br /&gt;
== Unlink an environment from Jupyter ==&lt;br /&gt;
Log onto Jupyter, start a session and from the session dashboard choose the bash terminal. To remove your environment/kernel from Jupyter run:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ jupyter kernelspec uninstall whatever_kernel_name&lt;br /&gt;
Kernel specs to remove:&lt;br /&gt;
  whatever_kernel_name     /nfs/pic.es/user/n/neissner/.local/share/jupyter/kernels/whatever_kernel_name&lt;br /&gt;
Remove 1 kernel specs [y/N]: y&lt;br /&gt;
[RemoveKernelSpec] Removed /nfs/pic.es/user/n/neissner/.local/share/jupyter/kernels/whatever_kernel_name&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Keep in mind that, although not available in Jupyter anymore, the environment still exists. Whenever you need it, you can link it again.&lt;br /&gt;
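&lt;br /&gt;
To verify the result, you can list the kernels currently linked to Jupyter with the standard kernelspec command; the output shows each kernel name and its location:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ jupyter kernelspec list&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;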
&lt;br /&gt;
== Create virtual environments with venv or conda ==&lt;br /&gt;
&lt;br /&gt;
Before creating a new environment, please get in contact with your project liaison at PIC, as there may already be a suitable environment for your needs in place.&lt;br /&gt;
&lt;br /&gt;
If none of the existing environments suits your needs, you can create a new environment.&lt;br /&gt;
First, create a directory in a suitable place to store the environment. For single-user environments, place them in your home under ~/env. For environments that will be shared with other project users, contact your project liaison and ask them for a path in a shared storage volume that is visible to all of them.&lt;br /&gt;
&lt;br /&gt;
Once you have the location (e.g. /path/to/env/folder), create the environment with the following commands:&lt;br /&gt;
&lt;br /&gt;
For '''venv''' environments '''(recommended)'''&lt;br /&gt;
&lt;br /&gt;
To create your_env at /path/to/env/your_env:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ cd /path/to/env&lt;br /&gt;
[neissner@td110 ~]$ python3 -m venv your_env&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Now you should be able to activate your environment and install additional modules&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ cd /path/to/env&lt;br /&gt;
[neissner@td110 ~]$ source your_env/bin/activate&lt;br /&gt;
(...)[neissner@td110 ~]$ pip install additional_module1 additional_module2 ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For '''conda/mamba''' environments&lt;br /&gt;
 &amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba create --prefix /path/to/env/your_env module1 module2 ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The list of modules (module1, module2, ...) is optional. For instance, for a python3 environment with scipy you would specify: ''python=3 scipy'' &lt;br /&gt;
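&lt;br /&gt;
For example, to create a python3 environment with scipy (a sketch, adapt the prefix to your own path):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba create --prefix /path/to/env/your_env python=3 scipy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;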
&lt;br /&gt;
Now you should be able to activate your environment and install additional modules&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba activate /path/to/env/your_env&lt;br /&gt;
(...)[neissner@td110 ~]$ mamba install additional_module1 additional_module2 ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You can use pip install inside a mamba environment; however, resolving dependencies might require installing additional packages manually.&lt;br /&gt;
&lt;br /&gt;
== conda / mamba configuration ==&lt;br /&gt;
&lt;br /&gt;
The behaviour of conda/mamba can be configured through the &amp;quot;$HOME/.condarc&amp;quot; file, described [https://docs.conda.io/projects/conda/en/latest/configuration.html here]. Some interesting parameters:&lt;br /&gt;
&lt;br /&gt;
* envs_dirs: The list of directories to search for named environments. E.g.: different locations where you created environments&lt;br /&gt;
&lt;br /&gt;
  envs_dirs:&lt;br /&gt;
    - /data/pic/scratch/torradeflot/envs&lt;br /&gt;
    - /data/astro/scratch/torradeflot/envs&lt;br /&gt;
    - /data/aai/scratch/torradeflot/envs&lt;br /&gt;
&lt;br /&gt;
* pkgs_dirs: Folder where to store conda packages&lt;br /&gt;
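&lt;br /&gt;
A minimal &amp;quot;$HOME/.condarc&amp;quot; sketch combining both parameters (the paths are placeholders, use your own locations):&lt;br /&gt;
&lt;br /&gt;
  envs_dirs:&lt;br /&gt;
    - /path/to/your/envs&lt;br /&gt;
  pkgs_dirs:&lt;br /&gt;
    - /path/to/your/conda_pkgs&lt;br /&gt;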
&lt;br /&gt;
= Proper usage of X509 based proxies =&lt;br /&gt;
&lt;br /&gt;
We recently found that using X509 proxies within a Jupyter session might cause problems, because the session environment changes certain standard locations such as '''/tmp'''.&lt;br /&gt;
&lt;br /&gt;
For the proxy to work correctly, please create it in the following way (example for Virgo):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ /bin/voms-proxy-init --voms virgo:/virgo/ligo --out ./x509up_u$(id -u)&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ export X509_USER_PROXY=./x509up_u$(id -u)&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ ls /cvmfs/ligo.osgstorage.org&lt;br /&gt;
ls: cannot access /cvmfs/ligo.osgstorage.org: Permission denied&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here the proxy cannot be located properly, because the relative path does not resolve correctly. Therefore, put the complete (absolute) path into the variable:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ pwd&lt;br /&gt;
/nfs/pic.es/user/&amp;lt;letter&amp;gt;/&amp;lt;user&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ export X509_USER_PROXY=/nfs/pic.es/user/&amp;lt;letter&amp;gt;/&amp;lt;user&amp;gt;/x509up_u$(id -u)&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ ls /cvmfs/ligo.osgstorage.org&lt;br /&gt;
frames  powerflux  pycbc  test_access&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
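&lt;br /&gt;
Since your home directory lives under /nfs/pic.es/user/..., the two steps can also be written generically with $HOME (a sketch, equivalent to the commands above):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ /bin/voms-proxy-init --voms virgo:/virgo/ligo --out $HOME/x509up_u$(id -u)&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ export X509_USER_PROXY=$HOME/x509up_u$(id -u)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;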
&lt;br /&gt;
= Software of particular interest =&lt;br /&gt;
== SageMath ==&lt;br /&gt;
&lt;br /&gt;
[https://www.sagemath.org/ SageMath] is particularly interesting for cosmology because it allows symbolic calculations, e.g. deriving the equations of motion for the scale factor starting from a customised space-time metric. &lt;br /&gt;
&lt;br /&gt;
=== Standard cosmology examples ===&lt;br /&gt;
&lt;br /&gt;
* The Friedmann equations for the FLRW solution of the Einstein equations.&lt;br /&gt;
You can find the corresponding Notebook in any PIC terminal at '''/data/astro/software/notebooks/FLRW_cosmology.ipynb'''&lt;br /&gt;
&lt;br /&gt;
* The notebook you can find at '''/data/astro/software/notebooks/FLRW_cosmology_solutions.ipynb''' uses known analytical solutions of the FLRW cosmology and produces this image for the evolution of the scale factor:&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot_Sage05.png|300px]]&lt;br /&gt;
&lt;br /&gt;
* The notebook you can find at '''/data/astro/software/notebooks/Interior_Schwarzschild.ipynb''' shows the formalism for the interior Schwarzschild metric and displays the solutions for density and pressure of a static celestial object that is significantly larger than its Schwarzschild radius. The pressure for an object with constant density is shown in the image:&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot_Sage06.png|300px]]&lt;br /&gt;
&lt;br /&gt;
=== Enabling SageMath environment in Jupyter ===&lt;br /&gt;
&lt;br /&gt;
If you have never initialized mamba, run:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ /data/astro/software/centos7/conda/mambaforge_4.14.0/bin/mamba init&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ conda config --set auto_activate_base false&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After that you can enable SageMath for its use in a Jupyter notebook session: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ mamba activate /data/astro/software/envs/sage&lt;br /&gt;
(/data/astro/software/envs/sage) [&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ python -m  ipykernel install --user --name=sage&lt;br /&gt;
....&lt;br /&gt;
(/data/astro/software/envs/sage) [&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ mamba deactivate&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This creates a file in your home '''~/.local/share/jupyter/kernels/sage/kernel.json'''&lt;br /&gt;
which has to be modified to look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
 &amp;quot;argv&amp;quot;: [&lt;br /&gt;
  &amp;quot;/data/astro/software/envs/sage/bin/sage&amp;quot;,&lt;br /&gt;
  &amp;quot;--python&amp;quot;,&lt;br /&gt;
  &amp;quot;-m&amp;quot;,&lt;br /&gt;
  &amp;quot;sage.repl.ipython_kernel&amp;quot;,&lt;br /&gt;
  &amp;quot;-f&amp;quot;,&lt;br /&gt;
  &amp;quot;{connection_file}&amp;quot;&lt;br /&gt;
 ],&lt;br /&gt;
 &amp;quot;display_name&amp;quot;: &amp;quot;sage&amp;quot;,&lt;br /&gt;
 &amp;quot;language&amp;quot;: &amp;quot;sage&amp;quot;,&lt;br /&gt;
 &amp;quot;metadata&amp;quot;: {&lt;br /&gt;
  &amp;quot;debugger&amp;quot;: true&lt;br /&gt;
 }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Next time you go to your Jupyter dashboard you will find the sage environment listed there.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Dask =&lt;br /&gt;
&lt;br /&gt;
A notebook with instructions on how to run Dask at PIC can be found [https://gitlab.pic.es/services/code-samples/-/blob/main/computing/dask/dask_htcondor.ipynb here]&lt;br /&gt;
&lt;br /&gt;
= Using a singularity image as a jupyter kernel =&lt;br /&gt;
&lt;br /&gt;
Sometimes the software stack of a project is provided in the form of a Singularity image. In that case, it is convenient to use this image as a kernel for notebooks on jupyter.pic.es.&lt;br /&gt;
&lt;br /&gt;
The singularity image to be used as a kernel needs to fulfill some requirements. Different requirements apply depending on the programming language.&lt;br /&gt;
&lt;br /&gt;
== python jupyter kernel in a singularity image ==&lt;br /&gt;
&lt;br /&gt;
The singularity image needs to have '''python''' and the '''ipykernel''' module installed.&lt;br /&gt;
&lt;br /&gt;
* Create the folder that will host the kernel definition&lt;br /&gt;
&lt;br /&gt;
  mkdir -p $HOME/.local/share/jupyter/kernels/singularity&lt;br /&gt;
&lt;br /&gt;
* Create the '''kernel.json''' file inside it with the following content:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 {&lt;br /&gt;
   &amp;quot;argv&amp;quot;: [&lt;br /&gt;
     &amp;quot;singularity&amp;quot;,&lt;br /&gt;
     &amp;quot;exec&amp;quot;,&lt;br /&gt;
     &amp;quot;--cleanenv&amp;quot;,&lt;br /&gt;
     &amp;quot;/path/to/the/singularity/image.sif&amp;quot;,&lt;br /&gt;
     &amp;quot;python&amp;quot;,&lt;br /&gt;
     &amp;quot;-m&amp;quot;,&lt;br /&gt;
     &amp;quot;ipykernel&amp;quot;,&lt;br /&gt;
     &amp;quot;-f&amp;quot;,&lt;br /&gt;
     &amp;quot;{connection_file}&amp;quot;&lt;br /&gt;
   ],&lt;br /&gt;
   &amp;quot;language&amp;quot;: &amp;quot;python&amp;quot;,&lt;br /&gt;
   &amp;quot;display_name&amp;quot;: &amp;quot;singularity-kernel&amp;quot;&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Refresh or start the jupyterlab interface and the singularity kernel should appear in the launcher tab.&lt;br /&gt;
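&lt;br /&gt;
To check beforehand that the image fulfills the requirements, you can try importing ipykernel inside the container (a sketch; replace the image path with your own):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ singularity exec /path/to/the/singularity/image.sif python -c 'import ipykernel'&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If this command exits without errors, the image can be used as a python kernel.&lt;br /&gt;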
&lt;br /&gt;
= GPUs =&lt;br /&gt;
&lt;br /&gt;
The way to identify the GPUs that are assigned to your job is:&lt;br /&gt;
* Check the environment variable CUDA_VISIBLE_DEVICES: in a terminal, run &amp;quot;echo $CUDA_VISIBLE_DEVICES&amp;quot;. The variable contains a comma-separated list of GPU ids, which tells you how many GPUs are assigned to your job. If the variable does not exist, there are no GPUs assigned to the job.&lt;br /&gt;
&lt;br /&gt;
* List the GPUs with nvidia-smi: in a terminal, run &amp;quot;nvidia-smi -L&amp;quot; and look for the GPUs you have been assigned. Remember their indexes (integers from 0 to 7).&lt;br /&gt;
&lt;br /&gt;
* The two steps above can be done with the following command:&lt;br /&gt;
&lt;br /&gt;
nvidia-smi -L | grep $CUDA_VISIBLE_DEVICES&lt;br /&gt;
&lt;br /&gt;
This works when only a single GPU is assigned to the job.&lt;br /&gt;
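&lt;br /&gt;
With several assigned GPUs, a small loop over the ids does the same job (a sketch; it relies on the &amp;quot;GPU &amp;lt;index&amp;gt;:&amp;quot; prefix in the nvidia-smi -L output):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
for id in $(echo $CUDA_VISIBLE_DEVICES | tr ',' ' '); do&lt;br /&gt;
    nvidia-smi -L | grep &amp;quot;GPU $id:&amp;quot;&lt;br /&gt;
done&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;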
&lt;br /&gt;
[[File:check_gpu_id_highlighted.png]]&lt;br /&gt;
&lt;br /&gt;
* In the GPU dashboard, the GPUs are identified by their index.&lt;br /&gt;
&lt;br /&gt;
[[File:check_gpu_resources_highlighted.png]]&lt;br /&gt;
&lt;br /&gt;
= Jupyterlab user guide =&lt;br /&gt;
&lt;br /&gt;
You can find the official documentation of the currently installed version of jupyterlab (3.6) [https://jupyterlab.readthedocs.io/en/4.2.x/ here]. There you will find instructions on how to:&lt;br /&gt;
* [https://jupyterlab.readthedocs.io/en/4.2.x/user/commands.html Access the command palette]&lt;br /&gt;
* [https://jupyterlab.readthedocs.io/en/4.2.x/user/toc.html Build a Table Of Contents]&lt;br /&gt;
* [https://jupyterlab.readthedocs.io/en/4.2.x/user/debugger.html Debug your code]&lt;br /&gt;
&lt;br /&gt;
A set of unofficial jupyterlab extensions is installed to provide additional functionality.&lt;br /&gt;
&lt;br /&gt;
== jupytext ==&lt;br /&gt;
Pair your notebooks with text files to enhance version tracking.&lt;br /&gt;
https://jupytext.readthedocs.io&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
&lt;br /&gt;
If you had a notebook (.ipynb file) containing only the cell below, tracked in a git repository&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%matplotlib inline&lt;br /&gt;
import numpy as np&lt;br /&gt;
import matplotlib.pyplot as plt&lt;br /&gt;
plt.imshow(np.random.random([10, 10]))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Different executions of the cell would produce different images, and the images are embedded in a pseudo-binary format inside the notebook file. In this case, doing a '''git diff''' of the .ipynb file would produce a huge output (because the image changed), even if there wasn't any change in the code. It is thus convenient to sync the notebook with a text file (e.g. a .py script) using the jupytext extension and track this one with git. The outputs, including images, as well as some additional metadata, won't be added to the synced text file. So in the case of different executions of the same notebook, the diff will always be empty.&lt;br /&gt;
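&lt;br /&gt;
For the notebook cell above, the paired text file produced by jupytext (in its percent format) would look roughly like this, with the magic commented out and no outputs stored:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# %%&lt;br /&gt;
# %matplotlib inline&lt;br /&gt;
import numpy as np&lt;br /&gt;
import matplotlib.pyplot as plt&lt;br /&gt;
plt.imshow(np.random.random([10, 10]))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;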
&lt;br /&gt;
== git ==&lt;br /&gt;
Sidebar GUI for git repository management.&lt;br /&gt;
https://github.com/jupyterlab/jupyterlab-git&lt;br /&gt;
&lt;br /&gt;
== variable inspector ==&lt;br /&gt;
Variable inspection à la Matlab&lt;br /&gt;
https://github.com/jupyterlab-contrib/jupyterlab-variableInspector&lt;br /&gt;
&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
= Code samples =&lt;br /&gt;
&lt;br /&gt;
A repository with sample code can be found here: https://gitlab.pic.es/services/code-samples/&lt;br /&gt;
&lt;br /&gt;
= Troubleshooting =&lt;br /&gt;
&lt;br /&gt;
== Logs ==&lt;br /&gt;
&lt;br /&gt;
The log files for the jupyterlab server are stored in &amp;quot;~/.jupyter&amp;quot;. They are created once the jupyterlab server job finishes.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Clean workspaces ==&lt;br /&gt;
&lt;br /&gt;
Jupyterlab stores the workspace status in the &amp;quot;~/.jupyter/lab/workspaces&amp;quot; folder. If you want to start with a fresh (empty) workspace, delete all the content of this folder before launching the notebook.&lt;br /&gt;
&lt;br /&gt;
    cd ~/.jupyter/lab/workspaces&lt;br /&gt;
    rm *&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== 504 Gateway timeout ==&lt;br /&gt;
&lt;br /&gt;
The notebook job is running in HTCondor but the user cannot access the notebook server. Ultimately a 504 error is received.&lt;br /&gt;
&lt;br /&gt;
This is probably because there's some error when starting the jupyterlab server. First of all, shutdown the notebook server and [[#Logs|check the logs]] to better identify the problem. If you don't see the source of the error, try to [[#Clean_workspaces|clean the workspaces]] and launch a notebook again.&lt;br /&gt;
&lt;br /&gt;
== Install ROOT ==&lt;br /&gt;
&lt;br /&gt;
When installing ROOT in a conda environment from a terminal on a worker node, there is a conflict with the value of `LD_LIBRARY_PATH=/opt/hadoop-3.2.3/lib/native:/usr/lib64/` enforced through `/etc/profile.d`. This configuration does not apply to jupyter kernels, so some headers in `/usr/include` cannot be found (e.g. `/usr/include/dlfcn.h`) and ROOT cannot be imported in a notebook.&lt;br /&gt;
&lt;br /&gt;
So, to install ROOT in a conda environment named &amp;quot;mcdata&amp;quot; from a terminal in jupyter.pic.es and enable using it as a kernel, you first have to unset the LD_LIBRARY_PATH variable, that is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
    $ unset LD_LIBRARY_PATH&lt;br /&gt;
    $ micromamba create -p /data/pic/scratch/torradeflot/envs/mcdata root ipykernel&lt;br /&gt;
    $ micromamba activate mcdata&lt;br /&gt;
    $ python -m ipykernel install --user --name mcdata&lt;br /&gt;
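&lt;br /&gt;
To confirm that the conflict is resolved, you can try importing ROOT in the activated environment (a sketch):&lt;br /&gt;
&lt;br /&gt;
    $ python -c 'import ROOT'&lt;br /&gt;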
&lt;br /&gt;
== Loading libraries is very slow ==&lt;br /&gt;
Conda environments stored on hard drives can be extremely slow to load. If you encounter this problem, please ask&lt;br /&gt;
your support contact for an &amp;quot;SSD scratch&amp;quot; location to store your environments. This service is currently being tested and&lt;br /&gt;
deployed as needed.&lt;/div&gt;</summary>
		<author><name>Eriksen</name></author>
	</entry>
	<entry>
		<id>https://pwiki.pic.es/index.php?title=JupyterHub&amp;diff=1193</id>
		<title>JupyterHub</title>
		<link rel="alternate" type="text/html" href="https://pwiki.pic.es/index.php?title=JupyterHub&amp;diff=1193"/>
		<updated>2025-02-20T18:33:30Z</updated>

		<summary type="html">&lt;p&gt;Eriksen: /* Install ROOT */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
= Introduction =&lt;br /&gt;
&lt;br /&gt;
PIC offers a service for running Jupyter notebooks on CPU or GPU resources. This service is primarily intended for code development and prototyping rather than data processing. The usage is similar to running notebooks on your personal computer, but offers the advantage of developing and testing your code on different hardware configurations, as well as easing scale-up, since the code is tested in the same environment in which it would run at scale.&lt;br /&gt;
&lt;br /&gt;
Since the service is strictly intended for development and small-scale testing tasks, a shutdown policy for sessions has been put in place:&lt;br /&gt;
&lt;br /&gt;
# The maximum duration for a session is 48h.&lt;br /&gt;
# After an idle period of 2 hours, the session will be closed. &lt;br /&gt;
&lt;br /&gt;
In practice, this means you should size the test data volume you work with during a session so that it can be processed in less than 48 hours.&lt;br /&gt;
&lt;br /&gt;
== How to connect to the service ==&lt;br /&gt;
&lt;br /&gt;
Go to [https://jupyter.pic.es jupyter.pic.es] to reach the login screen.&lt;br /&gt;
&lt;br /&gt;
[[File:login.png|700px|Login screen]]&lt;br /&gt;
&lt;br /&gt;
Sign in with your PIC user credentials. This will take you to the following screen.&lt;br /&gt;
&lt;br /&gt;
[[File:ScreenshotJupyterSpawn.png|700px|current]]&lt;br /&gt;
&lt;br /&gt;
Here you can choose the hardware configuration for your Jupyter session. You also have to choose the experiment (project) you are working on during the session. After choosing a configuration and pressing start, the next screen will show you the progress of the initialisation process. Keep in mind that a job containing your Jupyter session is actually sent to the HTCondor queuing system and waits for available resources before being started. This usually takes less than a minute but can take up to a few minutes, depending on overall resource usage.&lt;br /&gt;
&lt;br /&gt;
[[File:screen02.png|900px]]&lt;br /&gt;
&lt;br /&gt;
In the next screen you can choose the tool that you want to use for your work: a Python notebook, a Python console or a plain bash terminal.&lt;br /&gt;
For the Python environment (either notebook or console) you have two default options:&lt;br /&gt;
* the ipykernel version of Python 3&lt;br /&gt;
* the XPython version of Python 3.9, which allows you to use the integrated debugging module.&lt;br /&gt;
&lt;br /&gt;
You will also see an icon with a &amp;quot;D&amp;quot; (Desktop), which starts a VNC session that allows the use of programs with graphical user interfaces.&lt;br /&gt;
&lt;br /&gt;
There is also an icon for Visual Studio, an integrated development environment.&lt;br /&gt;
&lt;br /&gt;
[[File:ScreenshotJupyterlab20231103.png|700px]]&lt;br /&gt;
&lt;br /&gt;
Your Python environments should appear under the Notebook and Console headers. A later section shows how to create a new environment and how to remove an existing one.&lt;br /&gt;
&lt;br /&gt;
== Terminate your session and logout ==&lt;br /&gt;
&lt;br /&gt;
It is important that you terminate your session before you log out. In order to do so, go to the top page menu &amp;quot;'''File''' -&amp;gt; '''Hub Control Panel'''&amp;quot; and you will see the following screen.&lt;br /&gt;
&lt;br /&gt;
[[File:screen04.png|600px]]&lt;br /&gt;
&lt;br /&gt;
Here, click on the '''Stop My Server''' button. After that you can log out by clicking the '''Logout''' button in the right upper corner.&lt;br /&gt;
&lt;br /&gt;
=  Python virtual environments =&lt;br /&gt;
&lt;br /&gt;
This section covers the use of Python virtual environments with Jupyter.&lt;br /&gt;
&lt;br /&gt;
== Initialize conda (we highly recommend the use of miniforge/mambaforge) ==&lt;br /&gt;
&lt;br /&gt;
Before using conda/mamba in your bash session, you have to initialize it.&lt;br /&gt;
* For access to an available conda/mamba installation, please get in contact with your project liaison at PIC. They will give you the actual value for the '''/path/to/mambaforge''' placeholder.&lt;br /&gt;
* If you want to use your own conda/mamba installation, we recommend installing the minimal '''miniforge''' distribution (instructions [https://github.com/conda-forge/miniforge here]) or [https://github.com/mamba-org/mamba mamba/micromamba].&lt;br /&gt;
&lt;br /&gt;
Log onto Jupyter and start a session. On the homepage of your Jupyter session, click on the terminal button on the session dashboard on the right to open a bash terminal. If no specific version is needed, you can use the installation path shown in the example below.&lt;br /&gt;
&lt;br /&gt;
First, let's initialize conda for our bash sessions:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ /data/astro/software/alma9/conda/miniforge-24.1.2/bin/mamba init&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This actually changes the .bashrc file in your home directory in order to activate the base environment on login.&lt;br /&gt;
To prevent the base environment from being activated every time you log on to a node, run:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ conda config --set auto_activate_base false&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, in order to activate the base environment you will have to run this command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ eval &amp;quot;$(/data/astro/software/alma9/conda/miniforge-24.1.2/bin/conda shell.bash hook)&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
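&lt;br /&gt;
Since that command is long, you may want to wrap it in an alias in your ~/.bashrc (a convenience sketch; the alias name is arbitrary):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
alias conda-init='eval &amp;quot;$(/data/astro/software/alma9/conda/miniforge-24.1.2/bin/conda shell.bash hook)&amp;quot;'&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;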
&lt;br /&gt;
For now you can exit the terminal.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ exit&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Link an existing environment to Jupyter ==&lt;br /&gt;
&lt;br /&gt;
You can find instructions on how to create your own environments, e.g. [[#Create_virtual_environments_with_venv_or_conda | here]].&lt;br /&gt;
&lt;br /&gt;
Log into Jupyter and start a session. From the session dashboard, choose the bash terminal.&lt;br /&gt;
&lt;br /&gt;
Inside the terminal, activate your environment.&lt;br /&gt;
&lt;br /&gt;
For '''conda/mamba''' environments:&lt;br /&gt;
* if you created the environment without a prefix:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba activate environment&lt;br /&gt;
(...) [neissner@td110 ~]$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The parentheses (...) in front of your bash prompt show the name of your environment. &lt;br /&gt;
* if you created the environment with a prefix:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba activate /path/to/environment&lt;br /&gt;
(...) [neissner@td110 ~]$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The parentheses (...) in front of your bash prompt show the absolute path of your environment. &lt;br /&gt;
&lt;br /&gt;
For '''venv''' environments:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ source /path/to/environment/bin/activate&lt;br /&gt;
(...) [neissner@td110 ~]$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Link the environment to a Jupyter kernel. For both, '''conda/mamba''' and '''venv''':&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(...) [neissner@td110 ~]$ python -m  ipykernel install --user --name=whatever_kernel_name&lt;br /&gt;
Installed kernelspec whatever_kernel_name in &lt;br /&gt;
                         /nfs/pic.es/user/n/neissner/.local/share/jupyter/kernels/whatever_kernel_name&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you don't have the '''ipykernel''' module installed in your environment you may receive an error message like the one below when trying to run the previous command.&lt;br /&gt;
&amp;lt;pre&amp;gt;No module named ipykernel&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If this is the case, you need to install it by running: '''pip install ipykernel'''&lt;br /&gt;
&lt;br /&gt;
Deactivate your environment. &lt;br /&gt;
&lt;br /&gt;
For conda:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(...) [neissner@td110 ~]$ mamba deactivate&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For venv:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(...) [neissner@td110 ~]$ deactivate&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you can exit the terminal. After refreshing the Jupyter page, your whatever_kernel_name kernel appears in the dashboard. In this example, '''test''' has been used as the kernel name.&lt;br /&gt;
&lt;br /&gt;
[[File:screen05.png|700px]]&lt;br /&gt;
&lt;br /&gt;
== Unlink an environment from Jupyter ==&lt;br /&gt;
Log onto Jupyter, start a session and from the session dashboard choose the bash terminal. To remove your environment/kernel from Jupyter run:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ jupyter kernelspec uninstall whatever_kernel_name&lt;br /&gt;
Kernel specs to remove:&lt;br /&gt;
  whatever_kernel_name     /nfs/pic.es/user/n/neissner/.local/share/jupyter/kernels/whatever_kernel_name&lt;br /&gt;
Remove 1 kernel specs [y/N]: y&lt;br /&gt;
[RemoveKernelSpec] Removed /nfs/pic.es/user/n/neissner/.local/share/jupyter/kernels/whatever_kernel_name&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Keep in mind that, although not available in Jupyter anymore, the environment still exists. Whenever you need it, you can link it again.&lt;br /&gt;
&lt;br /&gt;
== Create virtual environments with venv or conda ==&lt;br /&gt;
&lt;br /&gt;
Before creating a new environment, please get in contact with your project liaison at PIC, as there may already be a suitable environment for your needs in place.&lt;br /&gt;
&lt;br /&gt;
If none of the existing environments suits your needs, you can create a new environment.&lt;br /&gt;
First, create a directory in a suitable place to store the environment. For single-user environments, place them in your home under ~/env. For environments that will be shared with other project users, contact your project liaison and ask them for a path in a shared storage volume that is visible to all of them.&lt;br /&gt;
&lt;br /&gt;
Once you have the location (e.g. /path/to/env/folder), create the environment with the following commands:&lt;br /&gt;
&lt;br /&gt;
For '''venv''' environments '''(recommended)'''&lt;br /&gt;
&lt;br /&gt;
To create your_env at /path/to/env/your_env:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ cd /path/to/env&lt;br /&gt;
[neissner@td110 ~]$ python3 -m venv your_env&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Now you should be able to activate your environment and install additional modules&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ cd /path/to/env&lt;br /&gt;
[neissner@td110 ~]$ source your_env/bin/activate&lt;br /&gt;
(...)[neissner@td110 ~]$ pip install additional_module1 additional_module2 ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For '''conda/mamba''' environments&lt;br /&gt;
 &amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba create --prefix /path/to/env/your_env module1 module2 ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The list of modules (module1, module2, ...) is optional. For instance, for a python3 environment with scipy you would specify: ''python=3 scipy'' &lt;br /&gt;
&lt;br /&gt;
Now you should be able to activate your environment and install additional modules&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba activate /path/to/env/your_env&lt;br /&gt;
(...)[neissner@td110 ~]$ mamba install additional_module1 additional_module2 ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You can use pip install inside a mamba environment; however, resolving dependencies might require installing additional packages manually.&lt;br /&gt;
&lt;br /&gt;
== conda / mamba configuration ==&lt;br /&gt;
&lt;br /&gt;
The behaviour of conda/mamba can be configured through the &amp;quot;$HOME/.condarc&amp;quot; file, described [https://docs.conda.io/projects/conda/en/latest/configuration.html here]. Some interesting parameters:&lt;br /&gt;
&lt;br /&gt;
* envs_dirs: The list of directories to search for named environments. E.g.: different locations where you created environments&lt;br /&gt;
&lt;br /&gt;
  envs_dirs:&lt;br /&gt;
    - /data/pic/scratch/torradeflot/envs&lt;br /&gt;
    - /data/astro/scratch/torradeflot/envs&lt;br /&gt;
    - /data/aai/scratch/torradeflot/envs&lt;br /&gt;
&lt;br /&gt;
* pkgs_dirs: Folder where to store conda packages&lt;br /&gt;
&lt;br /&gt;
= Proper usage of X509 based proxies =&lt;br /&gt;
&lt;br /&gt;
We recently found that using X509 proxies within a Jupyter session might cause problems, because the session environment changes certain standard locations such as '''/tmp'''.&lt;br /&gt;
&lt;br /&gt;
For the proxy to work correctly, please create it in the following way (example for Virgo):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ /bin/voms-proxy-init --voms virgo:/virgo/ligo --out ./x509up_u$(id -u)&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ export X509_USER_PROXY=./x509up_u$(id -u)&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ ls /cvmfs/ligo.osgstorage.org&lt;br /&gt;
ls: cannot access /cvmfs/ligo.osgstorage.org: Permission denied&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here the proxy cannot be located properly, because the relative path does not resolve correctly. Therefore, put the complete (absolute) path into the variable:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ pwd&lt;br /&gt;
/nfs/pic.es/user/&amp;lt;letter&amp;gt;/&amp;lt;user&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ export X509_USER_PROXY=/nfs/pic.es/user/&amp;lt;letter&amp;gt;/&amp;lt;user&amp;gt;/x509up_u$(id -u)&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ ls /cvmfs/ligo.osgstorage.org&lt;br /&gt;
frames  powerflux  pycbc  test_access&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Software of particular interest =&lt;br /&gt;
== SageMath ==&lt;br /&gt;
&lt;br /&gt;
[https://www.sagemath.org/ SageMath] is particularly interesting for cosmology because it allows symbolic calculations, e.g. deriving the equations of motion for the scale factor starting from a customised space-time metric. &lt;br /&gt;
&lt;br /&gt;
=== Standard cosmology examples ===&lt;br /&gt;
&lt;br /&gt;
* The Friedmann equations for the FLRW solution of the Einstein equations.&lt;br /&gt;
You can find the corresponding Notebook in any PIC terminal at '''/data/astro/software/notebooks/FLRW_cosmology.ipynb'''&lt;br /&gt;
&lt;br /&gt;
* The notebook you can find at '''/data/astro/software/notebooks/FLRW_cosmology_solutions.ipynb''' uses known analytical solutions of the FLRW cosmology and produces this image for the evolution of the scale factor:&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot_Sage05.png|300px]]&lt;br /&gt;
&lt;br /&gt;
* The notebook you can find at '''/data/astro/software/notebooks/Interior_Schwarzschild.ipynb''' shows the formalism for the interior Schwarzschild metric and displays the solutions for density and pressure of a static celestial object that is significantly larger than its Schwarzschild radius. The pressure for an object with constant density is shown in the image:&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot_Sage06.png|300px]]&lt;br /&gt;
&lt;br /&gt;
=== Enabling SageMath environment in Jupyter ===&lt;br /&gt;
&lt;br /&gt;
If you have never initialized mamba, run:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ /data/astro/software/centos7/conda/mambaforge_4.14.0/bin/mamba init&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ conda config --set auto_activate_base false&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After that you can enable SageMath for its use in a Jupyter notebook session: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ mamba activate /data/astro/software/envs/sage&lt;br /&gt;
(/data/astro/software/envs/sage) [&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ python -m ipykernel install --user --name=sage&lt;br /&gt;
....&lt;br /&gt;
(/data/astro/software/envs/sage) [&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ mamba deactivate&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This creates a file in your home, '''~/.local/share/jupyter/kernels/sage/kernel.json''',&lt;br /&gt;
which has to be modified to look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
 &amp;quot;argv&amp;quot;: [&lt;br /&gt;
  &amp;quot;/data/astro/software/envs/sage/bin/sage&amp;quot;,&lt;br /&gt;
  &amp;quot;--python&amp;quot;,&lt;br /&gt;
  &amp;quot;-m&amp;quot;,&lt;br /&gt;
  &amp;quot;sage.repl.ipython_kernel&amp;quot;,&lt;br /&gt;
  &amp;quot;-f&amp;quot;,&lt;br /&gt;
  &amp;quot;{connection_file}&amp;quot;&lt;br /&gt;
 ],&lt;br /&gt;
 &amp;quot;display_name&amp;quot;: &amp;quot;sage&amp;quot;,&lt;br /&gt;
 &amp;quot;language&amp;quot;: &amp;quot;sage&amp;quot;,&lt;br /&gt;
 &amp;quot;metadata&amp;quot;: {&lt;br /&gt;
  &amp;quot;debugger&amp;quot;: true&lt;br /&gt;
 }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Next time you go to your Jupyter dashboard you will find the sage environment listed there.&lt;br /&gt;
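If you prefer not to edit the file by hand, the kernel.json shown above can be written with a short script; a sketch, assuming the sage binary path used on this page:&lt;br /&gt;

```python
# Sketch: write the sage kernel.json shown above programmatically.
# The sage binary path is the one from this page; adjust if yours differs.
import json
from pathlib import Path

def write_sage_kernel(kernel_dir,
                      sage_bin="/data/astro/software/envs/sage/bin/sage"):
    spec = {
        "argv": [sage_bin, "--python", "-m", "sage.repl.ipython_kernel",
                 "-f", "{connection_file}"],
        "display_name": "sage",
        "language": "sage",
        "metadata": {"debugger": True},
    }
    kernel_dir = Path(kernel_dir)
    kernel_dir.mkdir(parents=True, exist_ok=True)
    (kernel_dir / "kernel.json").write_text(json.dumps(spec, indent=1))
    return spec
```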
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Dask =&lt;br /&gt;
&lt;br /&gt;
A notebook with instructions on how to run Dask at PIC can be found [https://gitlab.pic.es/services/code-samples/-/blob/main/computing/dask/dask_htcondor.ipynb here]&lt;br /&gt;
&lt;br /&gt;
= Using a singularity image as a jupyter kernel =&lt;br /&gt;
&lt;br /&gt;
Sometimes the software stack of a project is provided in the shape of a singularity image. It is then convenient to use this image as a kernel for the notebooks in jupyter.pic.es.&lt;br /&gt;
&lt;br /&gt;
The singularity image to be used as a kernel needs to fulfill some requirements, which differ depending on the programming language.&lt;br /&gt;
&lt;br /&gt;
== python jupyter kernel in a singularity image ==&lt;br /&gt;
&lt;br /&gt;
The singularity image needs to have '''python''' and the '''ipykernel''' module installed. &lt;br /&gt;
&lt;br /&gt;
* Create the folder that will host the kernel definition&lt;br /&gt;
&lt;br /&gt;
  mkdir -p $HOME/.local/share/jupyter/kernels/singularity&lt;br /&gt;
&lt;br /&gt;
* Create the '''kernel.json''' file inside it with the following content:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 {&lt;br /&gt;
   &amp;quot;argv&amp;quot;: [&lt;br /&gt;
     &amp;quot;singularity&amp;quot;,&lt;br /&gt;
     &amp;quot;exec&amp;quot;,&lt;br /&gt;
     &amp;quot;--cleanenv&amp;quot;,&lt;br /&gt;
     &amp;quot;/path/to/the/singularity/image.sif&amp;quot;,&lt;br /&gt;
     &amp;quot;python&amp;quot;,&lt;br /&gt;
     &amp;quot;-m&amp;quot;,&lt;br /&gt;
     &amp;quot;ipykernel&amp;quot;,&lt;br /&gt;
     &amp;quot;-f&amp;quot;,&lt;br /&gt;
     &amp;quot;{connection_file}&amp;quot;&lt;br /&gt;
   ],&lt;br /&gt;
   &amp;quot;language&amp;quot;: &amp;quot;python&amp;quot;,&lt;br /&gt;
   &amp;quot;display_name&amp;quot;: &amp;quot;singularity-kernel&amp;quot;&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Refresh or start the jupyterlab interface and the singularity kernel should appear in the launcher tab.&lt;br /&gt;
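The two steps above can be scripted so that several images each get their own kernel entry; a sketch (the image path and kernel name are placeholders):&lt;br /&gt;

```python
# Sketch: create the kernel folder and write the kernel.json for a
# singularity image. Image path and kernel name are placeholders.
import json
from pathlib import Path

def install_singularity_kernel(image, kernels_dir, name="singularity"):
    spec = {
        "argv": ["singularity", "exec", "--cleanenv", image,
                 "python", "-m", "ipykernel", "-f", "{connection_file}"],
        "language": "python",
        "display_name": "%s-kernel" % name,
    }
    kdir = Path(kernels_dir) / name
    kdir.mkdir(parents=True, exist_ok=True)
    (kdir / "kernel.json").write_text(json.dumps(spec, indent=2))
    return kdir / "kernel.json"
```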
&lt;br /&gt;
= GPUs =&lt;br /&gt;
&lt;br /&gt;
The way to identify the GPUs that are assigned to your job is:&lt;br /&gt;
* Check the environment variable CUDA_VISIBLE_DEVICES: in a terminal, run &amp;quot;echo $CUDA_VISIBLE_DEVICES&amp;quot;. The variable contains a comma-separated list of GPU ids, which already tells you how many GPUs are assigned to your job. If the variable does not exist, no GPUs are assigned to the job.&lt;br /&gt;
&lt;br /&gt;
* List the GPUs with nvidia-smi: in a terminal, run &amp;quot;nvidia-smi -L&amp;quot; and look for the GPUs you have been assigned. Remember their indexes (integers from 0 to 7).&lt;br /&gt;
&lt;br /&gt;
* If you have a single assigned GPU, the two steps above can be combined into the following command:&lt;br /&gt;
&lt;br /&gt;
nvidia-smi -L | grep $CUDA_VISIBLE_DEVICES&lt;br /&gt;
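The parsing of CUDA_VISIBLE_DEVICES described above can also be done in Python; a minimal sketch:&lt;br /&gt;

```python
# Sketch: list the GPU ids assigned to the job by parsing
# CUDA_VISIBLE_DEVICES. An unset or empty variable means no GPUs.
import os

def assigned_gpus(env=None):
    value = (env if env is not None else os.environ).get("CUDA_VISIBLE_DEVICES")
    if not value:
        return []
    return [int(i) for i in value.split(",")]
```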
&lt;br /&gt;
[[File:check_gpu_id_highlighted.png]]&lt;br /&gt;
&lt;br /&gt;
* In the GPU dashboard, the GPUs are identified by their index.&lt;br /&gt;
&lt;br /&gt;
[[File:check_gpu_resources_highlighted.png]]&lt;br /&gt;
&lt;br /&gt;
= Jupyterlab user guide =&lt;br /&gt;
&lt;br /&gt;
You can find the official documentation of the currently installed version of jupyterlab [https://jupyterlab.readthedocs.io/en/4.2.x/ here], where you will find instructions on how to:&lt;br /&gt;
* [https://jupyterlab.readthedocs.io/en/4.2.x/user/commands.html Access the command palette]&lt;br /&gt;
* [https://jupyterlab.readthedocs.io/en/4.2.x/user/toc.html Build a Table Of Contents]&lt;br /&gt;
* [https://jupyterlab.readthedocs.io/en/4.2.x/user/debugger.html Debug your code]&lt;br /&gt;
&lt;br /&gt;
A set of non-official jupyterlab extensions is installed to provide additional functionality.&lt;br /&gt;
&lt;br /&gt;
== jupytext ==&lt;br /&gt;
Pair your notebooks with text files to enhance version tracking.&lt;br /&gt;
https://jupytext.readthedocs.io&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
&lt;br /&gt;
If you had a notebook (.ipynb file) containing only the cell below, tracked in a git repository&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%matplotlib inline&lt;br /&gt;
import numpy as np&lt;br /&gt;
import matplotlib.pyplot as plt&lt;br /&gt;
plt.imshow(np.random.random([10, 10]))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Different executions of the cell would produce different images, and the images are embedded in a pseudo-binary format inside the notebook file. In this case, doing a '''git diff''' of the .ipynb file would produce a huge output (because the embedded image changed), even if there was no change in the code. It is thus convenient to sync the notebook with a text file (e.g. a .py script) using the jupytext extension and track that file with git. The outputs, including images, as well as some additional metadata, are not added to the synced text file, so for different executions of the same notebook the diff will always be empty.&lt;br /&gt;
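To illustrate why the synced text file diffs cleanly, here is a sketch of output-stripping on a notebook's JSON structure (an illustration of the effect, not jupytext's actual implementation):&lt;br /&gt;

```python
# Sketch: remove outputs and execution counts from a notebook dict.
# Two runs of the same code then compare equal; jupytext's paired
# text file achieves the same effect by keeping only the sources.
def strip_outputs(nb):
    cells = []
    for cell in nb.get("cells", []):
        cell = dict(cell)
        if cell.get("cell_type") == "code":
            cell["outputs"] = []
            cell["execution_count"] = None
        cells.append(cell)
    return {**nb, "cells": cells}
```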
&lt;br /&gt;
== git ==&lt;br /&gt;
Sidebar GUI for git repo management.&lt;br /&gt;
https://github.com/jupyterlab/jupyterlab-git&lt;br /&gt;
&lt;br /&gt;
== variable inspector ==&lt;br /&gt;
Variable inspection à la Matlab&lt;br /&gt;
https://github.com/jupyterlab-contrib/jupyterlab-variableInspector&lt;br /&gt;
&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
= Code samples =&lt;br /&gt;
&lt;br /&gt;
A repository with sample code can be found here: https://gitlab.pic.es/services/code-samples/&lt;br /&gt;
&lt;br /&gt;
= Troubleshooting =&lt;br /&gt;
&lt;br /&gt;
== Logs ==&lt;br /&gt;
&lt;br /&gt;
The log files for the jupyterlab server are stored in &amp;quot;~/.jupyter&amp;quot;. They are created once the jupyterlab server job has finished.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Clean workspaces ==&lt;br /&gt;
&lt;br /&gt;
Jupyterlab stores the workspace status in the &amp;quot;~/.jupyter/lab/workspaces&amp;quot; folder. If you want to start with a fresh (empty) workspace, delete all the content of this folder before launching the notebook.&lt;br /&gt;
&lt;br /&gt;
    cd ~/.jupyter/lab/workspaces&lt;br /&gt;
    rm *&lt;br /&gt;
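The same cleanup as a sketch in Python, assuming the default workspaces location (~/.jupyter/lab/workspaces):&lt;br /&gt;

```python
# Sketch: remove JupyterLab workspace files so the next session starts
# with a fresh (empty) workspace.
from pathlib import Path

def clean_workspaces(jupyter_dir):
    """Delete files in <jupyter_dir>/lab/workspaces; return count removed."""
    ws = Path(jupyter_dir) / "lab" / "workspaces"
    removed = 0
    if ws.is_dir():
        for f in ws.iterdir():
            if f.is_file():
                f.unlink()
                removed += 1
    return removed
```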
&lt;br /&gt;
&lt;br /&gt;
== 504 Gateway timeout ==&lt;br /&gt;
&lt;br /&gt;
The notebook job is running in HTCondor but the user cannot access the notebook server. Ultimately, a 504 error is received.&lt;br /&gt;
&lt;br /&gt;
This is probably because there's some error when starting the jupyterlab server. First of all, shutdown the notebook server and [[#Logs|check the logs]] to better identify the problem. If you don't see the source of the error, try to [[#Clean_workspaces|clean the workspaces]] and launch a notebook again.&lt;br /&gt;
&lt;br /&gt;
== Install ROOT ==&lt;br /&gt;
&lt;br /&gt;
When installing ROOT in a conda environment from a terminal in a workernode, there is a conflict with the value of `LD_LIBRARY_PATH=/opt/hadoop-3.2.3/lib/native:/usr/lib64/` enforced through `/etc/profile.d`. This configuration does not apply to jupyter kernels. As a consequence, some headers in `/usr/include` cannot be found (e.g. `/usr/include/dlfcn.h`) and ROOT cannot be imported in a notebook.&lt;br /&gt;
&lt;br /&gt;
So, to install ROOT in a conda environment named &amp;quot;mcdata&amp;quot; from a terminal in jupyter.pic.es, and to enable using it as a kernel, you first have to unset the LD_LIBRARY_PATH variable, that is:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
    $ unset LD_LIBRARY_PATH&lt;br /&gt;
    $ micromamba env create -p /data/pic/scratch/torradeflot/envs/mcdata root ipykernel&lt;br /&gt;
    $ micromamba activate mcdata&lt;br /&gt;
    $ python -m ipykernel install --user --name mcdata&lt;/div&gt;</summary>
		<author><name>Eriksen</name></author>
	</entry>
	<entry>
		<id>https://pwiki.pic.es/index.php?title=Faq&amp;diff=1190</id>
		<title>Faq</title>
		<link rel="alternate" type="text/html" href="https://pwiki.pic.es/index.php?title=Faq&amp;diff=1190"/>
		<updated>2025-02-07T21:02:23Z</updated>

		<summary type="html">&lt;p&gt;Eriksen: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==How do I reset my password?==&lt;br /&gt;
You can reset the password [https://www.pic.es/user/auth/forgotpw here].&lt;br /&gt;
&lt;br /&gt;
==Can an undergraduate student in my group have an account?==&lt;br /&gt;
Yes. No problem.&lt;/div&gt;</summary>
		<author><name>Eriksen</name></author>
	</entry>
	<entry>
		<id>https://pwiki.pic.es/index.php?title=ICFO&amp;diff=1186</id>
		<title>ICFO</title>
		<link rel="alternate" type="text/html" href="https://pwiki.pic.es/index.php?title=ICFO&amp;diff=1186"/>
		<updated>2025-02-05T09:35:50Z</updated>

		<summary type="html">&lt;p&gt;Eriksen: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Image viewers ==&lt;br /&gt;
&lt;br /&gt;
There is some limited support for running graphical applications at&lt;br /&gt;
PIC. First you need to log into JupyterLab (jupyter.pic.es) and click&lt;br /&gt;
on the virtual desktop icon (orange D). This will open a graphical &lt;br /&gt;
desktop with a single terminal, from where you can launch the image&lt;br /&gt;
viewers.&lt;br /&gt;
&lt;br /&gt;
=== Fiji ===&lt;br /&gt;
&lt;br /&gt;
Log into the desktop (see above) and run the following command:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
/data/icfo/software/Fiji.app/ImageJ-linux64&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Napari ===&lt;br /&gt;
&lt;br /&gt;
Log into the desktop (see above) and run the following command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
/data/icfo/software/bin/run_napari&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;/div&gt;</summary>
		<author><name>Eriksen</name></author>
	</entry>
	<entry>
		<id>https://pwiki.pic.es/index.php?title=Faq&amp;diff=1184</id>
		<title>Faq</title>
		<link rel="alternate" type="text/html" href="https://pwiki.pic.es/index.php?title=Faq&amp;diff=1184"/>
		<updated>2025-01-29T11:01:11Z</updated>

		<summary type="html">&lt;p&gt;Eriksen: Created page with &amp;quot;==How do I reset my password== You can reset the password [https://www.pic.es/user/auth/forgotpw here].&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==How do I reset my password==&lt;br /&gt;
You can reset the password [https://www.pic.es/user/auth/forgotpw here].&lt;/div&gt;</summary>
		<author><name>Eriksen</name></author>
	</entry>
	<entry>
		<id>https://pwiki.pic.es/index.php?title=Main_Page&amp;diff=1183</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://pwiki.pic.es/index.php?title=Main_Page&amp;diff=1183"/>
		<updated>2025-01-29T10:57:47Z</updated>

		<summary type="html">&lt;p&gt;Eriksen: /* Getting started */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Getting started ==&lt;br /&gt;
* [[PIC description|PIC in an image]]&lt;br /&gt;
* [[PIC account|Get a PIC account]]&lt;br /&gt;
* [[PIC_User_Manual | User manual]]&lt;br /&gt;
* [[faq| Frequently asked questions]]&lt;br /&gt;
&lt;br /&gt;
== Services ==&lt;br /&gt;
* [[HTCondor]]&lt;br /&gt;
* [[Storage]]&lt;br /&gt;
* [[JupyterHub]]&lt;br /&gt;
* [[Gitlab]]&lt;br /&gt;
* [[CosmoHub]]&lt;br /&gt;
* Spark:&lt;br /&gt;
** [[Spark on Hadoop|on Hadoop]]&lt;br /&gt;
** [[Spark_on_farm|on HTCondor]]&lt;br /&gt;
* [[Transferring data to/from PIC]]&lt;br /&gt;
&lt;br /&gt;
== Experiments ==&lt;br /&gt;
&lt;br /&gt;
* [[Euclid]]&lt;br /&gt;
* [[AGN ICE]]&lt;br /&gt;
* [[ICFO]]&lt;br /&gt;
&lt;br /&gt;
== More technical information ==&lt;br /&gt;
&lt;br /&gt;
* [[Storage Department]]&lt;/div&gt;</summary>
		<author><name>Eriksen</name></author>
	</entry>
	<entry>
		<id>https://pwiki.pic.es/index.php?title=Main_Page&amp;diff=1182</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://pwiki.pic.es/index.php?title=Main_Page&amp;diff=1182"/>
		<updated>2025-01-29T10:57:22Z</updated>

		<summary type="html">&lt;p&gt;Eriksen: /* Getting started */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Getting started ==&lt;br /&gt;
* [[PIC description|PIC in an image]]&lt;br /&gt;
* [[PIC account|Get a PIC account]]&lt;br /&gt;
* [[PIC_User_Manual | User manual]]&lt;br /&gt;
* [[Frequently asked questions | faq]]&lt;br /&gt;
&lt;br /&gt;
== Services ==&lt;br /&gt;
* [[HTCondor]]&lt;br /&gt;
* [[Storage]]&lt;br /&gt;
* [[JupyterHub]]&lt;br /&gt;
* [[Gitlab]]&lt;br /&gt;
* [[CosmoHub]]&lt;br /&gt;
* Spark:&lt;br /&gt;
** [[Spark on Hadoop|on Hadoop]]&lt;br /&gt;
** [[Spark_on_farm|on HTCondor]]&lt;br /&gt;
* [[Transferring data to/from PIC]]&lt;br /&gt;
&lt;br /&gt;
== Experiments ==&lt;br /&gt;
&lt;br /&gt;
* [[Euclid]]&lt;br /&gt;
* [[AGN ICE]]&lt;br /&gt;
* [[ICFO]]&lt;br /&gt;
&lt;br /&gt;
== More technical information ==&lt;br /&gt;
&lt;br /&gt;
* [[Storage Department]]&lt;/div&gt;</summary>
		<author><name>Eriksen</name></author>
	</entry>
	<entry>
		<id>https://pwiki.pic.es/index.php?title=Main_Page&amp;diff=1181</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://pwiki.pic.es/index.php?title=Main_Page&amp;diff=1181"/>
		<updated>2025-01-29T10:56:34Z</updated>

		<summary type="html">&lt;p&gt;Eriksen: /* Getting started */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Getting started ==&lt;br /&gt;
* [[PIC description|PIC in an image]]&lt;br /&gt;
* [[PIC account|Get a PIC account]]&lt;br /&gt;
* [[PIC_User_Manual | User manual]]&lt;br /&gt;
* [[FAQ | faq]&lt;br /&gt;
&lt;br /&gt;
== Services ==&lt;br /&gt;
* [[HTCondor]]&lt;br /&gt;
* [[Storage]]&lt;br /&gt;
* [[JupyterHub]]&lt;br /&gt;
* [[Gitlab]]&lt;br /&gt;
* [[CosmoHub]]&lt;br /&gt;
* Spark:&lt;br /&gt;
** [[Spark on Hadoop|on Hadoop]]&lt;br /&gt;
** [[Spark_on_farm|on HTCondor]]&lt;br /&gt;
* [[Transferring data to/from PIC]]&lt;br /&gt;
&lt;br /&gt;
== Experiments ==&lt;br /&gt;
&lt;br /&gt;
* [[Euclid]]&lt;br /&gt;
* [[AGN ICE]]&lt;br /&gt;
* [[ICFO]]&lt;br /&gt;
&lt;br /&gt;
== More technical information ==&lt;br /&gt;
&lt;br /&gt;
* [[Storage Department]]&lt;/div&gt;</summary>
		<author><name>Eriksen</name></author>
	</entry>
	<entry>
		<id>https://pwiki.pic.es/index.php?title=JupyterHub&amp;diff=1180</id>
		<title>JupyterHub</title>
		<link rel="alternate" type="text/html" href="https://pwiki.pic.es/index.php?title=JupyterHub&amp;diff=1180"/>
		<updated>2025-01-17T11:27:53Z</updated>

		<summary type="html">&lt;p&gt;Eriksen: /* GPUs */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
= Introduction =&lt;br /&gt;
&lt;br /&gt;
PIC offers a service for running Jupyter notebooks on CPU or GPU resources. This service is primarily thought for code developing or prototyping rather than data processing. The usage is similar to running notebooks on your personal computer but offers the advantage of developing and testing your code on different hardware configurations, as well as facilitating the scalability of the code since it is being tested in the same environment in which it would run on a mass scale. &lt;br /&gt;
&lt;br /&gt;
Since the service is strictly thought for development and small scale testing tasks, a shutdown policy for the sessions has been put in place:&lt;br /&gt;
&lt;br /&gt;
# The maximum duration for a session is 48h.&lt;br /&gt;
# After an idle period of 2 hours, the session will be closed. &lt;br /&gt;
&lt;br /&gt;
In practice this means that the test data volume you work with during a session should be small enough to be processed in less than 48 hours.&lt;br /&gt;
&lt;br /&gt;
== How to connect to the service ==&lt;br /&gt;
&lt;br /&gt;
Go to [https://jupyter.pic.es jupyter.pic.es] to see your login screen.&lt;br /&gt;
&lt;br /&gt;
[[File:login.png|700px|Login screen]]&lt;br /&gt;
&lt;br /&gt;
Sign in with your PIC user credentials. This will take you to the following screen.&lt;br /&gt;
&lt;br /&gt;
[[File:ScreenshotJupyterSpawn.png|700px|current]]&lt;br /&gt;
&lt;br /&gt;
Here you can choose the hardware configuration for your Jupyter session. You also have to choose the experiment (project) you are working on during the Jupyter session. After choosing a configuration and pressing start, the next screen will show you the progress of the initialisation process. Keep in mind that a job containing your Jupyter session is actually sent to the HTCondor queuing system and waits for available resources before being started. This usually takes less than a minute but can take up to a few minutes depending on our resource usage.&lt;br /&gt;
&lt;br /&gt;
[[File:screen02.png|900px]]&lt;br /&gt;
&lt;br /&gt;
In the next screen you can choose the tool that you want to use for your work: a Python notebook, a Python console or a plain bash terminal.&lt;br /&gt;
For the Python environment (either notebook or console) you have two default options:&lt;br /&gt;
* the ipykernel version of Python 3&lt;br /&gt;
* the XPython version of Python 3.9, which allows you to use the integrated debugging module.&lt;br /&gt;
&lt;br /&gt;
Furthermore, there is a desktop icon (&amp;quot;D&amp;quot;) that starts a VNC session, allowing the use of programs with graphical user interfaces.&lt;br /&gt;
&lt;br /&gt;
You will also find the icon of Visual Studio, an integrated development environment.&lt;br /&gt;
&lt;br /&gt;
[[File:ScreenshotJupyterlab20231103.png|700px]]&lt;br /&gt;
&lt;br /&gt;
Your python environments should appear under the Notebook and Console headers. In a later section we will show you how to create a new environment and how to remove an existing one.&lt;br /&gt;
&lt;br /&gt;
== Terminate your session and logout ==&lt;br /&gt;
&lt;br /&gt;
It is important that you terminate your session before you log out. In order to do so, go to the top page menu &amp;quot;'''File''' -&amp;gt; '''Hub Control Panel'''&amp;quot; and you will see the following screen.&lt;br /&gt;
&lt;br /&gt;
[[File:screen04.png|600px]]&lt;br /&gt;
&lt;br /&gt;
Here, click on the '''Stop My Server''' button. After that you can log out by clicking the '''Logout''' button in the right upper corner.&lt;br /&gt;
&lt;br /&gt;
=  Python virtual environments =&lt;br /&gt;
&lt;br /&gt;
This section covers the use of Python virtual environments with Jupyter.&lt;br /&gt;
&lt;br /&gt;
== Initialize conda (we highly recommend the use of miniforge/mambaforge) ==&lt;br /&gt;
&lt;br /&gt;
Before using conda/mamba in your bash session, you have to initialize it.&lt;br /&gt;
* For access to an available conda/mamba installation, please get in contact with your project liaison at PIC, who will give you the actual value for the '''/path/to/mambaforge''' placeholder.&lt;br /&gt;
* If you want to use your own conda/mamba installation, we recommend you to install the minimal '''miniforge''' distribution, instructions [https://github.com/conda-forge/miniforge here] or [https://github.com/mamba-org/mamba mamba/micromamba]&lt;br /&gt;
&lt;br /&gt;
Log onto Jupyter and start a session. On the homepage of your Jupyter session, click on the terminal button on the session dashboard on the right to open a bash terminal. If no specific version is needed you can use the link provided in the example.&lt;br /&gt;
&lt;br /&gt;
First, let's initialize conda for our bash sessions:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ /data/astro/software/alma9/conda/miniforge-24.1.2/bin/mamba init&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This actually changes the .bashrc file in your home directory in order to activate the base environment on login.&lt;br /&gt;
To prevent the base environment from being activated every time you log on to a node, run:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ conda config --set auto_activate_base false&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, in order to activate the base environment you will have to run this command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ eval &amp;quot;$(/data/astro/software/alma9/conda/miniforge-24.1.2/bin/conda shell.bash hook)&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For now you can exit the terminal.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ exit&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Link an existing environment to Jupyter ==&lt;br /&gt;
&lt;br /&gt;
You can find instructions on how to create your own environments, e.g. [[#Create_virtual_environments_with_venv_or_conda | here]].&lt;br /&gt;
&lt;br /&gt;
Log into Jupyter, start a session. From the session dashboard choose the bash terminal.&lt;br /&gt;
&lt;br /&gt;
Inside the terminal, activate your environment.&lt;br /&gt;
&lt;br /&gt;
For '''conda/mamba''' environments:&lt;br /&gt;
* if you created the environment without a prefix:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba activate environment&lt;br /&gt;
(...) [neissner@td110 ~]$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The parentheses (...) in front of your bash prompt show the name of your environment. &lt;br /&gt;
* if you created the environment with a prefix:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba activate /path/to/environment&lt;br /&gt;
(...) [neissner@td110 ~]$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The parentheses (...) in front of your bash prompt show the absolute path of your environment. &lt;br /&gt;
&lt;br /&gt;
For '''venv''' environments:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ source /path/to/environment/bin/activate&lt;br /&gt;
(...) [neissner@td110 ~]$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Link the environment to a Jupyter kernel. For both, '''conda/mamba''' and '''venv''':&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(...) [neissner@td110 ~]$ python -m ipykernel install --user --name=whatever_kernel_name&lt;br /&gt;
Installed kernelspec whatever_kernel_name in &lt;br /&gt;
                         /nfs/pic.es/user/n/neissner/.local/share/jupyter/kernels/whatever_kernel_name&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you don't have the '''ipykernel''' module installed in your environment you may receive an error message like the one below when trying to run the previous command.&lt;br /&gt;
&amp;lt;pre&amp;gt;No module named ipykernel&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If this is the case, you need to install it by running: '''pip install ipykernel'''&lt;br /&gt;
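You can check which kernels are currently linked for your user by reading the kernelspec directory that the install command writes into; a sketch (assumes the default per-user location):&lt;br /&gt;

```python
# Sketch: list linked kernels by reading each kernel.json under the
# per-user kernels directory (~/.local/share/jupyter/kernels by default).
import json
from pathlib import Path

def list_user_kernels(base=None):
    base = Path(base) if base else Path.home() / ".local/share/jupyter/kernels"
    kernels = {}
    if base.is_dir():
        for spec in base.glob("*/kernel.json"):
            kernels[spec.parent.name] = json.loads(spec.read_text())
    return kernels
```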
&lt;br /&gt;
Deactivate your environment. &lt;br /&gt;
&lt;br /&gt;
For conda:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(...) [neissner@td110 ~]$ mamba deactivate&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For venv:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(...) [neissner@td110 ~]$ deactivate&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you can exit the terminal. After refreshing the Jupyter page, your whatever_kernel_name kernel appears in the dashboard. In this example, '''test''' has been used for whatever_kernel_name.&lt;br /&gt;
&lt;br /&gt;
[[File:screen05.png|700px]]&lt;br /&gt;
&lt;br /&gt;
== Unlink an environment from Jupyter ==&lt;br /&gt;
Log onto Jupyter, start a session and from the session dashboard choose the bash terminal. To remove your environment/kernel from Jupyter run:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ jupyter kernelspec uninstall whatever_kernel_name&lt;br /&gt;
Kernel specs to remove:&lt;br /&gt;
  whatever_kernel_name     /nfs/pic.es/user/n/neissner/.local/share/jupyter/kernels/whatever_kernel_name&lt;br /&gt;
Remove 1 kernel specs [y/N]: y&lt;br /&gt;
[RemoveKernelSpec] Removed /nfs/pic.es/user/n/neissner/.local/share/jupyter/kernels/whatever_kernel_name&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Keep in mind that, although not available in Jupyter anymore, the environment still exists. Whenever you need it, you can link it again.&lt;br /&gt;
&lt;br /&gt;
== Create virtual environments with venv or conda ==&lt;br /&gt;
&lt;br /&gt;
Before creating a new environment, please get in contact with your project liaison at PIC as there may be already a suitable environment for your needs in place.&lt;br /&gt;
&lt;br /&gt;
If none of the existing environments suits your needs, you can create a new environment.&lt;br /&gt;
First, create a directory in a suitable place to store the environment. For single-user environments, place them in your home under ~/env. For environments that will be shared with other project users, contact your project liaison and ask them for a path in a shared storage volume that is visible to all of them.&lt;br /&gt;
&lt;br /&gt;
Once you have the location (i.e. /path/to/env/folder), create the environment with the following commands:&lt;br /&gt;
&lt;br /&gt;
For '''venv''' environments '''(recommended)'''&lt;br /&gt;
&lt;br /&gt;
To install your_env at /path/to/env/your_env:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ cd /path/to/env&lt;br /&gt;
[neissner@td110 ~]$ python3 -m venv your_env&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Now you should be able to activate your environment and install additional modules&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ cd /path/to/env&lt;br /&gt;
[neissner@td110 ~]$ source your_env/bin/activate&lt;br /&gt;
(...)[neissner@td110 ~]$ pip install additional_module1 additional_module2 ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
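The venv creation above can also be done from Python with the standard-library venv module; a sketch:&lt;br /&gt;

```python
# Sketch: create a venv with the stdlib venv module instead of the CLI.
# with_pip=True would also bootstrap pip into the environment.
import venv
from pathlib import Path

def create_env(path, with_pip=False):
    venv.create(path, with_pip=with_pip)
    return Path(path) / "pyvenv.cfg"  # marker file every venv contains
```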
&lt;br /&gt;
For '''conda/mamba''' environments&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba create --prefix /path/to/env/your_env module1 module2 ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The list of modules (module1, module2, ...) is optional. For instance, for a python3 environment with scipy you would specify: ''python=3 scipy''&lt;br /&gt;
&lt;br /&gt;
Now you should be able to activate your environment and install additional modules&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[neissner@td110 ~]$ mamba activate /path/to/env/your_env&lt;br /&gt;
(...)[neissner@td110 ~]$ mamba install additional_module1 additional_module2 ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You can use pip install inside a mamba environment, however, resolving dependencies might require installing additional packages manually.&lt;br /&gt;
&lt;br /&gt;
== conda / mamba configuration ==&lt;br /&gt;
&lt;br /&gt;
The behaviour of conda/mamba can be configured through the &amp;quot;$HOME/.condarc&amp;quot; file, described [https://docs.conda.io/projects/conda/en/latest/configuration.html here]. Some interesting parameters:&lt;br /&gt;
&lt;br /&gt;
* envs_dirs: The list of directories to search for named environments. E.g.: different locations where you created environments&lt;br /&gt;
&lt;br /&gt;
  envs_dirs:&lt;br /&gt;
    - /data/pic/scratch/torradeflot/envs&lt;br /&gt;
    - /data/astro/scratch/torradeflot/envs&lt;br /&gt;
    - /data/aai/scratch/torradeflot/envs&lt;br /&gt;
&lt;br /&gt;
* pkgs_dirs: The folder where downloaded conda packages are cached.&lt;br /&gt;
&lt;br /&gt;
= Proper usage of X509 based proxies =&lt;br /&gt;
&lt;br /&gt;
We recently found that the usage of proxies within a Jupyter session might cause problems, because the environment changes certain standard locations such as '''/tmp'''.&lt;br /&gt;
&lt;br /&gt;
For correct functioning, please create the proxy in the following way (example for Virgo):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ /bin/voms-proxy-init --voms virgo:/virgo/ligo --out ./x509up_u$(id -u)&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ export X509_USER_PROXY=./x509up_u$(id -u)&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ ls /cvmfs/ligo.osgstorage.org&lt;br /&gt;
ls: cannot access /cvmfs/ligo.osgstorage.org: Permission denied&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here the proxy cannot be located properly. Therefore, we have to put the complete (absolute) path into the variable:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ pwd&lt;br /&gt;
/nfs/pic.es/user/&amp;lt;letter&amp;gt;/&amp;lt;user&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ export X509_USER_PROXY=/nfs/pic.es/user/&amp;lt;letter&amp;gt;/&amp;lt;user&amp;gt;/x509up_u$(id -u)&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ ls /cvmfs/ligo.osgstorage.org&lt;br /&gt;
frames  powerflux  pycbc  test_access&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Software of particular interest =&lt;br /&gt;
== SageMath ==&lt;br /&gt;
&lt;br /&gt;
[https://www.sagemath.org/ SageMath] is particularly interesting for Cosmology because it allows symbolic calculations, e.g. deriving the equations of motion for the scale factor starting from a customised space-time metric. &lt;br /&gt;
&lt;br /&gt;
=== Standard cosmology examples ===&lt;br /&gt;
&lt;br /&gt;
* The Friedmann equations for the FLRW solution of the Einstein equations.&lt;br /&gt;
You can find the corresponding Notebook in any PIC terminal at '''/data/astro/software/notebooks/FLRW_cosmology.ipynb'''&lt;br /&gt;
&lt;br /&gt;
* The notebook you can find at '''/data/astro/software/notebooks/FLRW_cosmology_solutions.ipynb''' uses known analytical solutions of the FLRW cosmology and produces this image for the evolution of the scale factor:&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot_Sage05.png|300px]]&lt;br /&gt;
&lt;br /&gt;
* The notebook you can find at '''/data/astro/software/notebooks/Interior_Schwarzschild.ipynb''' shows the formalism for the interior Schwarzschild metric and displays the solutions for density and pressure of a static celestial object that is sufficiently large compared to its Schwarzschild radius. The pressure for an object with constant density is shown in the image:&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot_Sage06.png|300px]]&lt;br /&gt;
&lt;br /&gt;
=== Enabling SageMath environment in Jupyter ===&lt;br /&gt;
&lt;br /&gt;
If you have never initialized mamba, run:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ /data/astro/software/centos7/conda/mambaforge_4.14.0/bin/mamba init&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ conda config --set auto_activate_base false&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After that you can enable SageMath for use in a Jupyter notebook session: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ mamba activate /data/astro/software/envs/sage&lt;br /&gt;
(/data/astro/software/envs/sage) [&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ python -m ipykernel install --user --name=sage&lt;br /&gt;
....&lt;br /&gt;
(/data/astro/software/envs/sage) [&amp;lt;user&amp;gt;@&amp;lt;hostname&amp;gt; ~]$ mamba deactivate&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This creates the file '''~/.local/share/jupyter/kernels/sage/kernel.json''' in your home directory,&lt;br /&gt;
which has to be modified to look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
 &amp;quot;argv&amp;quot;: [&lt;br /&gt;
  &amp;quot;/data/astro/software/envs/sage/bin/sage&amp;quot;,&lt;br /&gt;
  &amp;quot;--python&amp;quot;,&lt;br /&gt;
  &amp;quot;-m&amp;quot;,&lt;br /&gt;
  &amp;quot;sage.repl.ipython_kernel&amp;quot;,&lt;br /&gt;
  &amp;quot;-f&amp;quot;,&lt;br /&gt;
  &amp;quot;{connection_file}&amp;quot;&lt;br /&gt;
 ],&lt;br /&gt;
 &amp;quot;display_name&amp;quot;: &amp;quot;sage&amp;quot;,&lt;br /&gt;
 &amp;quot;language&amp;quot;: &amp;quot;sage&amp;quot;,&lt;br /&gt;
 &amp;quot;metadata&amp;quot;: {&lt;br /&gt;
  &amp;quot;debugger&amp;quot;: true&lt;br /&gt;
 }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Next time you open your Jupyter dashboard, you will find the sage kernel listed there.&lt;br /&gt;
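The modification of kernel.json described above can also be applied programmatically; a minimal sketch using only the Python standard library (the sage path is the one from this page; adjust it if your installation differs):&lt;br /&gt;

```python
import json
from pathlib import Path

def patch_sage_kernel(kernel_file: Path) -> dict:
    """Rewrite a kernel.json so the kernel is launched through sage.

    The argv matches the kernel.json contents shown above.
    """
    spec = json.loads(kernel_file.read_text())
    spec["argv"] = [
        "/data/astro/software/envs/sage/bin/sage",
        "--python",
        "-m",
        "sage.repl.ipython_kernel",
        "-f",
        "{connection_file}",
    ]
    kernel_file.write_text(json.dumps(spec, indent=1))
    return spec

# Usage:
# patch_sage_kernel(Path.home() / ".local/share/jupyter/kernels/sage/kernel.json")
```

Only the "argv" entry is touched; "display_name", "language" and the metadata written by ipykernel are preserved.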
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Dask =&lt;br /&gt;
&lt;br /&gt;
A notebook with instructions on how to run Dask at PIC can be found [https://gitlab.pic.es/services/code-samples/-/blob/main/computing/dask/dask_htcondor.ipynb here]&lt;br /&gt;
&lt;br /&gt;
= Using a singularity image as a jupyter kernel =&lt;br /&gt;
&lt;br /&gt;
Sometimes the software stack of a project is provided as a singularity image. In that case it is convenient to use this image as a kernel for the notebooks in jupyter.pic.es.&lt;br /&gt;
&lt;br /&gt;
The singularity image to be used as a kernel needs to fulfill some requirements, which differ depending on the programming language.&lt;br /&gt;
&lt;br /&gt;
== python jupyter kernel in a singularity image ==&lt;br /&gt;
&lt;br /&gt;
The singularity image needs to have '''python''' and the '''ipykernel''' module installed. &lt;br /&gt;
&lt;br /&gt;
* Create the folder that will host the kernel definition&lt;br /&gt;
&lt;br /&gt;
  mkdir -p $HOME/.local/share/jupyter/kernels/singularity&lt;br /&gt;
&lt;br /&gt;
* Create the '''kernel.json''' file inside it with the following content:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 {&lt;br /&gt;
   &amp;quot;argv&amp;quot;: [&lt;br /&gt;
     &amp;quot;singularity&amp;quot;,&lt;br /&gt;
     &amp;quot;exec&amp;quot;,&lt;br /&gt;
     &amp;quot;--cleanenv&amp;quot;,&lt;br /&gt;
     &amp;quot;/path/to/the/singularity/image.sif&amp;quot;,&lt;br /&gt;
     &amp;quot;python&amp;quot;,&lt;br /&gt;
     &amp;quot;-m&amp;quot;,&lt;br /&gt;
     &amp;quot;ipykernel&amp;quot;,&lt;br /&gt;
     &amp;quot;-f&amp;quot;,&lt;br /&gt;
     &amp;quot;{connection_file}&amp;quot;&lt;br /&gt;
   ],&lt;br /&gt;
   &amp;quot;language&amp;quot;: &amp;quot;python&amp;quot;,&lt;br /&gt;
   &amp;quot;display_name&amp;quot;: &amp;quot;singularity-kernel&amp;quot;&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Refresh or start the JupyterLab interface and the singularity kernel should appear in the launcher tab.&lt;br /&gt;
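The two steps above (creating the kernel folder and writing kernel.json) can also be scripted; a minimal sketch with the Python standard library (the image path is a placeholder, as above, and the optional base argument exists only for testing outside $HOME):&lt;br /&gt;

```python
import json
from pathlib import Path

def install_singularity_kernel(image, name="singularity", base=None):
    """Create ~/.local/share/jupyter/kernels/<name>/kernel.json with
    the argv shown above, and return the path of the written file."""
    base = Path(base) if base else Path.home() / ".local/share/jupyter/kernels"
    kernel_dir = base / name
    kernel_dir.mkdir(parents=True, exist_ok=True)  # equivalent of mkdir -p
    spec = {
        "argv": [
            "singularity", "exec", "--cleanenv", image,
            "python", "-m", "ipykernel", "-f", "{connection_file}",
        ],
        "language": "python",
        "display_name": f"{name}-kernel",
    }
    kernel_file = kernel_dir / "kernel.json"
    kernel_file.write_text(json.dumps(spec, indent=2))
    return kernel_file

# Usage: install_singularity_kernel("/path/to/the/singularity/image.sif")
```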
&lt;br /&gt;
= GPUs =&lt;br /&gt;
&lt;br /&gt;
To identify the GPUs assigned to your job:&lt;br /&gt;
* Check the environment variable CUDA_VISIBLE_DEVICES: in a terminal run &amp;quot;echo $CUDA_VISIBLE_DEVICES&amp;quot;. It contains a comma-separated list of GPU ids, so it also tells you how many GPUs are assigned to your job. If the variable is not set, no GPUs are assigned to the job.&lt;br /&gt;
&lt;br /&gt;
* List the GPUs with nvidia-smi: in a terminal run &amp;quot;nvidia-smi -L&amp;quot; and look for the GPUs you have been assigned. Remember their indices (integers from 0 to 7).&lt;br /&gt;
&lt;br /&gt;
* If only a single GPU is assigned, the two steps above can be combined into the following command:&lt;br /&gt;
&lt;br /&gt;
nvidia-smi -L | grep $CUDA_VISIBLE_DEVICES&lt;br /&gt;
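The same check can be done from Python inside a notebook; a minimal sketch using only the standard library, assuming the comma-separated format described above:&lt;br /&gt;

```python
import os

def assigned_gpus():
    """Return the list of GPU ids assigned to this job.

    CUDA_VISIBLE_DEVICES holds comma-separated ids; if it is
    unset or empty, no GPUs are assigned to the job.
    """
    value = os.environ.get("CUDA_VISIBLE_DEVICES", "")
    return [gpu_id.strip() for gpu_id in value.split(",") if gpu_id.strip()]

gpus = assigned_gpus()
print(f"{len(gpus)} GPU(s) assigned: {gpus}")
```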
&lt;br /&gt;
[[File:check_gpu_id_highlighted.png]]&lt;br /&gt;
&lt;br /&gt;
* In the GPU dashboard the GPUs are identified by their index.&lt;br /&gt;
&lt;br /&gt;
[[File:check_gpu_resources_highlighted.png]]&lt;br /&gt;
&lt;br /&gt;
= Jupyterlab user guide =&lt;br /&gt;
&lt;br /&gt;
You can find the official documentation of the currently installed version of JupyterLab (3.6) [https://jupyterlab.readthedocs.io/en/4.2.x/ here], where you will find instructions on how to:&lt;br /&gt;
* [https://jupyterlab.readthedocs.io/en/4.2.x/user/commands.html Access the command palette]&lt;br /&gt;
* [https://jupyterlab.readthedocs.io/en/4.2.x/user/toc.html Build a Table Of Contents]&lt;br /&gt;
* [https://jupyterlab.readthedocs.io/en/4.2.x/user/debugger.html Debug your code]&lt;br /&gt;
&lt;br /&gt;
A set of non-official JupyterLab extensions is installed to provide additional functionality:&lt;br /&gt;
&lt;br /&gt;
== jupytext ==&lt;br /&gt;
Pair your notebooks with text files to enhance version tracking.&lt;br /&gt;
https://jupytext.readthedocs.io&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
&lt;br /&gt;
Suppose you had a notebook (.ipynb file) containing only the cell below, tracked in a git repository:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%matplotlib inline&lt;br /&gt;
import numpy as np&lt;br /&gt;
import matplotlib.pyplot as plt&lt;br /&gt;
plt.imshow(np.random.random([10, 10]))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Different executions of the cell produce different images, and the images are embedded in a pseudo-binary format inside the notebook file. Doing a '''git diff''' of the .ipynb file would therefore produce a huge output (because the image changed), even if there was no change in the code. It is thus convenient to sync the notebook with a text file (e.g. a .py script) using the jupytext extension and track that file with git. The outputs, including images, as well as some additional metadata, are not added to the synced text file, so for different executions of the same notebook the diff will always be empty.&lt;br /&gt;
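The core of what this gives you can be illustrated with the standard library alone: stripping outputs and execution counts from the notebook JSON leaves text that depends only on the code, so repeated executions serialize identically. This is a simplified sketch of the idea, not jupytext's actual implementation:&lt;br /&gt;

```python
import json

def strip_outputs(notebook_json: str) -> str:
    """Remove outputs and execution counts from a notebook's JSON.

    What remains depends only on the code, so two runs of the same
    notebook serialize to identical text.
    """
    nb = json.loads(notebook_json)
    for cell in nb.get("cells", []):
        if cell.get("cell_type") == "code":
            cell["outputs"] = []
            cell["execution_count"] = None
    return json.dumps(nb, indent=1, sort_keys=True)

# Two executions differ only in their outputs and counters...
run1 = json.dumps({"cells": [{"cell_type": "code", "source": ["x = 1"],
                              "outputs": [{"data": "image-A"}], "execution_count": 3}]})
run2 = json.dumps({"cells": [{"cell_type": "code", "source": ["x = 1"],
                              "outputs": [{"data": "image-B"}], "execution_count": 7}]})
# ...but strip to the same text, so the diff between them is empty.
print(strip_outputs(run1) == strip_outputs(run2))  # prints True
```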
&lt;br /&gt;
== git ==&lt;br /&gt;
Sidebar GUI for git repository management.&lt;br /&gt;
https://github.com/jupyterlab/jupyterlab-git&lt;br /&gt;
&lt;br /&gt;
== variable inspector ==&lt;br /&gt;
Variable inspection à la Matlab&lt;br /&gt;
https://github.com/jupyterlab-contrib/jupyterlab-variableInspector&lt;br /&gt;
&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
= Code samples =&lt;br /&gt;
&lt;br /&gt;
A repository with sample code can be found here: https://gitlab.pic.es/services/code-samples/&lt;br /&gt;
&lt;br /&gt;
= Troubleshooting =&lt;br /&gt;
&lt;br /&gt;
== Logs ==&lt;br /&gt;
&lt;br /&gt;
The log files for the JupyterLab server are stored in &amp;quot;~/.jupyter&amp;quot;. They are created once the JupyterLab server job has finished.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Clean workspaces ==&lt;br /&gt;
&lt;br /&gt;
Jupyterlab stores the workspace status in the &amp;quot;~/.jupyter/lab/workspaces&amp;quot; folder. If you want to start with a fresh (empty) workspace, delete all the content of this folder before launching the notebook.&lt;br /&gt;
&lt;br /&gt;
    cd ~/.jupyter/lab/workspaces&lt;br /&gt;
    rm *&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== 504 Gateway timeout ==&lt;br /&gt;
&lt;br /&gt;
The notebook job is running in HTCondor but the user cannot access the notebook server. Ultimately a 504 error is received.&lt;br /&gt;
&lt;br /&gt;
This usually means there was an error when starting the JupyterLab server. First of all, shut down the notebook server and check the logs to identify the problem. If you cannot find the source of the error, try cleaning the workspaces and launching a notebook again.&lt;/div&gt;</summary>
		<author><name>Eriksen</name></author>
	</entry>
	<entry>
		<id>https://pwiki.pic.es/index.php?title=ICFO&amp;diff=1179</id>
		<title>ICFO</title>
		<link rel="alternate" type="text/html" href="https://pwiki.pic.es/index.php?title=ICFO&amp;diff=1179"/>
		<updated>2025-01-10T11:53:05Z</updated>

		<summary type="html">&lt;p&gt;Eriksen: Created page with &amp;quot;== Fiji ==  For starting Fiji, you need to start to log into Jupyterlab, start the virtual desktop and start:  /data/icfo/software/Fiji.app&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Fiji ==&lt;br /&gt;
&lt;br /&gt;
To start Fiji, log into JupyterLab, start the virtual desktop and run:&lt;br /&gt;
&lt;br /&gt;
/data/icfo/software/Fiji.app&lt;/div&gt;</summary>
		<author><name>Eriksen</name></author>
	</entry>
	<entry>
		<id>https://pwiki.pic.es/index.php?title=Main_Page&amp;diff=1178</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://pwiki.pic.es/index.php?title=Main_Page&amp;diff=1178"/>
		<updated>2025-01-10T11:52:29Z</updated>

		<summary type="html">&lt;p&gt;Eriksen: /* Experiments */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Getting started ==&lt;br /&gt;
* [[PIC description|PIC in an image]]&lt;br /&gt;
* [[PIC account|Get a PIC account]]&lt;br /&gt;
* [[PIC_User_Manual | User manual]]&lt;br /&gt;
&lt;br /&gt;
== Services ==&lt;br /&gt;
* [[HTCondor]]&lt;br /&gt;
* [[Storage]]&lt;br /&gt;
* [[JupyterHub]]&lt;br /&gt;
* [[Gitlab]]&lt;br /&gt;
* [[CosmoHub]]&lt;br /&gt;
* Spark:&lt;br /&gt;
** [[Spark on Hadoop|on Hadoop]]&lt;br /&gt;
** [[Spark_on_farm|on HTCondor]]&lt;br /&gt;
* [[Transferring data to/from PIC]]&lt;br /&gt;
&lt;br /&gt;
== Experiments ==&lt;br /&gt;
&lt;br /&gt;
* [[Euclid]]&lt;br /&gt;
* [[AGN ICE]]&lt;br /&gt;
* [[ICFO]]&lt;br /&gt;
&lt;br /&gt;
== More technical information ==&lt;br /&gt;
&lt;br /&gt;
* [[Storage Department]]&lt;/div&gt;</summary>
		<author><name>Eriksen</name></author>
	</entry>
	<entry>
		<id>https://pwiki.pic.es/index.php?title=Galfit&amp;diff=1150</id>
		<title>Galfit</title>
		<link rel="alternate" type="text/html" href="https://pwiki.pic.es/index.php?title=Galfit&amp;diff=1150"/>
		<updated>2024-04-25T08:59:04Z</updated>

		<summary type="html">&lt;p&gt;Eriksen: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;For quickly getting Galfit working on PIC, do the following: &lt;br /&gt;
Add the lines:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
export LD_LIBRARY_PATH=/data/agn/scratch2/eriksen/galfit_depend/:$LD_LIBRARY_PATH&lt;br /&gt;
export PATH=$PATH:/data/agn/scratch2/eriksen/galfit/bin&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
to your &amp;quot;~/.bashrc&amp;quot; file, then log out and log in again. Afterwards the following works:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
galfit /data/agn/scratch2/eriksen/galfit/galfit-example/EXAMPLE/galfit.feedme&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;width: 700px;&amp;quot;&amp;gt;&lt;br /&gt;
A more detailed description: &amp;lt;br&amp;gt;&lt;br /&gt;
The problem with Galfit is that the code was last distributed on April 23, 2013&lt;br /&gt;
in binary form. Downloading only the binary (Redhat/Enterprise 64) causes a&lt;br /&gt;
problem with a missing libnsl library. This note explains how to get Galfit&lt;br /&gt;
working at PIC.&lt;br /&gt;
&lt;br /&gt;
The fastest way is to download the Redhat version and compile libnsl from&lt;br /&gt;
source. This is relatively easy, but as indicated above, we have already&lt;br /&gt;
downloaded and compiled this dependency. Setting these variables in your&lt;br /&gt;
.bashrc makes the system find the binary ($PATH) and the library&lt;br /&gt;
($LD_LIBRARY_PATH). Note that you might need to specify these when&lt;br /&gt;
submitting jobs through HTCondor.&lt;br /&gt;
&lt;br /&gt;
For documentation, and in case someone wants their own installation or&lt;br /&gt;
later needs to reproduce this:&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
1) Download the Galfit binary release: &amp;lt;br&amp;gt;&lt;br /&gt;
wget https://users.obs.carnegiescience.edu/peng/work/galfit/galfit3-enterprise64.tar.gz&lt;br /&gt;
tar -xf galfit3-enterprise64.tar.gz&lt;br /&gt;
&lt;br /&gt;
2) Compile and install libnsl&lt;br /&gt;
&lt;br /&gt;
Download libnsl:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
wget https://github.com/thkukuk/libnsl/releases/download/v2.0.1/libnsl-2.0.1.tar.xz&lt;br /&gt;
tar -xf libnsl-2.0.1.tar.xz&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Compile libnsl:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cd libnsl-2.0.1&lt;br /&gt;
./configure --prefix /data/agn/scratch2/eriksen/galfit_depend&lt;br /&gt;
make&lt;br /&gt;
make install&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Eriksen</name></author>
	</entry>
	<entry>
		<id>https://pwiki.pic.es/index.php?title=Galfit&amp;diff=1149</id>
		<title>Galfit</title>
		<link rel="alternate" type="text/html" href="https://pwiki.pic.es/index.php?title=Galfit&amp;diff=1149"/>
		<updated>2024-04-25T08:58:27Z</updated>

		<summary type="html">&lt;p&gt;Eriksen: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;For quickly getting Galfit working on PIC, do the following: &lt;br /&gt;
Add the lines:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
export LD_LIBRARY_PATH=/data/agn/scratch2/eriksen/galfit_depend/:$LD_LIBRARY_PATH&lt;br /&gt;
export PATH=$PATH:/data/agn/scratch2/eriksen/galfit/bin&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
then the following works:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
galfit /data/agn/scratch2/eriksen/galfit/galfit-example/EXAMPLE/galfit.feedme&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;width: 700px;&amp;quot;&amp;gt;&lt;br /&gt;
A more detailed description: &amp;lt;br&amp;gt;&lt;br /&gt;
The problem with Galfit is that the code was last distributed on April 23, 2013&lt;br /&gt;
in binary form. Downloading only the binary (Redhat/Enterprise 64) causes a&lt;br /&gt;
problem with a missing libnsl library. This note explains how to get Galfit&lt;br /&gt;
working at PIC.&lt;br /&gt;
&lt;br /&gt;
The fastest way is to download the Redhat version and compile libnsl from&lt;br /&gt;
source. This is relatively easy, but as indicated above, we have already&lt;br /&gt;
downloaded and compiled this dependency. Setting these variables in your&lt;br /&gt;
.bashrc makes the system find the binary ($PATH) and the library&lt;br /&gt;
($LD_LIBRARY_PATH). Note that you might need to specify these when&lt;br /&gt;
submitting jobs through HTCondor.&lt;br /&gt;
&lt;br /&gt;
For documentation, and in case someone wants their own installation or&lt;br /&gt;
later needs to reproduce this:&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
1) Download the Galfit binary release: &amp;lt;br&amp;gt;&lt;br /&gt;
wget https://users.obs.carnegiescience.edu/peng/work/galfit/galfit3-enterprise64.tar.gz&lt;br /&gt;
tar -xf galfit3-enterprise64.tar.gz&lt;br /&gt;
&lt;br /&gt;
2) Compile and install libnsl&lt;br /&gt;
&lt;br /&gt;
Download libnsl:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
wget https://github.com/thkukuk/libnsl/releases/download/v2.0.1/libnsl-2.0.1.tar.xz&lt;br /&gt;
tar -xf libnsl-2.0.1.tar.xz&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Compile libnsl:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cd libnsl-2.0.1&lt;br /&gt;
./configure --prefix /data/agn/scratch2/eriksen/galfit_depend&lt;br /&gt;
make&lt;br /&gt;
make install&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Eriksen</name></author>
	</entry>
	<entry>
		<id>https://pwiki.pic.es/index.php?title=Galfit&amp;diff=1148</id>
		<title>Galfit</title>
		<link rel="alternate" type="text/html" href="https://pwiki.pic.es/index.php?title=Galfit&amp;diff=1148"/>
		<updated>2024-04-25T08:56:12Z</updated>

		<summary type="html">&lt;p&gt;Eriksen: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;tldr; &lt;br /&gt;
Add the lines:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
export LD_LIBRARY_PATH=/data/agn/scratch2/eriksen/galfit_depend/:$LD_LIBRARY_PATH&lt;br /&gt;
export PATH=$PATH:/data/agn/scratch2/eriksen/galfit/bin&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
then the following works:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
galfit /data/agn/scratch2/eriksen/galfit/galfit-example/EXAMPLE/galfit.feedme&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
What is the problem?&lt;br /&gt;
The problem with Galfit is that the code was last distributed on April 23, 2013&lt;br /&gt;
in binary form. Downloading only the binary (Redhat/Enterprise 64) causes a&lt;br /&gt;
problem with a missing libnsl library. This note explains how to get Galfit&lt;br /&gt;
working at PIC.&lt;br /&gt;
&lt;br /&gt;
The fastest way is to download the Redhat version and compile libnsl from&lt;br /&gt;
source. This is relatively easy, but as indicated above, we have already&lt;br /&gt;
downloaded and compiled this dependency. Setting these variables in your&lt;br /&gt;
.bashrc makes the system find the binary ($PATH) and the library&lt;br /&gt;
($LD_LIBRARY_PATH). Note that you might need to specify these when&lt;br /&gt;
submitting jobs through HTCondor.&lt;br /&gt;
&lt;br /&gt;
For documentation, and in case someone wants their own installation or&lt;br /&gt;
later needs to reproduce this:&lt;br /&gt;
&lt;br /&gt;
1) Download the Galfit binary release: &amp;lt;br&amp;gt;&lt;br /&gt;
wget https://users.obs.carnegiescience.edu/peng/work/galfit/galfit3-enterprise64.tar.gz&lt;br /&gt;
tar -xf galfit3-enterprise64.tar.gz&lt;br /&gt;
&lt;br /&gt;
2) Compile and install libnsl&lt;br /&gt;
&lt;br /&gt;
Download libnsl:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
wget https://github.com/thkukuk/libnsl/releases/download/v2.0.1/libnsl-2.0.1.tar.xz&lt;br /&gt;
tar -xf libnsl-2.0.1.tar.xz&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Compile libnsl:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cd libnsl-2.0.1&lt;br /&gt;
./configure --prefix /data/agn/scratch2/eriksen/galfit_depend&lt;br /&gt;
make&lt;br /&gt;
make install&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Eriksen</name></author>
	</entry>
</feed>