GLAM Jupyter Notebooks

We are pleased to introduce our collection of Jupyter notebooks based on datasets from GLAM (Galleries, Libraries, Archives and Museums) institutions. The International GLAM Labs community, the book Open a GLAM Lab, and the labber Tim Sherratt with his GLAM Workbench have been a great inspiration for this project. Additional information about the webinar "Setting Up A GLAM Workbench In Your Library", organised by LIBER's Digital Humanities Working Group, is provided here.

The notebooks are available on GitHub, classified by type of project: images, Linked Open Data, and metadata and text. Additional notebooks, provided by ONB Labs and developed by Stefan Karner, have been integrated into the collection.

The notebooks are also citable: the collection has been deposited in Zenodo and assigned a DOI.

To launch the notebooks in the cloud, click the Binder button. Each notebook is based on a dataset provided by a GLAM institution.

Have fun!

This notebook analyses the ONB Labs historic postcards and generates individual colour swatches from the images available via IIIF. It is based on a Jupyter notebook by Laura Wrubel that clusters images from the Library of Congress by colour, and is therefore a good example of how openly licensed code helps your peers!
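The colour-swatch idea can be sketched with a toy k-means clustering of pixel values. This is a minimal NumPy-only illustration, not the notebook's actual code: the real notebook fetches postcard images via IIIF, whereas here a synthetic image with three colour bands stands in for a postcard, and `dominant_colours` is a hypothetical helper name.

```python
import numpy as np

def dominant_colours(img, k=3, iters=20, seed=0):
    """Toy k-means: cluster RGB pixels and return k swatch colours."""
    rng = np.random.default_rng(seed)
    pixels = img.reshape(-1, 3).astype(float)
    # Initialise the centres from randomly chosen pixels.
    centres = pixels[rng.choice(len(pixels), k, replace=False)]
    for _ in range(iters):
        # Assign every pixel to its nearest centre.
        dists = np.linalg.norm(pixels[:, None] - centres[None], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centre to the mean of its assigned pixels.
        for i in range(k):
            if (labels == i).any():
                centres[i] = pixels[labels == i].mean(axis=0)
    return centres.astype(int)

# Synthetic "postcard": three solid colour bands.
img = np.zeros((30, 30, 3), dtype=np.uint8)
img[:10] = (200, 30, 30)    # red band
img[10:20] = (30, 200, 30)  # green band
img[20:] = (30, 30, 200)    # blue band
swatches = dominant_colours(img, k=3)
```

Each row of `swatches` is one RGB swatch colour; a real pipeline would run this over every IIIF image in the collection.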

This notebook filters the ONB Labs historic postcards for landscapes with mountains. Check it out to discover what other interesting information hides in the library's metadata.

This notebook extracts a dataset as a CSV file from a digital collection described using MARCXML files. It uses a dataset from the Moving Image Archive catalogue.
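The MARCXML-to-CSV step can be sketched with the standard library alone. This is an illustrative example, not the notebook's code: the inline record and its field values are invented, while the real notebook reads records from the Moving Image Archive catalogue.

```python
import csv
import io
import xml.etree.ElementTree as ET

# A minimal, invented MARCXML record for illustration.
MARCXML = """<collection xmlns="http://www.loc.gov/MARC21/slim">
  <record>
    <datafield tag="245" ind1="0" ind2="0">
      <subfield code="a">Glasgow trams</subfield>
    </datafield>
    <datafield tag="260" ind1=" " ind2=" ">
      <subfield code="c">1962</subfield>
    </datafield>
  </record>
</collection>"""

NS = {"m": "http://www.loc.gov/MARC21/slim"}

def subfield(record, tag, code):
    """Return the first matching subfield value, or an empty string."""
    el = record.find(f"m:datafield[@tag='{tag}']/m:subfield[@code='{code}']", NS)
    return el.text if el is not None else ""

# Pull the title (245$a) and date (260$c) out of every record.
rows = []
root = ET.fromstring(MARCXML)
for record in root.findall("m:record", NS):
    rows.append({"title": subfield(record, "245", "a"),
                 "date": subfield(record, "260", "c")})

# Serialise the extracted fields as CSV.
out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=["title", "date"])
writer.writeheader()
writer.writerows(rows)
csv_text = out.getvalue()
```

Against a real file, `ET.fromstring` would be replaced by `ET.parse` and the row dictionary extended with whichever MARC fields the collection describes.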

This notebook retrieves book covers from a Linked Open Data repository to create a composite image.

This notebook uses the British National Bibliography (BNB) Linked Data Platform to retrieve places of publication and creates a map based on GeoNames and Wikidata.
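The mapping step boils down to flattening SPARQL query results into coordinates. The sample below hand-writes a response in the standard SPARQL 1.1 JSON results format; in the notebook this JSON would come from the BNB endpoint, with coordinates resolved via GeoNames or Wikidata, and the variable names `place`, `lat`, and `long` are assumptions for illustration.

```python
# A sample response in the SPARQL 1.1 Query Results JSON Format.
results = {
    "results": {
        "bindings": [
            {"place": {"type": "literal", "value": "London"},
             "lat": {"type": "literal", "value": "51.5074"},
             "long": {"type": "literal", "value": "-0.1278"}},
            {"place": {"type": "literal", "value": "Edinburgh"},
             "lat": {"type": "literal", "value": "55.9533"},
             "long": {"type": "literal", "value": "-3.1883"}},
        ]
    }
}

def to_points(sparql_json):
    """Flatten SPARQL JSON bindings into (place, lat, lon) tuples."""
    points = []
    for b in sparql_json["results"]["bindings"]:
        points.append((b["place"]["value"],
                       float(b["lat"]["value"]),
                       float(b["long"]["value"])))
    return points

points = to_points(results)
```

The resulting tuples can then be plotted as map markers, for example with folium.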

This notebook shows how to explore the editions of Les Fleurs du mal by Baudelaire using network graphs.

This notebook introduces Smithsonian Open Access and shows how to apply computer vision methods for face detection.


This notebook extracts a dataset from the Europeana IIIF APIs. It performs an automatic search, retrieves the manifests from the IIIF server, and creates a dataset with the metadata as a CSV file.
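Turning a manifest into a CSV row mostly means flattening its `label` and `metadata` pairs. The snippet below uses a trimmed, invented IIIF Presentation 2.x manifest (URL and field values are placeholders); the notebook itself fetches real manifests from the Europeana IIIF APIs.

```python
import csv
import io

# A trimmed, invented IIIF Presentation 2.x manifest for illustration.
manifest = {
    "@id": "https://iiif.example.org/manifest/1",
    "label": "Sample newspaper issue",
    "metadata": [
        {"label": "date", "value": "1910-05-01"},
        {"label": "language", "value": "fr"},
    ],
}

def manifest_row(m):
    """Flatten a manifest's label and metadata pairs into one CSV row."""
    row = {"id": m["@id"], "label": m["label"]}
    for pair in m.get("metadata", []):
        row[pair["label"]] = pair["value"]
    return row

# One manifest -> one CSV row; a real run would loop over search results.
row = manifest_row(manifest)
out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=list(row))
writer.writeheader()
writer.writerow(row)
csv_text = out.getvalue()
```

Real manifests vary: `label` and `metadata` values can be lists or language maps, so production code needs a bit more normalisation than this sketch.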

This notebook extracts a dataset as a CSV file based on La Russie illustrée, a periodical comprising 15 volumes and 748 issues. The digitised content can be retrieved from the UGent libraries.

This notebook is an example of topic modelling based on digitised volumes of English, Scottish, and Irish theatrical playbills dated between 1600 and 1902.
