Lectures

Beyond the Canvas: Multimodal Interpretation of Art Masterpieces via Google Nano Banana 2

Aldis Ērglis

How does a cutting-edge vision model "experience" the brushstrokes of Van Gogh or the symmetry of Da Vinci? This session investigates the frontier of art analysis using Nano Banana 2, moving past traditional metadata into the realm of generative insight. Using advanced multimodal prompting, we demonstrate how GenAI can synthesize historical context, analyze color theory, and interpret complex iconographies in real time. We explore the model’s ability to detect subtle emotional undertones and stylistic anomalies that define a master’s "visual signature." This project showcases a shift from rigid data processing to fluid, intelligent dialogue with art, demonstrating that GenAI is not just a tool for creation, but a sophisticated engine for cultural deconstruction and archival discovery.

Aldis Ērglis is a seasoned professional in the IT industry with a long-standing interest in the ways IT technologies can help companies achieve their strategic goals. He thrives on staying at the forefront of innovation, understanding the impact of Intelligent Automation, AI/ML, and Data & Analytics. Over a 25+ year career, Aldis has worked in a range of roles in the software development and engineering consulting domain, both as an expert and in management positions at different levels. He has always been active in building local technology communities. His passion is data visualization.

Computer as Archive, Computer as Agent: Tracing the Evolution of the Digital Humanities from the 1960s to an Uncertain Future

Tessa Gengnagel

In order to understand the current state of the digital humanities, we must look towards its past. Although some histories have been written, much remains undiscovered and underdiscussed. This lecture will propose a narrative of the field’s trajectory through the lens of ‘ideologies of knowledge work’. Starting with the international symposium for literary and linguistic computing in Tübingen in 1960, the lecture will spotlight both global and local developments with a particular focus on discourses split between East and West Germany during the Cold War. Leading up to present times, the lecture will reflect on meaning-making in DH and the challenges that arise from frameworks of operation beyond academic control. This includes agentic AI and the question of interdependence.

Dr. Tessa Gengnagel is managing co-director of the Cologne Center for eHumanities (CCeH) at the University of Cologne. After completing her B.A. in History and Latin Philology of the Middle Ages at the University of Freiburg, she studied European Multimedia Arts & Cultural Heritage Studies at the Universities of Cologne and Graz. During her M.A., she began working at the CCeH in 2013 as a student assistant. She defended her doctoral thesis, supervised by Prof. Manfred Thaller, in January 2021. It was published as a monograph in February 2024 and awarded with the Offermann-Hergarten-Prize. Her research interests are the digital scholarly edition of multimodal cultural heritage, modelling theory, and the epistemology and history of the digital humanities.


Decoding 18th-century British Masonic Print Culture: Press Trends, Publication Networks, and Constitutional Authorship Attribution

Róbert Péter

In this lecture I present a set of case studies on 18th-century British Freemasonry. I show how combining newspaper text analysis with metadata visualisations—using, for example, topic modelling and n-gram analysis—can surface previously unnoticed patterns. This approach led to new discoveries, including an influential theatrical performance and a Masonic newspaper editor who helped shape public perceptions of the movement for decades. I also discuss joint work with Alejandro N. Jawerbaum on the 1723 Constitutions, where stylometric analysis shows how authorship attribution can challenge long-standing historical assumptions, and why cross-checking results across different methods and parameter choices leads to more reliable conclusions. Throughout, I will share practical lessons—and the failures—that made these workflows better.
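As a minimal illustration of the n-gram analysis mentioned above (the sample sentence and counts here are invented, not drawn from the actual newspaper corpus), bigram frequencies can be computed with a few lines of Python:

```python
from collections import Counter

def ngrams(tokens, n=2):
    """Return the list of n-grams (as tuples) from a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

# Hypothetical snippet standing in for an 18th-century newspaper text
text = "the grand lodge of the free masons of the city"
tokens = text.split()

# Count how often each adjacent word pair occurs
bigram_counts = Counter(ngrams(tokens, 2))
print(bigram_counts.most_common(2))  # the most frequent word pairs first
```

In a real workflow the same counting is applied across thousands of digitised newspaper issues, and shifts in frequent phrases over time are what surface the "previously unnoticed patterns" the lecture discusses.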

Róbert Péter is Associate Professor in the Department of English at the University of Szeged, Hungary, where he is also the founding head of the Digital Humanities Laboratory. He holds Master’s degrees in English and Mathematics. His research centres on eighteenth-century studies, digital humanities methods, and the development of research tools, including AVOBMAT (Analysis and Visualization of Bibliographic Metadata and Texts). He is a founding editor of Digitális Bölcsészet (Digital Humanities) and a member of the DARIAH Bibliographic Data Working Group. He also served as general editor of the five-volume primary source collection British Freemasonry, 1717–1813 (Routledge, 2016).

Structured Data Approaches to Historical Collections: Methods and Insights

Sonja Dorfbauer, Simon Mayer

This lecture showcases three projects using structured data to explore historical collections. Participants will see how interactive Jupyter notebooks, machine learning-assisted OCR, and data-driven visualization can transform textual and image-based sources, from pamphlets to private libraries. Each project approaches different materials and research questions, yet together they demonstrate the diverse ways structured datasets and computational methods can illuminate cultural heritage. The lecture emphasizes both common methodological frameworks and the unique challenges of analyzing and visualizing varied historical sources, offering a practical and engaging perspective on the potentials of structured data in digital humanities research.

Sonja Dorfbauer is a software developer at the ONB labs. She develops and refines technically driven Jupyter notebooks that build a bridge between computational methods and digital humanities.

Simon Mayer is a software developer at the Austrian National Library. He works on the design, development and implementation of projects that leverage artificial intelligence and machine learning for the cultural heritage sector.

Workshops 

Data Cleaning, Analysis, and Visualization with OpenRefine and Orange

Lars Kjær

This two-day workshop covers cleaning, analysing, and visualising datasets using two open-source tools: OpenRefine and Orange Data Mining.

On the first day, we focus on messy datasets that may be easy for humans to read but difficult for computers to process, and learn how to improve them using OpenRefine. OpenRefine is an open-source, practical tool for improving data quality in Excel or CSV files.

On the second day, we focus on well-organised datasets for analysis and visualisation using Orange Data Mining. Orange is also open-source and helps users understand data science concepts such as filtering, analysis, visualisation, and machine learning; it can also assist with everyday data tasks.

Lars Kjær is Special Advisor in Digital Humanities at Copenhagen University Library | Royal Danish Library. He holds an academic background in history from the University of Copenhagen and has completed examinations in Digital Data Analysis and Data Mining from the IT University of Copenhagen. Lars is skilled in Python programming, linguistic software, natural language processing, text and data mining, photogrammetry, and GIS tools. Through his work in facilitating workshops and providing guidance at Copenhagen University Library, he assists students and researchers at Copenhagen University in acquiring new digital humanities skills. Additionally, by building new cultural heritage datasets, he helps the Royal Danish Library transform its collections into research data packages.

Network Analysis for Humanists

Giovanni Pietro Vitali

This course is designed to introduce participants to the fundamental concepts of network analysis. It is structured into three main sections, progressively guiding learners from basic examples to practical application using Gephi, a network visualization tool.

Examples of Network Analysis – this section provides an introduction to network analysis through real-world examples. Participants will explore various types of data visualization applied to network structures.

Preparing Your Data – data preparation is a crucial step in network analysis. This part covers formatting data for visualization and ensuring its usability in network modeling.

Building Your Own Network – a hands-on tutorial on creating a network using Gephi. Participants will learn how to visualize and analyze network structures through the software.
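To give a concrete sense of the data preparation step, the sketch below (with invented correspondents and letter counts, purely for illustration) uses only the Python standard library to turn a list of relationships into an edge-list CSV in the Source/Target/Weight/Type layout that Gephi's spreadsheet importer recognises:

```python
import csv
from collections import Counter

# Hypothetical pairs of correspondents; a repeated pair becomes a heavier edge
letters = [("Ada", "Charles"), ("Ada", "Mary"),
           ("Charles", "Mary"), ("Ada", "Charles")]

# Collapse duplicate pairs into weighted edges
weights = Counter(letters)

with open("edges.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["Source", "Target", "Weight", "Type"])
    for (source, target), count in weights.items():
        writer.writerow([source, target, count, "Undirected"])
```

The resulting file can then be loaded through Gephi's spreadsheet import as an edges table, which is the starting point for the hands-on visualization part of the course.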

Giovanni Pietro Vitali is Associate Professor in Cultural History and Digital Humanities at Versailles Saint-Quentin-en-Yvelines University – Paris-Saclay University. Previously he was Marie Curie Research Fellow at University College Cork in collaboration with the University of Reading and New York University. His MSCA project, Last Letters from the World Wars: Forming Italian Language, Identity and Memory in Texts of Conflict, dealt with a linguistic and thematic analysis of the last letters of people sentenced to death during the First and the Second World Wars. From 2014 to 2018, he worked in France as a lecturer of Italian Studies at the University of Lorraine and the University of Poitiers. In 2018 he became an associate researcher at the University of Oxford, where he serves as the Digital Humanities advisor to the Prismatic Translation project.

Using LLMs in Humanities Research via API

Valdis Saulespurēns

In this workshop, participants will learn how to access large language models via API and utilize them for bulk data analysis using Python. Through practical examples, we will explore prompt engineering techniques for tasks such as concept mining and named entity recognition in textual data. Additionally, we will examine challenges associated with historical digitized texts, including optical character recognition (OCR) errors, which may affect compatibility with language models. Participants will gain insights into how these models can be leveraged for error correction and translation, enhancing the usability of imperfect textual data.
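As a taste of what the workshop covers, the sketch below builds an entity-extraction prompt and parses a structured model reply. The endpoint URL, model name, and JSON schema are illustrative assumptions, not the workshop's actual materials; any OpenAI-compatible chat-completions endpoint could be substituted.

```python
import json
import urllib.request

API_URL = "https://api.example.com/v1/chat/completions"  # placeholder endpoint
MODEL = "some-model"                                     # placeholder model name

def build_ner_prompt(text):
    """Ask the model for named entities as machine-readable JSON."""
    return (
        "Extract the named entities (PERSON, PLACE, ORGANISATION) from the text below. "
        'Answer with JSON only, in the form {"entities": [{"text": "...", "type": "..."}]}.\n\n'
        f"Text: {text}"
    )

def parse_entities(reply):
    """Turn the model's JSON reply into (text, type) tuples."""
    return [(e["text"], e["type"]) for e in json.loads(reply)["entities"]]

def query_llm(prompt, api_key):
    """Send one chat-completion request (requires a real endpoint and key)."""
    payload = json.dumps({
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    req = urllib.request.Request(
        API_URL,
        data=payload,
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Offline example: parsing a reply the model might return
reply = '{"entities": [{"text": "Riga", "type": "PLACE"}]}'
print(parse_entities(reply))  # → [('Riga', 'PLACE')]
```

Looping `query_llm` over a folder of OCR'd texts and collecting the parsed tuples is the bulk-analysis pattern the workshop builds up to; the same request shape also works for error-correction and translation prompts.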

The workshop is designed for researchers, data analysts, and professionals in text analysis, digital humanities, and computational linguistics. Only a basic familiarity with Python is required, which can be gained by attending introductory workshops at the summer school or reviewing the provided preparatory materials.

Valdis Saulespurēns works as a researcher and developer at the National Library of Latvia. Additionally, he is a lecturer at Riga Technical University, where he teaches Python, JavaScript, and other computer science subjects. Valdis has a specialization in Machine Learning and Data Analysis, and he enjoys transforming disordered data into structured knowledge. With more than 30 years of programming experience, Valdis began his professional career by writing programs for quantum scientists at the University of California, Santa Barbara. Before moving into teaching, he developed software for a radio broadcast equipment manufacturer. Valdis holds a Master's degree in Computer Science from the University of Latvia. When not working or spending time with his family, Valdis enjoys biking and playing chess, sometimes even at the same time.


Data Visualization for Public-Facing Research

Anda Baklāne

The workshop introduces the basic principles of data visualization shared across data analytics and the digital humanities. It presents examples of visualizations and visual storytelling drawn from digital humanities projects, and discusses the methods used to create them. Exploratory and academic visualizations are often produced using tools such as Excel, Python, or R, where precision and accuracy are the primary goals. Academic work also tends to rely on static formats, as interactive visualizations are not always feasible. However, such plain, utilitarian graphs are often not sufficient to communicate data effectively to broader audiences or to support public-facing scholarship and science communication. The practical part of the workshop focuses on creating graphs, maps, networks, and interactive visual stories using the data visualization platform Flourish.

Anda Baklāne is a researcher and curator of digital research services at the National Library of Latvia. She teaches digital humanities, data analysis, and visualization at the University of Latvia. With a background in literary studies and philosophy, she has increasingly focused on promoting digital research skills in the humanities. Over the past decade, she has organized six summer schools and a hackathon dedicated to advancing these practices.