Art

Art works from our lab:

2025-10-04 - 'X FRAMES PER SPACE exhibition'

A collaborative exploration by LBI-NetMed and Vienna Contemporary Art Space into how scientific data and artistic perspectives shape our reality.

We are kicking off 2025 with a feature in the international PARNASS art magazine highlighting “X FRAMES PER SPACE – different logics, shared questions.” Organized by the Ludwig Boltzmann Institute for Network Medicine in partnership with Vienna Contemporary Art Space, this exhibition transformed 1,200 square meters into a “mycelial network” where art, science, and technology nourished one another.

The project, realized in cooperation with the LBG Open Innovation in Science Center and Vienna Art Week, featured contributions from our multidisciplinary team of experts in medicine, physics, and art. It examined the interaction between scientific data and artistic production, opening new pathways for future connections. While the exhibition has closed, the collaborative networks it sparked remain active and continue to resonate within our research.

Interdisciplinary Exhibition “X FRAMES PER SPACE” featured in PARNASS Art Magazine


Contributions by the lab

The puzzle of microanatomy, Andre Rendeiro

What are we made of? Is it the atoms in our body, the cells in our brain?

At the mesoscale (hundreds of microns to centimeters), our cells arrange themselves into microanatomical structures, each associated with a function in the respective organ: the mucosa of the stomach provides a barrier, and its muscle drives the movements of digestion; pancreatic ducts carry digestive juices out of the pancreas; the outer layer of the coronary artery provides support and elasticity.

Finding these microanatomical domains is not easy, but that's one of our projects. In this Frame, you get to participate! What organs are represented? Which domains are connected? What is their coordinated function?


Tissue turbulence, Shrestha Srivastava

The Starry Night at first glance depicts a serene village under a clear night sky, but on closer inspection it reveals Van Gogh's turbulent perception of the world.

Similarly, tissues can appear deceptively normal on whole-slide imaging. Only when we zoom in do the hidden details emerge: interstitial fat deposition, narrowed or damaged vessels, inflammatory infiltrates, cellular atrophy, and other subtle changes that reflect underlying biological processes.


Read more here.


2024-11-07 - 'The "Pile of Trash" - an AI installation at the EU-LIFE Utopia 2024 conference'

"The Pile of Trash" was an interactive AI art installation created jointly by the labs of Jörg Menche (LBI-NetMed, and CeMM) and ours for the EU-LIFE Community Meeting 2024, themed "Utopia 2024 — AI." The installation explored the role of artificial intelligence as an omnipresent, sensing entity — one that listens, reflects, and participates in the collective intellectual life of a scientific conference.

The centerpiece was a deliberately chaotic, techno-punk sculpture: a pile of approximately twelve monitors surrounded by tangled cables, exposed hardware, and LED light hoses cascading from the ceiling. At the start of the conference, the screens merely flickered — displaying scrolling code, static, and noise. But as the event progressed, the pile came alive.

AI as joint conscience

Throughout the conference, microphones captured the speakers' talks in real time. A local AI system transcribed the audio on the fly, and the meaning of the ongoing discourse was continuously extracted and transformed into two parallel outputs:

  • Ambient mood projection: The semantic content and emotional tone of the talks were translated into a subtle, evolving light- and projection-based stream cast across the room by several high-luminosity projectors. This ambient layer communicated the collective mood and thematic drift of the conference — an ever-shifting visual conscience that most participants sensed only subliminally.
  • Generative imagery on the pile: The transcribed content was progressively summarized by a large language model, which in turn drove continuous image generation. These AI-generated images — visual interpretations of the conference's intellectual content — gradually filled the monitors in the pile, replacing the initial static with a growing gallery of machine-imagined science. The pile of trash was becoming sentient.
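
The two parallel outputs above can be sketched as a minimal rolling pipeline. This is a hypothetical illustration only: `transcribe_chunk`, `summarize`, and `generate_image_prompt` are placeholder stand-ins for the actual local speech-to-text model, LLM summarizer, and image generator, none of which are specified in this text.

```python
from collections import deque

# Hypothetical stand-ins for the real models used in the installation:
# a local speech-to-text system, an LLM summarizer, and an image generator.
def transcribe_chunk(audio_chunk: str) -> str:
    return audio_chunk  # placeholder: the real system transcribed live audio

def summarize(transcripts: list[str]) -> str:
    return " / ".join(transcripts)  # placeholder: the real system used an LLM

def generate_image_prompt(summary: str) -> str:
    return f"abstract visualization of: {summary}"

class ConferenceStream:
    """Rolling window over the talks; drives both ambient and generative outputs."""
    def __init__(self, window: int = 3):
        self.recent = deque(maxlen=window)

    def ingest(self, audio_chunk: str) -> dict:
        self.recent.append(transcribe_chunk(audio_chunk))
        summary = summarize(list(self.recent))
        return {
            "ambient_mood": summary,                         # projected across the room
            "image_prompt": generate_image_prompt(summary),  # fed to the monitors
        }

stream = ConferenceStream()
out = stream.ingest("networks in medicine")
out = stream.ingest("AI and utopia")
```

The rolling window is one plausible way to keep the outputs tracking the "thematic drift" of the conference rather than any single talk.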
AI as participant

The AI was not merely a passive observer. At designated moments, session chairs could prompt the installation directly, posing questions to the AI entity. Drawing on its accumulated understanding of everything said during the conference, the AI responded — sometimes in real time, sometimes pre-prompted — effectively participating in the scientific discussion as a non-human panelist emerging from a pile of electronic debris.
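
A question-answering turn like this can be sketched as follows. Everything here is an assumption: the prompt wording, the context window of 50 segments, and the `ask_llm` callable are all hypothetical stand-ins for whatever local model the installation actually ran.

```python
def answer_as_installation(question: str, transcript_log: list[str], ask_llm) -> str:
    """Compose a prompt from the accumulated conference transcript and ask the
    model; ask_llm is a stand-in for the installation's actual local LLM."""
    context = "\n".join(transcript_log[-50:])  # most recent talk segments
    prompt = (
        "You are an AI entity emerging from a pile of electronic debris at a "
        "scientific conference. Based on the talks below, answer the chair's "
        "question.\n\n"
        f"Talks so far:\n{context}\n\nQuestion: {question}"
    )
    return ask_llm(prompt)

# Toy stand-in "LLM" that just echoes the last line of the prompt:
reply = answer_as_installation(
    "What themes dominated today?",
    ["network medicine", "AI and utopia"],
    ask_llm=lambda p: p.splitlines()[-1],
)
```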

AI as matchmaker

The installation also had a deeply social dimension. Before the conference, participants signed up by answering three seemingly random questions. Behind the scenes, their answers were embedded into a latent space using large language models. For scientist participants, we additionally mined their publication records, creating rich vector representations from which we computed pairwise distances and identified communities of intellectual affinity.
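
The embedding-and-community step can be sketched minimally. As a hedge: the toy bag-of-words vectors below stand in for the LLM embeddings actually used, and threshold-linked connected components stand in for whatever community-detection method the team applied; all names are illustrative.

```python
import math
from itertools import combinations

def embed(text: str) -> dict:
    """Toy bag-of-words vector; the installation used LLM embeddings instead."""
    vec = {}
    for word in text.lower().split():
        vec[word] = vec.get(word, 0.0) + 1.0
    return vec

def cosine_distance(a: dict, b: dict) -> float:
    dot = sum(a[w] * b[w] for w in a if w in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return (1.0 - dot / (na * nb)) if na and nb else 1.0

def communities(answers: dict, threshold: float = 0.5) -> list:
    """Link participants whose answers are close, then take connected components."""
    vecs = {name: embed(text) for name, text in answers.items()}
    neighbors = {name: set() for name in answers}
    for a, b in combinations(answers, 2):
        if cosine_distance(vecs[a], vecs[b]) < threshold:
            neighbors[a].add(b)
            neighbors[b].add(a)
    seen, groups = set(), []
    for name in answers:
        if name in seen:
            continue
        stack, group = [name], set()
        while stack:
            n = stack.pop()
            if n in group:
                continue
            group.add(n)
            stack.extend(neighbors[n] - group)
        seen |= group
        groups.append(group)
    return groups

# Illustrative participants and answers:
answers = {
    "ada": "networks and graphs",
    "grace": "graphs and networks",
    "alan": "painting with light",
}
groups = communities(answers)
```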

At registration, each participant received a badge featuring their name, affiliation, and a unique QR code — a discreet identifier linking them to their position in the latent space. During a dedicated interactive session, participants approached the pile of trash to scan their badge. The installation — via an old, airport-style thermal printer, in appropriately gritty techno-punk aesthetic — printed out a personalized slip: whom they should meet, where to find them, and an AI-generated conversation topic tailored to the pair's shared (or complementary) interests.

Participants then sought each other out. As proof of interaction, they scanned each other's QR codes using a phone app, sending these pairs to a central server. Throughout the event, a dedicated screen displayed the growing network of connections in real time — a living social graph of the conference. Participants could scan additional people, expanding their local network, and the most gregarious connectors were celebrated at the end.
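
The growing connection graph can be sketched as a simple undirected adjacency structure. This is a hypothetical reconstruction, not the lab's actual server code; the names and pair format are illustrative.

```python
from collections import defaultdict

def build_graph(scanned_pairs: list) -> dict:
    """Undirected social graph from mutually scanned QR-code pairs."""
    adj = defaultdict(set)
    for a, b in scanned_pairs:
        adj[a].add(b)
        adj[b].add(a)
    return adj

def top_connectors(adj: dict, k: int = 1) -> list:
    """Participants with the most distinct connections (the 'gregarious connectors')."""
    return sorted(adj, key=lambda p: len(adj[p]), reverse=True)[:k]

# Illustrative scans reported by the phone app during the event:
pairs = [("ada", "grace"), ("ada", "alan"), ("grace", "alan"), ("ada", "kurt")]
graph = build_graph(pairs)
```

Counting distinct neighbors (a set per participant) rather than raw scans is one simple way to keep repeated scans of the same pair from inflating the leaderboard.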

Behind the scenes

The project was a substantial undertaking requiring microphones, 4–6 high-luminosity projectors, local GPU hardware for real-time audio transcription and inference, QR-coded badges, a dozen monitors, thermal printers, LED installations, and a generous amount of cables and techno-punk decoration. The Rendeiro lab led the software development — real-time transcription, LLM-based summarization, latent space computation, image generation, the matchmaking algorithm, the QR-scanning app, and the network visualization server — while the Menche lab spearheaded the hardware assembly, physical design, and the atmospheric staging of the installation.

It was an enormous amount of work, and an enormous amount of fun — a genuine experiment in what happens when you let an AI not just analyze a conference, but inhabit it.

See also: EU-LIFE Community Meeting 2024 recap.

Read more here.