Client Project: Living Encyclopedia
My first major collaboration with Refik Anadol Studios for Dataland is now open for beta access: a novel AI experience.
Resuming from my previous posts about Dataland, creating physical space for digital art takes time. It has not been easy given the ambition: the world’s first museum dedicated to AI arts and data visualization. While brick and mortar takes time, digital art is native to the internet; to the surface of a screen. So, at the beginning of the year, the team decided to build out experiences that capture the spirit of what Dataland is aiming to achieve in person. In the lead-up to the museum’s launch, audiences can learn about the systems that will drive the art. Today, I am thrilled to share one such experience, now publicly available in beta: the Living Encyclopedia: Large Nature Model.
You can read the official tagline and information about what this is over on the Dataland site (click the red button below). For those of you in the art and technology space, you will likely see Refik Anadol, the museum’s artistic director and cofounder, talk and share information about the Living Encyclopedia on social media. To me, the Living Encyclopedia is a compilation of data (images, text, statistics, video, lidar, etc.) that Refik and his studio conserve and use to create his art. These data come from various sources and partners, and they center on nature. For the Living Encyclopedia, these data are presented as part software product, part artistic gesture. The result is a state-of-the-art interactive website that lets you converse with an AI in a way only Refik Anadol could imagine. It is a rare attempt to imbue technology with the intuition of artistic practice.
There are three modes. Research Mode presents a chat interface, not unlike ChatGPT, with the Large Nature Model. It is surrounded by a visual of the model’s latent space and different data markers, like weather, species, images, and sounds, to guide the conversation in open ways. Chat Mode is a prompt box that generates photo-like images of both real and imagined species. Lastly, Dream Mode is a lean-back, fly-through tour of the Large Nature Model’s latent space, or “brain”.
Conceptually, it ticks all the boxes of topics I have been interested in and writing about in this newsletter over the last four years. Technically, it is the culmination of mastering the development of single-page applications. It uses contemporary build and distribution techniques to load millions of datapoints into your session in real time. It renders spatial audio based on the markers in the latent space and continuously monitors memory load to run on mobile devices and desktop computers alike. The development time was not years in the making, but it has taken my entire 20-year journey into computer programming to deliver this anomaly: part software product, part artistic gesture.
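To make the memory-monitoring idea concrete, here is a minimal, hypothetical sketch of how a web app might step its render quality down as it approaches a memory budget. The names (`QualityTier`, `pickTier`) and the thresholds are my own illustration, not code from the Living Encyclopedia.

```typescript
// Hypothetical sketch: adaptive quality tiers driven by a memory budget.
// A real app would poll an estimate of allocated buffers (or, in Chrome,
// the non-standard performance.memory API) and pick a tier each frame.

type QualityTier = "high" | "medium" | "low";

// Choose a render tier from the fraction of the memory budget in use.
function pickTier(usedBytes: number, budgetBytes: number): QualityTier {
  const ratio = usedBytes / budgetBytes;
  if (ratio < 0.5) return "high";   // plenty of headroom: full point cloud
  if (ratio < 0.8) return "medium"; // trim datapoints, smaller textures
  return "low";                     // near the budget: minimal geometry
}

// Example: a phone with a small budget lands in a lower tier than a
// desktop with the same data loaded, so one codebase serves both.
```

The appeal of a pure function like this is that the degradation policy is trivially testable, independent of how the memory figure is actually measured.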
You can sign up through the link above. We will be letting in new testers throughout the coming days to preview the Living Encyclopedia before it eventually becomes part of the museum’s membership benefits.
—Jono