The Future of Text 3 [DRAFT]

Contents

Foreword 16

by Vint Cerf 16

Welcome 18

by Frode Hegland 18

Our work in VR 19

This Book as Augmented PDF 20

Editor’s Introduction 21

Andreea Ion Cojocaru 34

Borges and Vygotsky Join Forces for BOVYG, Latest Virtual Reality Start-up 34

Abstract 34

Body 35

Author’s Notes 37

Journal Guest Presentation ‘An Architect Reads Cognitive Neuroscience and Decides to Start an Immersive Tech Company’ : 13 May 2022 38

Q&A 55

Andy Campbell 70

Dreaming Methods - Creating Immersive Literary Experiences 70

Presentation (pre-recorded for the Symposium) 71

Annie Murphy Paul 74

Operationalizing the Extended Mind 74

Apurva Chitnis 76

Journal : Public Zettelkasten 76

Limitations today 77

Public Zettelkasten 77

Implementation 78

Challenges 78

Barbara Tversky 80

Journal Guest Presentation : Mind in Motion 80

Q&A 107

Bjørn Borud 134

Time, speed and distance 134

Computers and light speed 134

Signal strength and distance 135

The Drake equation 136

Our civilization 137

Bob Horn 138

Information Murals for Virtual Reality 138

Introduction: my recent work 138

My role as synthesizer 138

Examples of Information Murals 138

Overwhelmed by complexity? 140

Why am I here at this Symposium? 141

Text as idea chunks with subheads 141

Benefits of small idea chunks with subheads 141

Transition to other offerings 142

Assumption: improve human thinking 142

What can we do to move toward Einstein’s goal? 142

Problem: Show and link context 142

Show and link context…in Multiple Dimensions 143

Problem: Show process visually 143

Problem: build solid and supportive “scaffoldings for thinking” 144

Offer of help 144

Bibliography/Further Reading 144

Bob Stein 146

Journal Guest Presentation : 4 July 2022 146

Screenshots 160

Caitlin Fisher 162

Daveed Benjamin 164

Thoughts about Metadata 164

Cynthia Haynes & Jan Rune Holmevik 166

Teleprompting Élekcriture 166

Works Cited 176

Deena Larsen 180

Access within VR: Opening the Magic Doors to All 180

Dene Grigar & Richard Snyder 184

Metadata for Access: VR and Beyond 184

Abstract 184

Introduction: Proof of Concept 184

About The NEXT’s Extended Metadata Schema 185

Applying ELMS to VR Narratives 186

Final Thoughts 188

Acknowledgements 188

Bibliography 188

Eduardo Kac 190

Space Art: My Trajectory 190

Introduction 190

Ágora: a holopoem to be sent to Andromeda 190

Spacescapes 192

Monogram 193

The Lepus Constellation Suite 195

Lagoogleglyphs 196

Inner Telescope 199

Adsum, an artwork for the Moon 201

Conclusion 202

Fabien Benetou 204

Why PDF is the wrong format to bring text to XR and why the Web with proper provenance and responsive design from stylesheets is all we need 204

Fabien Benetou 208

The Case Against Books 208

Fabien Benetou 212

Interfaces all the way down 212

Fabien Benetou 214

Stigmergy Across Media 214

Fabien Benetou 216

Journal : Utopiah/visual-meta-append-remote.js 216

Frode Hegland 220

The state of my text art + the journey to VR 220

State of the my art 221

Editing 222

Research 224

Making it happen 225

Frode Hegland 226

The case for books 226

Robustness 226

Book Bindings 226

Digital Bindings 226

Future Books 227

Frode Hegland 228

‘Just’ more displays? 228

Stepping out 230

Size matters 230

Frode Hegland 234

Page to Page Navigation 234

Frode Hegland 236

Journal : Academic & Scientific Documents in the Metaverse 236

Jack Kausch 240

Why We Need a Semantic Writing System 240

Jad Esber 244

Monthly Guest Presentation : 21 February 2022 244

Dialogue 248

Closing Comments 270

Gavin Menichini 272

Journal Guest Product Presentation : 25 February 2022 272

Chat Log 299

Harold Thimbleby 302

Getting mixed text right is the future of text 302

The author’s experience of text 302

Interesting aside… 307

Mixed texts in single systems 307

Future text mixed with AI and … 309

Conclusions 311

Jamie Joyce 314

Guest Presentation : The Society Library 314

Dialogue 326

Jaron Lanier 354

Keynote 354

Q&A 360

Jim Strahorn 366

The Future of ... More Readable Books ... a Reader Point of View 366

The Problem 366

Objectives 367

Conclusions 371

Jonathan Finn 372

2D vs 3D displays in virtual worlds 372

Conclusion 373

Kalev Leetaru 374

[to be confirmed] 374

Ken Perlin 376

Closing Keynote: Experiential Computing and the Future of Text 376

Presentation 376

Q&A 389

Livia Polanyi 394

Virtual Vision 394

Lorenzo Bernaschina 396

Gems 396

Mark Anderson 400

Image Maps and VR: not as simple as supposed 400

Abstract 400

Background 400

The Problem Space 400

Display in 2D and bitmap (raster) vs. vector formats 401

The (HTML) Image Map 401

Raster vs. Vector Data 402

Issues for Presentation of Infographics in VR 403

Displaying image data in VR 403

All surfaces are not web displays 403

What is to be linked and where will the linked resource be found? 403

Legacy Files—re-mediating pre-existing resources 404

Current files—content designed for combined 2D/3D use 405

The nature of VR interaction 406

Tool support for linking and re-mediation 406

Conclusion 407

Mez Breeze 408

Artificial Intelligence Art Generation Using Text Prompts 408

Beginnings 408

The Stage 409

The Lowdown 410

The Impact[s] 411

The Rules 412

Conclusions 412

Michael Roberts 414

Metaverse Combinators: digital tool strategies for the 2020’s and beyond 414

Introduction 414

Programming using node-based languages 414

Combinatorial thinking 415

Meta tools 416

Information Hiding 417

Hyperparameters 417

Machine learning approaches 418

Moving forwards together 419

Conclusion 420

Omar Rizwan 422

Journal : Against ‘text’ 422

Patrick Lichty 428

Architectures of the Latent Space 428

Context 428

Content 429

Phil Gooch 432

Guest Product Presentation : Scholarcy 432

Dialogue 436

Peter Wasilko 454

Benediktine Cyberspace Revisited 454

Wexelblat’s Taxonomy of Dimensions 456

Linear Dimensions 456

Ray Dimensions 456

Quantum Dimensions 456

Nominal Dimensions 456

Ordinal Dimensions 457

Functional Dimensions 457

Visualizing, Editing, and Navigating Benediktine Cyberspaces 458

Visualization 458

Editing 458

Navigation 459

Comparing Objects 459

The DataProbe HUD — An Additional Possibility in VR 460

Future Work 461

Peter Wasilko 462

Putting It All Together 462

Future VR Systems Should Embody The Elements of Programming 462

Requisite Affordances for Productive Work in VR 462

The VR Pane 463

The Transcript Pane 463

The Command Line Interface Pane 464

Viewspecs 464

What Can We Specify with Viewspecs? 465

Examples of Driving Complex Visualizations with a Command Line Viewspec Domain Specific Language (DSL) 465

UI Support for Discovery of the Viewspec DSL 466

The Gestalt We Are Aiming At 466

Bibliography 466

Pol Baladas & Gerard Serra 468

There are two great points to be shared after our practical explorations: 468

Sam Brooker 470

Supplementary Material: Devaluing the Work and Elevating the Worker 470

Scott Rettberg 474

Cyborg Authorship: Humans Writing with AI 474

Timur Schukin 476

Multidimensional 476

Yiliu Shen-Burke 478

Introducing Softspace 478

  1. Introduction 478

  2. Design 479

  3. User 487

  4. Flow 488

Yiliu Shen-Burke 490

Journal Guest Presentation : Discussing Softspace 490

Yohanna Joseph Waliya 518

Post Digital Text (PDT) in Virtual Reality (VR) 518

Graffiti Wall on the Future of Text in VR 520

Tom Standage 520

Martin Tiefenthaler 520

Ken Perlin 520

Bernard Vatant 521

Anne-Laure Le Cunff 521

Stephan Kreutzer 522

Phil Gooch 522

Stephanie Strickland 523

David Lebow 523

Jim Strahorn 523

Esther Wojcicki 523

Barbara Tversky 523

Michael Joyce 524

Denise Schmandt-Besserat 524

Cynthia Haynes 524

David Jay Bolter 524

Johannah Rodgers 525

Graffiti Wall on the Future of Text in VR from Twitter 526

Nova 526

Noda - Mind Map in VR 526

Jimmy Six-DOF 527

Kezza 527

Conversations from the Journal 528

Conversation: Adam’s Experiment 528

Conversation: Experiments with Bob Horn Mural 533

Brandel’s Mural 534

Adam Mural with Extracted Dates 535

Conversation: USD (Universal Scene Description) 536

Stephen Fry 538

In closing: A Prediction 538

Appendix : History of Text Timeline 540

13.8 Billion Years Ago 541

250 Million–3.6 Million 541

2,000,000-50,000 BCE 542

50,000-3,000 BCE 542

4000 BCE 543

3000 BCE 543

2000 BCE 544

1000 BCE 545

0 CE 547

100 CE 547

200 547

300 547

400 547

500 548

600 548

700 548

800 548

900 549

1000 549

1100 549

1200 549

1300 549

1400 550

1500 551

1600 552

1700 552

1800 554

1810 554

1820 554

1830 555

1840 555

1850 555

1860 555

1870 556

1880 556

1890 557

1900 557

1910 558

1920 558

1930 559

1940 559

1950 560

1960 562

1970 565

1980 568

1990 573

2000 578

2010 580

2020 582

Future 583

Contributors to the Timeline 583

Gallery from the Symposium 584

Glossary 593

Endnotes 636

References 643

Visual-Meta Appendix 649

The Future of Text Volume III

December 9th 2022

All articles are © Copyright 2022 of their respective authors.

This collected work is © Copyright 2022 ‘Future Text Publishing’ & Frode Alexander Hegland.

Dedicated to Turid Hegland.

A PDF is made available at no cost and the printed book is available from ‘Future Text Publishing’ (futuretextpublishing.com), a trading name of ‘The Augmented Text Company LTD’, UK.

This work is freely available digitally, permitting any users to read, download, copy, distribute, print, search, or link to the full texts of these articles, crawl them for indexing, pass them as data to software, or use them for any other lawful purpose, without financial, legal, or technical barriers other than those inseparable from gaining access to the internet itself. The only constraint on reproduction and distribution, and the only role for copyright in this domain, should be to give authors control over the integrity of their work and the right to be properly acknowledged and cited.


Foreword

by Vint Cerf

For nearly a decade, the Future of Text group has focused on interactions with text as largely a two-dimensional construct. The interactions allowed for varied 2D presentations and manipulations: text as a graph, text with appendices for citation and for glossaries, text filtered in various ways. In the past year, the exploration of computational text has taken on a literal new dimension: 3D presentation and manipulation. One can imagine text as books to be manipulated as 3D objects. One can also imagine text presented as connected components in a 3D space, allowing for richer organization of context for purposes of authoring, annotation or reading. The additional dimension opens up a richer environment in which to store, explore, consume and create text and other artifacts including 3D illustrations and simulated objects. One can literally imagine computable containers as a part of the “text” universe: active objects that can auto-update and signal their status in a 3D environment. Some of these ideas are not new. The Defense Advanced Research Projects Agency (DARPA) funded a project called the Spatial Data Management System at the MIT Media Lab in which content was found in simulated filing cabinets arranged in a 3D space. One “flew” through the information space to explore its contents. What is new is the development of 3D headsets with sufficiently high resolution and sensing capability to eliminate the earlier proprioceptive confusion that led to dizziness and even nausea with extended use.

The virtual environment these devices create permits convenient manipulation of artefacts as if they existed in real space. One of the most powerful organizing principles humans exhibit is spatial memory. We know where papers are that are piled up on our desks (“about three inches from the top…”). VR environments not only exercise this facility but also allow compelling renderings of information, for example, highlighting relevant text objects in response to a search. Imagine walking in the “stacks” of a virtual library and having books light up because they have relevant information responsive to your search. One could assemble a virtual library of books (and other text artefacts) from online resources for purposes of preparing to engage in a research project. Could we call this an information workbench or machine shop? Because of the endless possibilities for rendering in virtual three-space, there seem to be few limits to a textual “holodeck” in which multiple parties might collaborate.

We are at a cusp enabled by new technology and techniques. The information landscape is open for exploration.

image

Figure 1. Vint Cerf @ The 11th Future of Text Symposium. Hegland, 2022.

Welcome

by Frode Hegland

Along with Vint Cerf, Ismail Serageldin, Dene Grigar, Claus Atzenbeck and Mark Anderson, I welcome you to ‘The Future of Text’ Volume 3, where we focus primarily on text in virtual environments (VR/AR) and text augmented by AI. In other words, text in 3D and text in latent space. This volume of The Future of Text includes:

You can read more about what Visual-Meta brings to metadata at visual-meta.info. This work will also be made available in other formats, including .liquid and JSON, for the purposes of developing text interactions; please get in touch if you would like any of these formats. Reader can be downloaded for free here: https://apps.apple.com/gb/app/reader/id1179373118

Editor’s Introduction

VR (including AR) is about to go mainstream and this has the potential to offer tremendous improvements to how we think, work and communicate.

There are serious issues around how open VR work environments will be and how portable knowledge objects and environments will be. Think Mac vs. PC and the web browser wars, but for the entire work environment.

The potential of text augmented with AI to improve the lives of individual users is also only now beginning to be understood, though AI has been used for years in various guises and under different names (ML, algorithms, etc.) to power fantastic services (speech understanding, speech synthesis, language translation and more), as well as social networks and ‘fake news’.

More important than the specific benefits working in VR will have is perhaps the opportunity we now have to reset our thinking and return to first principles to better understand how we can think and communicate with digital text. Douglas Engelbart, Ted Nelson and other pioneers led a ‘Cambrian Explosion’ of innovation in how we can interact with digital text in the 60s and 70s, giving us digital editing, hypertext links and so on. But once we, the public, felt we knew what digital text was (text which can be edited, shared and linked), innovation slowed to a crawl. The hypertext community, as represented by ACM Hypertext, has demonstrated powerful ways we can interact with text, far beyond what is in general use. Still, the inertia of what exists and the lack of curiosity among users have made it prohibitively expensive to develop and put into use new systems.

With the advent of VR, where text will be freed from the small rectangles of traditional environments, we can again wonder about the possibilities. This will unleash public curiosity as to what text can be once again.

To truly unleash text in VR we will need to re-examine what text is, what infrastructures support textual dialogue and what we want text to do for us. The excitement of VR fuels our imagination again – just think of working in a library, where every wall can instantly display different aspects of what you are reading, having the outlines, glossary definitions and images from the book framed on the wall, all the while being interactive for you to change the variables in diagrams and see connections with cited sources. This could be inspiring or distracting but the key is you can change it at a whim.

This is an incredibly exciting future once headsets get better (lighter and more comfortable, with better visual quality). Because this cannot happen without fundamental infrastructure improvements, what we build for virtual environments–VR–will benefit text in all digital forms. This is important.

The future of humanity will depend on how we can improve how we think and communicate, and the written word, with all its unique characteristics of being swimmable, readable at your own pace and so on, will remain a key to this. The future of text we choose will choose how our future will be written.

Why VR, Why Now?

My starting position is that VR, sometimes also called the ‘metaverse’ these days and ‘cyberspace’ before, is about to go mainstream.

This is based on the Meta Quest 2, which is available for the mass market and currently out-selling the Microsoft Xbox game consoles. It is just the start of what VR headsets will be able to offer. The view inside such a headset is already rock-solid: whatever environment is present, it looks like it is there, right in front of you. With Apple’s headset coming next year, and improvements coming along as we have seen with personal computers, smartphones and smartwatches, this will rapidly continue to improve to the point where the visual fidelity becomes high and the discomfort low.

The future is coming fast. It is worth emphasising that, in the same way the room-sized computer was not really a clear precursor to the smartphone, the current bulky, low-resolution, narrow-field-of-view devices do not illustrate what is coming: in the near future headsets will feel lightweight and the visual quality will approach photorealism–it will feel like the world is transformed–it will not feel like we are wearing a heavy headset.

What this will unleash we do not know, but what I do know is that we, as a wider community of authors and readers of text, need to get involved in thinking about–dreaming and fantasising about–what it can be. For starters, we will not be using headsets all the time, any more than we now only ever use a smartphone or a desktop/laptop. We will enter VR when we need to focus on something, similar to how we enter a movie theatre or turn on a large, flat-screen TV when we want to be immersed, or watch general video ‘content’ on all our devices.

The distinction between VR and AR will likely become different modes on the same device, but they will have very different uses. Where AR refers to the world, VR will refer to any world. There is also an interesting middle ground, where the view of the world is superfluous and is just there for a sense of place: the knowledge objects being interacted with are in a space, and the background could be anywhere. This is demonstrated in Yiliu Shen-Burke’s work, where the user can interact with a constellation of knowledge and the background is simply a background, even though it is a live video of the user's room. There is also what is referred to as ‘reverse AR’, where the whole room environment is synthetic but the main object in the room is real, as built by the team at Shopify to let shoppers try a chair and then look at the room as though they are at home. There is a lot of creativity as to where the boundaries will be, and it will only become more and more interesting.

We had a historic opportunity to re-think text in the 1960s, and now we have another. This is a once-in-a-lifetime, once-in-a-species point in time. We are only a few years away–if that–from VR headsets becoming commonplace. The dreams of Doug Engelbart and Ted Nelson, among other true pioneers, have not had a place to put their feet over the last few decades. There has not been a foundation of need from people for improved text interaction. Now there is. With VR, it’s easier to see that there are new ways of working. Quite simply, we have an opportunity to dream again. ‘VR’ won’t be ‘VR’ for long, the same as ‘hypertext’ became the web and then became just ‘online’. ‘VR’ will become ordinary very soon.

Why AI, Why Now?

The further assumption is that AI will continue to advance. What we are looking at is the emergence and improvement of automatic pattern recognition, classification, summarization, extrapolation, and natural language query-based information extraction for everything from speech to text and text analysis. We are also keeping an eye on the development of self-aware Artificial General Intelligence with a mixed-initiative conversational UI, since it never hurts to dream far into the future.

AI, if left unchecked, can present real dangers for society, as seen already in the basic AI algorithms which shape social media interactions and more.

AI can expand our understanding of creative expression. In this volume we have the experience of Mez Breeze who explores the art of AI and associated text-driven potentials†.

One useful way to think of AI is as a digital map. I came to think of this when my 5-year-old son started navigating for us while driving in Norway this summer. Since the map was not un-augmented paper but a digital map on an iPhone, he was helped by always knowing our location, and there was always a blue line suggesting where we should go, so he could tell me ‘right’, ‘left’ and which exit to take off a roundabout, in his youthful happy voice. The map did not dictate where we went; we could always choose a more scenic route if we felt like it, and the blue line would update its suggestions.

More than anything, AI has been largely ignored when it comes to text. I can rely on the Apple Watch I use to accurately understand my commands, which is quite mind-blowing. I have refined speech-to-text in my macOS word processor ‘Author’ to take advantage of Apple’s increasingly powerful API. Some software provides coloured grammar when required and some suggests changes to writing style. There are of course relatively brute-force AI analyses of masses of academic documents, and there are writing tools which will write based on supplied text, such as GPT-3, but I suspect this is really just the snowflake on the top of the iceberg of what is possible.

What live analysis can a knowledge worker hope for when writing? How about hitting cmd-? and getting a list of suggested next paragraphs (not the less-than-helpful help menu)? Maybe there are a few suggestions: one based on what the author has typed so far and the author’s own body of work, one based on what’s typed so far but including all known documents in the author’s field, and a third maybe also including what’s found on the web. This is the digital map approach, giving the user guidance, but not dictating. This is work currently undertaken by Pol Baladas on Fermat, for example.
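
To make the digital map idea concrete, here is a minimal sketch of how such a cmd-? suggestion list might be assembled. It assumes a hypothetical text-generation function, completeFrom, and illustrative corpus names; it is not the actual Fermat or Author implementation.

    // Sketch only: `completeFrom` stands in for whatever language-model backend
    // an editor might call; corpus names and the cmd-? wiring are illustrative.
    async function completeFrom(draftSoFar, corpus) {
      // Placeholder stub so the sketch runs; a real system would call a model here.
      return `…a suggested next paragraph (scope: ${corpus ? corpus.length + " documents" : "open web"})`;
    }

    async function suggestNextParagraphs(draftSoFar, author) {
      const scopes = [
        { label: "From your own work", corpus: author.ownDocuments },
        { label: "From your field", corpus: author.fieldDocuments },
        { label: "Including the web", corpus: null }, // null = no restriction
      ];
      // One suggestion per scope, offered side by side; like the blue line on a
      // digital map, the author is free to ignore all of them.
      return Promise.all(
        scopes.map(async (scope) => ({
          label: scope.label,
          text: await completeFrom(draftSoFar, scope.corpus),
        }))
      );
    }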

AI is both ‘just beyond the horizon’ and also becoming mundane, so it is valuable to try to understand, and then to revise our understanding of, how AI can augment our interactions with text.

The Future of Us, The Future of Text

2022 is the year of a continuing pandemic, along with economic collapse, inequality, a significant war in Europe which threatens the stability of countries near and distant, as well as the underlying climate change catastrophe which we now see starting to make an impact on our daily lives.

There is no question that if we are to survive, let alone thrive as a species, we need to improve the way we communicate and relate to each other. This will mean looking at how we can improve education, politics, scientific discourse and even how we can bring our spiritual practices into play to improve, quite simply, how we get along as people, how we develop shared goals and how we deal with conflict.

Much of our dialogue, from politics, law and international treaties to social media, lab reports, journal articles and personal chat, is in the form of text. I believe that we have to improve how we interact with textual knowledge; otherwise we will be manipulated by those who do, such as social media companies, and we will continue to be overwhelmed by the sheer volume of information. We cannot rely on face-to-face speech and video alone. We have to improve what text is, how we can interact with text and how we can represent text.

From its invention almost five and a half thousand years ago, the written word has proven remarkably powerful in augmenting how we think and communicate. The transition to digital text has transformed text, a medium which before becoming digital was primarily about fixity, about thoughts being securely placed on a substrate. When text became digital, this attribute largely vanished, with text now being interactive. A user could easily delete any text, cut, copy and edit the text freely, giving text a much more fluid character.

What was initially a revolution of editability, and soon after linkability, became part of our daily lives: the magic of what was previously referred to as ‘hypertext’ simply became ‘text’, and analog text, previously only referred to as ‘text’, became ‘print out’ or ‘hard copy’. The magic of digital text became mundane.

Other digital media continued to develop, however: digital images went from wireframes to photorealistic, and games went from abstract ‘asteroids’ to deeply immersive and interactive experiences. We collectively thought we knew what text was, and little innovation took place. However, as digital text proliferated at an astounding pace, overwhelming those trying to stay on top of research, social media companies and those seeking to influence popular and political opinion went to work creating powerful tools for textual persuasion. We got social media echo chambers with algorithms designed to provoke, to increase ‘engagement’ (and thus ad views resulting in greater revenue), and modern ‘fake news’ from the start of the war in Ukraine in 2014, when Russian intelligence flooded digital mass media and social networks with fake and real news to the point where it became difficult to discern what was actually going on. Fake news continued to influence people’s opinions while research documentation stayed barely digital, with few interactions afforded to the user. There are many issues to be discussed in this paragraph and I’d be very happy to go through them in person, but the point is simple: text interactions became sophisticated where there was an incentive, in the form of money and political control, to invest in them. Where the greatest benefit to the end user could have been seen, there has been little innovation or investment.

We had a historic opportunity to re-think text in digital form but we dropped the ball. We don’t have the ability to ‘fly through cyberspace.’ We have the ability to cut and paste in Word, click on one-way, one-destination, un-typed links and edit a document together in Google Docs. We could do more, much more. We could imbue all documents with rich and robust metadata. This is a personal issue for me. We could provide authoring and reading software as powerful as Apple Final Cut. We could have reached for the stars, but the market and the few companies making text-focused software decided on ‘ease of use,’ and we were left with big buttons to click on.

Improving not only VR Text or AI Text, but ALL Text

It is important to point out that the opportunity is not just about working in VR or using AI augmented text.

The real opportunity is that we will be able to rethink everything about digital text because the public’s imagination will be energised–all text can benefit from a re-think and new dreaming.

It is clear that while text in documents will continue to matter, it will not just be text ‘floating in space’.

It is also clear that better metadata will make text more usefully interactive on traditional digital displays as well.

This is a historic opportunity primarily because we can restart and think from first principles: how to connect people and how to help us think with symbols/text. Our planet and our species are facing serious threats, so it is important that we learn from the past and that we are not shackled by the past.

We need to look at how we can usefully extend our cognition to better think with other minds, as Annie Murphy Paul discusses in her book The Extended Mind [2] and in her talk in this book. As Jaron Lanier–the man who embodies VR–who presented the keynote at the Future of Text Symposium, puts it: ‘The solution is to double down on being human’.

The solution is at the same time to extend our mental faculties to really take advantage of the flexibility of representation and interaction these future environments will offer us. Just as we are today hamstrung by being tied to the models of paper documents, we must expand our minds in entirely new ways to get the most benefit out of what can now be created. This will mean building systems which connect with our physiology to learn to ‘read’ and ‘write’ in entirely new ways. Think how text seems entirely artificial if you take a human’s situation 100,000 years ago, but it seems natural today. Text is only lines on a substrate. What will be the future of text when the entire visual, aural–and soon haptic–field can be used for expression and impression?

What does it mean to be In VR?

Virtual environments will feel more like rooms or full environments than what we think of as textual ‘documents’ today. There will be intricate models of microscopic creatures for us to explore; we will be able to walk through cities: ancient, modern and futuristic. We will also be able to step into spaceships and explore entire planets and more. This will be exciting and valuable, and it will take teams of people a serious investment of time, energy and money to build these experiences. A great example is the work of Bob Horn, who extends murals into multiple dimensions: what at first glance is just an image shown large in VR becomes, on further interaction, so much more than it could have been if it was simply printed onto a wall.

We will also have new ways of telling stories, as Caitlin Fisher who works on the opportunities for more immersive storytelling in VR† discusses in this book. The opportunities are vast for what we can be in virtual environments but for this book and this project we are looking at text primarily, which will include many types of packages and experiences, one of which will remain a kind of book.

Documents in VR

One of the key questions we ask is: What is a document in virtual reality, and more specifically, what is an academic document in VR and what does it become with AI augmentations?

We look at academic documents as a special case since academia is a field connected by documents, and it is also a field where what is in the documents needs to be interacted with and connected.

This is distinct from commercial books, where the owners of the intellectual property have reason to restrain the use of the text; that is therefore a different strand of the future of text, one with constraints outside of what we are currently looking for. We are, by the nature of trying to look into the future and wish for what might be to augment how we think and communicate, dreamers, and as such our playground is information which is, to a large extent, free.

There are limits to online-only documents which are worth noting, since it is easy to consider virtual environments to be online. The first is addressability and the second is reliability. Imagine if you could only get a book at the library by knowing its location, as in its entry in the Dewey Decimal Classification system–and not by the title of the book or the author’s name(s). This is effectively what web locations are: you can locate information based on location, not by content or metadata. Academic citations, which simply present the document's metadata, such as title, author(s) names and date of publication, do not tell you where you can locate the document, but what information you need to locate it in many types of places, such as libraries and book shops. The second limitation is reliability, based on DNS (the web's domain name system), where documents cease to be available if there is non-payment of the DNS fees or if there is any technical issue with a specific server or set of servers. Many people exist in a tiny sliver of time, a few years before ‘now’ and with a few vague prods into the future to have an idea of their career advancement, prospective new home, the lives of their children and so on. Academics have to live in much longer timespans, almost no matter what field of study. Their research will include ‘up-to-the-minute’ knowledge but also access to what’s behind it. Similarly, academics have a duty to the future to make their work available long after they are gone.

Documents for virtual environments can draw on previous types of documents and extend them. There is no reason why they should not have the option to be primarily text but still have a spoken presenter available if the reader would like to hear a perspective. There is also no reason why they should not be compressible into a portable document form like we have today. In this volume of The Future of Text, we can see how Bob Stein looks at the book’s essence in digitally empowered form and extends large collections of knowledge.

Metadata Matters

The more we look at how to realise the incredible potential of text in VR and text augmented by AI, the more it becomes clear that it is better† metadata which is needed to make it happen.

It is better metadata which enables AI to make better analyses.

It is better metadata which makes text in virtual environments flexibly interactive.

Metadata is the data which makes data useful. A basic example is a document which can, but in practice in 2022 hardly ever does, contain embedded, or hidden, metadata to make the name of the author(s), the title and publication date known.

Visual-Meta, developed as part of the Future of Text Initiative (and which is also my PhD thesis result), includes this in the appendix in as simple a way as ‘author = {Name of Author}’ ‘title = {Name of Document}’ ‘month = {September}’ ‘year = {2022}’. This ‘self-citation’ metadata is what makes it possible to automatically cite the document, through a simple copy and paste, and to see it in a network of other documents, where the metadata is in the document itself and not in a separate database.
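
As a minimal sketch of how such self-citation metadata can be read back out of a document, the lines below pick up the ‘key = {value}’ pairs shown above and assemble a simple citation from them. The parsing function and the citation format are illustrative assumptions, not the Visual-Meta specification itself.

    // Illustrative sketch: extract 'key = {value}' self-citation fields from a
    // Visual-Meta style appendix. Anything beyond the pairs shown above is an
    // assumption, not part of the Visual-Meta specification.
    function parseSelfCitation(appendixText) {
      const fields = {};
      const pattern = /(\w+)\s*=\s*\{([^}]*)\}/g;   // matches: author = {Name of Author}
      let match;
      while ((match = pattern.exec(appendixText)) !== null) {
        fields[match[1]] = match[2];
      }
      return fields;
    }

    const appendix = `
      author = {Name of Author}
      title = {Name of Document}
      month = {September}
      year = {2022}
    `;

    const meta = parseSelfCitation(appendix);
    // The citation travels with the document itself, so copy and paste is enough:
    console.log(`${meta.author}. ${meta.title}. ${meta.month} ${meta.year}.`);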

Visual-Meta is my approach to rich, flexible and robust metadata, and I highlight it to highlight the issue of metadata; it is quite clear that much work needs to be done beyond what Visual-Meta enables.

All the multimedia objects are included in this so that they are flattened into 2D when published as a document and can be re-invigorated with all dimensions when viewed in VR. This includes spatial information about how the document should be shown, by default, in VR 3D space. It also includes all the chart information and image map data. Including image map data in the metadata in this way means that a document can contain a huge mural, shrunk down to a double-page spread in the document, which can then be viewed wall size, with all data and links intact, at will.
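
To illustrate the kind of information this implies, here is a sketch of what image-map and default-placement metadata for such a mural might look like. None of the field names below are defined by Visual-Meta; they are assumptions meant only to show that regions, labels, links and a preferred VR size can travel with the flattened image.

    // Illustrative only: hypothetical metadata for a mural that is printed as a
    // double-page spread but can be re-expanded to wall size in VR.
    const muralMetadata = {
      image: "mural-overview.png",
      physicalSize: { width: 6.0, height: 2.5, units: "metres" },   // preferred VR size
      defaultPlacement: { distance: 1.5, angle: 0 },                // in front of the reader
      regions: [                                                    // image-map data
        { rect: { x: 0.05, y: 0.10, w: 0.20, h: 0.15 },             // fractions of the image
          label: "Problem framing",
          link: "section-3" },
        { rect: { x: 0.55, y: 0.40, w: 0.25, h: 0.20 },
          label: "Timeline",
          link: "appendix-timeline" },
      ],
    };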

Since Visual-Meta was developed as my PhD thesis, I find I need to come to its defence and specify that adding the Visual-Meta appendix to documents is completely effortless for the author when the system supports it. What is put into the Visual-Meta is usually metadata which the authoring software is already aware of, such as headings, glossaries and glossary terms, references, and chart and graph information, but which is currently discarded on export/publishing. Visual-Meta simply keeps it and makes it accessible.

Reading documents with rich metadata included, and working with the documents to produce new knowledge, is more flexible and robust: You can choose what to view and you do not need to worry about transcription errors or data loss.

Scale of Change

Having considered some of the scenarios and aspects of working in a virtual environment, I hope you might agree that the difference between a laptop screen and working in VR will be as large as the difference between looking at the world through a small picture frame and putting the frame down to look at the world fully and richly. Personally, I think that, after a while, it will effectively be bigger than going from analog to digital, but only time will tell. It will be something new and it will be a fundamental part of our lives. “VR will never be the same as physical reality… We'll just live life across multiple realities. Each with their own physics, bodies & affordances.” says Andreea Ion Cojocaru.

Concerns

Some of the wonderful potentials above seem almost pre-ordained. But they are not. The only thing pre-ordained is that large companies will invest masses of resources to own this new environment and to create highly profitable cashflows, as it should be. There are issues around the use of VR, such as how walking around virtually can produce a feeling of nausea for some, whereas if you instead pull objects, such as a massive wall-sized mural, towards you with a gesture (such as pinch and pull), you will feel fine, even though visually it is the same impression to your eyes. These usability issues are most certainly important and that is why they are being looked at by the companies building the VR environments. What they are not focusing on is ownership and transferability:

Ownership & Transferability

Considering that what is happening is the creation of a whole new world, it is probably not a great idea for a few huge companies to own all of it. We need an ‘Internet’ for VR. We need open standards so that our information stays free for us to use as we see fit, and is not trapped in a corporately owned framework, as happened with the Microsoft Office formats, for example.

A simple dream would be to work on something on a traditional device, like a laptop, and to be able to don a headset, and take that information out of the screen and into the VR environment. But how can the VR environment know what is on your laptop's screen and how could any changes be communicated back?
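
One way such a hand-off could work, sketched under the assumption of a small relay server that both the laptop application and the VR client connect to (the address, message shapes and helper names below are made up for illustration):

    // Assumed setup: a user-run relay at ws://localhost:8080; message types are
    // illustrative. The standard WebSocket API is used on both sides.
    const relay = new WebSocket("ws://localhost:8080/workspace");

    // Laptop side: publish the object currently being worked on to the headset.
    function publishToHeadset(documentState) {
      relay.send(JSON.stringify({ type: "document", body: documentState }));
    }

    // VR side (same protocol): send edits made in the headset back to the laptop.
    relay.onmessage = (event) => {
      const message = JSON.parse(event.data);
      if (message.type === "edit") {
        applyEditLocally(message.body);
      }
    };

    function applyEditLocally(edit) {
      // Stub: a real application would merge the edit into its document model.
      console.log("Edit received from headset:", edit);
    }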

Questions we need to ask include: What would happen if the document/knowledge object you worked with in one VR room, where you gave it fancy interactions and powerful views, simply won’t render correctly in another room when you try to share it with colleagues? It could also happen that we repeat the mistakes of digital text over the last decade and end up with shiny and involving social media text but little with which to interact to help us think, only to share. We will need open, accessible and robust infrastructures to allow the VR world to flourish.

What We Are Doing

To help realise the potential of richly interactive text in a virtual and traditional environment, text which is directly manipulable and which can be interacted with through AI systems, we are doing the following:

augmentedtext.info

We also experiment with VR environments, where what we learn from experience continues to surprise us. On the positive side, it is impressive how stable the environments are, much more than we expected–when putting on the headset (we primarily use the Meta Quest 2), whatever environment we go into, it feels like we are really there; it does not wobble or feel off at all. There are small surprises which we need to take into consideration. For example, pulling a large mural towards you gives people significantly less motion sickness than if they simulate walking up to it, even though the visual display is practically identical. Furthermore, having lines in space to show relationships is quite annoying outside of very specific use-cases, as it feels almost like physical strings have been placed in your space. Similarly, text floating in space without a background can easily become very hard to read. Furniture is also an interesting issue since most people don’t have ‘VR Only’ rooms. Therefore the desk, chairs and other furniture must be taken into account when designing virtual rooms where the user can stand up and move.

The Bottom Line

At the end of the day I am asking you, if you are ‘sold’ on the idea that VR, or the ‘metaverse’ will become mainstream over the next few years, to consider what this truly could be to help us think and communicate, to help us work and learn–as well as how you can help inspire others to ask the same questions. Then I ask you to consider how we can keep this environment open and not as a series of corporate workrooms isolated from each other and the rest of our information.

The Invitation

In publishing this I am inviting you to join us in dialog about what text can and should be in an environment where text can be pretty much anything our imagination points to and implementation allows.

The Dream

The imagining and dreaming needed will be huge. It is exceptionally difficult to see and dream beyond a linear extrapolation of what we experience. We, therefore, need to support those who have the capacity to dream, in the spirit of Doug Engelbart, and foster dialogue for a broader community to dream together, and not simply fantasise, at a cartoon level, about a magic text which has no bearing on implementation. By this I mean purely shifting the act of reading and writing to artificial systems to somehow do the work for us. We need to augment ourselves, both by removing unnecessary hurdles and by reducing clerical work, such as the huge amount of effort placed on the cosmetic aspect of citations and formatting for journal articles.

The infrastructure to support the dreamt-up futures will need to be radically better than what we have now IF we want to have an open future for how we can interact with our knowledge and each other through the medium of the written word. The substrate of text used to be a plain material, such as paper or parchment, but now it is not the screen but everything behind the screen; the storage of the type, the metadata which makes the type useful and the means through which this can be shared openly and stored robustly.

Andreea Ion Cojocaru

Borges and Vygotsky Join Forces for BOVYG, Latest Virtual Reality Start- up

image

Figure 2. The Future of Text. Cojocaru, 2022.

Abstract

Can virtual reality reinvent text, revamp human communication, and chart a new course for us all? If there was ever any hope, it is in BOVYG. Investors are flooding into the seed round of this promising venture. The Guardian obtained a transcript of a private work session between Borges and Vygotsky. The discussion, centered on the process of concept formation and the mechanism through which words reflect reality, implies nothing short of a brand-new ontology.

For readers unfamiliar with the work of these two giants, we recommend at least a cursory reading of Borges’s Tlön, Uqbar, Orbis Tertius and Vygotsky’s Thought and Language before reading the transcript.

Body

[This transcript is based on a video recording. The capture is from BOVYG, a VR application Borges and Vygotsky are developing. The headset recording is Borges’s. We are not sharing this in a video format because the visuals are quite uninteresting. The entire conversation happens in what appears to be an empty scene with a white virtual box in the center.]

Borges: Vyg, this thing – do you see it? What’s this?

Vygotsky: It’s a box, B. I just put it there. The word “chair” is written on it because it’s supposed to represent the word and the concept. Let’s start with the simple stuff today, for a change.

Borges: Vyg, okay, but why are we starting with the end? This VR stuff is supposed to be a brand-new start. That’s the whole point, isn’t it?

Vygotsky: Of course. So we start with word and concept, then we work our way backwards, then, hopefully, forwards, and we see how things play out in here. We keep an eye out for different turns in the concept formation process.

Borges: Vyg, please. Look at this box and at this word on it. We are at THE END of the concept formation process. The process that got us into this mess to begin with! The world is simply not a grouping of objects in space. It is a heterogeneous series of independent acts. The world is successive and temporal. Idealized objects like “chair” should not be relied on. There shouldn’t be any fixed concepts to begin with. Instead, everything should be invoked and dissolved momentarily, according to necessity.

Vygotsky: B, sometimes I think that this predilection of yours towards subjective idealism is taking worrisome turns. Yes, Tlön, Uqbar, Orbis Tertius was brilliant, and you got them all wrapped around your finger. But this is serious work! We are not here to write another five-pager on magical realism. It is in virtual form, but this IS reality. More than that, this is the FUTURE of reality! Humans master themselves FROM THE OUTSIDE! The development of thinking is from the social to the individual. People first receive language, which leads to communicable concepts and world views. Language and world formation rely on stable concepts, not fleeting impressions that “dissolve”!

Borges: Vyg, what language do you see in here? This box with letters on it? What do these letters mean in here? Where is the chair? Can we sit on it? Is it leather? Do we sit on it by moving our butts downwards or perhaps upwards?

Vygotsky: What’s your point, B? Just get to the point!

Borges: Vyg, there are no objects or concepts, at least no permanent ones. Not in physical reality, and definitely not in here. Is a dog seen from the side and then seen from the back the same dog? Only if you rely on thinking processes that manipulate objects called “dogs”! Only if you need to – pointlessly if you ask me – extend existence and identity beyond the current moment and into some weird – and dangerous! – permanence. It’s all made up, Vyg, it really is…! And, in here, the lie is outright unbearable!

Vygotsky: What do you mean “in here”? What is so different “in here”?

Borges: Everything! Let’s take this box. Look at it from the side and look at it from the back. Is it the same box?

Vygotsky: Hmmm…

Borges: No! Of course not! Every second, this box is exactly 90 boxes!

Vygotsky: B, don’t go all techie on me. The only thing that matters is that we think this is the same box. Permanence and identity are necessary NOT fundamental.

Borges: What are they necessary for, Vyg?

Vygotsky: We need them to generalize, of course! We think by using concepts, encapsulated into words. Think of words as tools. That is how we can build thoughts on top of thoughts, using both our own words and those of other people.

Borges: Vyg, you are describing the labyrinth of abstractions we need to break out of! We are here to design the process that breaks us OUT of it!

Vygotsky: The labyrinth IS the process, B… Perhaps we can shift towards new ways of building the labyrinth, but we cannot exit it. There is nothing beyond it… Our functioning as human beings relies on this clear framework. You can call it a labyrinth if you wish.

Borges: This framework of yours, Vyg, is clear. Terribly clear. That’s precisely the problem. You forget that we are both Theseus AND the minotaur. As thought becomes verbal and speech becomes intellectual, as you so often like to say, we both trap and chase ourselves inside it. [Sighs for a while.] Let’s run this scenario with this box of yours in here.

Vygotsky: Which box?

Borges: This one, over here, with “chair” on it.

Vygotsky: From which side?

Borges: From this side!

Vygotsky: Now?

Borges: No, when I said it a second ago! Or… yes… now as well!

Vygotsky: From which side?

Borges: This!

Vygotsky: Now?

Borges: Now?!

…….

[We pause our transcript here. This almost monosyllabic conversation about the virtual box continues for another hour. Then they break for lunch. When they return, the conversation continues to be monosyllabic although a clear change in tone indicates that they are now past the disagreement related to the box. Our best explanation for this change in communication is that, similar to a process often described by Tolstoy, the closeness between the two, in combination with the strange affordances of the virtual medium, has enabled them to abbreviate their communication to the point where it is incomprehensible to the rest of us.]

Author’s Notes

Lev Vygotsky (1896 – 1934) was a Russian cognitive scientist, psychologist, constructivist and critical realist whose work focused on the internal mental structure of an individual. Methodologically, he focused on relationships, processes and levels of analysis. He is best known for sociocultural theory, a developmental school of thought focused on the relationship between thought and language as independent and dynamic processes in ontogenesis, phylogenesis, and within a cultural context. This dialogue speculates on Vygotsky’s position regarding language and virtual reality based on his book Thought and Language.

Jorge Luis Borges (1899 – 1986) was an Argentinian writer, essayist and translator known for his trademark themes: dreams, labyrinths, libraries, language and mythology. His stories, non-linear narratives that mix fact, fantasy, hoax and forgery, are generally considered to have reinvented modern literature. This dialogue speculates on Borges’s position regarding language and virtual reality based on his short stories Tlön, Uqbar, Orbis Tertius and Funes, the Memorious. Moreover, the entire conversation makes use of many of Borges’s literary techniques. Most of the time I stay close to what the main characters could have plausibly said in such a situation, but, like Borges in his own stories, I also diverge from that and use the two characters to pursue my own arguments. Hinted at by the fact that the footage was recorded in Borges’s headset, this is the kind of thing he would write.

Journal Guest Presentation ‘An Architect Reads Cognitive Neuroscience and Decides to Start an Immersive Tech Company’ : 13 May 2022

https://youtu.be/4YO-iCUHdog?t=678

Andreea Ion Cojocaru: Hi everyone. It's such an honour to be part of the group, and present to this group. Because this group is very different than the usual audiences that I speak to, I took the presentation in a very new direction. It's a bit of a risk in that I’m going much deeper than I’ve ever gone before in public in showing people the insides of how my method works. So part of what you will hear will be the messiness of what is a very active and sometimes stressful process for us at Numena. But hopefully, yes, there will be time at the end for you to ask questions, and for me to have the chance to clarify the aspects that were maybe a bit too unclear. Okay, with that mentioned I’m going to share my screen. All right. I just gave a title to this talk. This talk did not have a title until five minutes ago, and now it's called An Architect Reads Cognitive Neuroscience and Decides to Start an Immersive Tech Company. And this is pretty much what the story will be today.

I’m an architect. I have a master's degree in architecture. I’ve been in love with architecture and the idea of space-making for as long as I can remember. But there's a bit of a twist in my background in that, when I was young, I was learning letters by typing with my dad on a keyboard in the 80s, and I have this childhood relationship with computers and coding. And I’ve always been very passionate about philosophy. So a while back I discovered cognitive neuroscience and I began reading that from the perspective of an architect who can code and who is also an amateur philosopher. Reading it from this perspective – and I don't know how many people read cognitive neuroscience with this kind of background – gave me all sorts of ideas.

When I discovered AR and VR, and specifically VR, I just found this opportunity to start pursuing some of the ideas that had been floating around my mind from reading cognitive neuroscience for a while. So the company started about four years ago, and it's been a crazy ride.

But I’m not going to start with what the company is doing.

I’m going to start at the deepest depth that I’ve ever started a presentation. So I believe that for us to be able to successfully discuss these concepts in the end, I need to be very clear about what my background assumptions are. Then, I also believe I need to be clear about how I think those assumptions work or can be implemented.

So, the position part of the presentation. What are my assumptions? I want to propose first what's called ‘The Correspondence Theory of Truth’. This says that there is a reality out there, and its structure is homomorphic to our perceptions. What does this mean? It means that we don't really know what's out there, but we know that there is some correspondence between some sea of particles and radiation and whatever comes to our senses. In the history of human thought, this is a relatively new idea. And in everyday thinking and knowledge and culture, we still don't really take this seriously, as in, we still assume that we're seeing a chair, and the chair is brown, and we look outside the window and we see flowers and there's a certain colour. And that that reality is out there outside of ourselves. And even in reading a lot of the papers that are coming out of the scientific establishment, a lot of it is really not quite taking this proposition to heart that actually there is a huge gap between whatever that reality is and ourselves. And here I want to add a note that, actually, if you read works that are coming out from the computational branches of evolutionary theory, you will see that the correspondence theory of truth has refutations, and it has fascinating mathematical refutations. So there are actually people out there who believe that there is no homomorphism between whatever reality with a capital R is out there and our perceptions, that we might be completely imagining everything. But I will not go quite to that depth today.

So there's something out there but there's a gap between that thing out there and ourselves, our perceptions.

In practical terms, I like to make sense of this through what's called enaction theory. This was introduced by Varela and a few others in the 60s and 70s; I think the book called The Embodied Mind [4] was published in 1990. And basically, this starts to deal with the fact that this mapping between who we are, how we perceive the world, and the world is really not tight at all. And it's not just that it's not tight, but we're continuously negotiating what this relationship is. And the reason why embodied cognition, and the form called enactive cognition, is very important is because it triggered a dialogue across science and culture that was about escaping what's called the Cartesian anxiety. So for many centuries, especially European-centric thinking was based on this idea that there is the subject and the object, and they are two different things. That we have subjectivism, how things feel, and then there's objectivism, the world out there. And there are still a lot of struggles going on in a lot of fields to escape this Cartesian anxiety. It even goes into interesting discussions these days of what consciousness and qualia are, and of whether we have free will; this is also about free will and all of that. My particular stance is to embrace Varela's enactive cognition and to say there is no strict separation between who and what we are and the environment. We are defined by the environment and the environment defines us, and our entire organism is about negotiating this relationship. I know this is still a bit unclear, so I will just try to go a bit further into this. Basically, the proposition is that environments are shaped into significance, and these are quotes from The Embodied Mind by Varela: “Shaped into significance and intelligence shifts from being the capacity to solve a problem to the capacity to enter into a shared world of significance.” Or, “Cognition consists in the enactment or bringing forth of a world by a viable history of structural coupling.” So we become structurally coupled with the environment, and both our minds, our organism, and the environment are adjusted through this structural coupling. And one interesting example that he gives in the book is of bees and flowers. We don't know if bees evolved the way they are because they were attracted to flowers which offered them nourishment, or the other way around, that flowers evolved beautiful colours because there were these creatures called bees that were attracted to them. Varela proposes that it is neither, and that most likely flowers and bees evolved together, to work together. So there was a common evolution, because from the point of view of the bee, the flower is the environment, and from the point of view of the flower, the bee is the environment. So each is both environment and subject from a different kind of perspective. And in that context, they evolved together through this structural coupling.

This also ties back in terms of examples. To focus a little bit on examples now, if you're in the Macy papers from the first conferences on cybernetics in the 50s, they were very concerned with research on frogs, and I found that very interesting. So why were they so concerned with frogs? Because new research, at the time, showed that frogs cannot see large moving objects. Actually, they can technically see them, but their brain just does not process large objects. So a frog is very good at catching small moving things like mosquitoes, but a frog will get run over by a truck. And it's not because the eyes of the frog cannot perceive the truck, it's because the brain just doesn't process the truck. Large moving objects are not part of the frog's world. So that was actually very interesting and I think you can easily think of similarities, or start to have questions going through your mind, about what things out there, that are very much in the environment and very much exist, we might even see but just not perceive, because they're just not part of how we deal with the world and how we interact with the world; they're outside the structural coupling that we have formed with the environment. And, although this has been proved when it comes to frogs and many other kinds of organisms, we still have a hard time imagining that, when we look out the window, there might be things out there which our cognitive system is just ignoring, perhaps seeing but just ignoring, and I’ll bring up some examples later in this regard.

Another interesting thing is the ongoing research that's coming out about how the human eye perceives information. Here it turns out that, according to the latest studies, only about 20% of the information that comes through the retina contributes to the image that we see, to the image that the visual cortex forms. The other 80% is what's called top-down. So there's just other kinds of information happening in the organism that determines what we think we see out there, outside the window. Again, that number is now 80% and going up. And then, there's so much more out there in research in this sense. There's research that shows that if your hand is holding a cup of hot water, what you perceive from your other senses is different than when your hand is holding a cup of cold water. So just mind-blowing stuff, and this is just scratching the surface. Because we are still shaking off an intellectual culture of dualism, but also of this idea that what we see is what's really out there, many people still read about these things and catalogue them as illusions. And my work and my interests are about trying to understand what their limit is and to what extent they are really illusions. And the more I work on this, and the more I read about this, the more I’m going down the rabbit hole of believing that they're not just illusions, they're probably correct. They're probably what the situation actually is. But why? Why do we think these are illusions? Why don't we perceive these variations? Or why is it so hard for us to even take these things into account? A lot has been written in what's called experimental phenomenology about the Necker cube. That cube that, if you focus on it a little bit, kind of shifts. And sometimes it seems like you're looking at it from the top down, and sometimes from the bottom up. And again, everyone is cataloguing that as an illusion. It is not an illusion. And none of these things are illusions. But what's happening is, in the words of Merleau-Ponty, a French philosopher very famous in the school of phenomenology, “The world is pregnant with meaning.” So, we are born into a social world that fixes our perception to match a certain story. Our society tells us a story, and this story is very catchy. It's so catchy to the point where a lot of work and energy has to go into escaping that story. So our perceptions do not flip on us like the Necker cube. Because we are social animals and we share a story about what the world is. And what is that story? How powerful is that story? Well, it is that 80%. It is that, at least, 80% that is influencing the way we process the information that comes from the retina, for example.

The other word that I like in this context, also from Merleau-Ponty, is thickness. He says, “The world is also thick with meaning.” So it is very hard for us to cut through this thickness. And because most of the time we cannot, or it takes too much energy, we just buy into this idea that there is a fixed way to interpret information and that is the shared reality that we all live in. And, of course, a huge component of this, which he also goes into in his work, is a bunch of norms that dictate not just what you should expect to see when you look outside the window, but also what's the appropriate way of looking out the window, and the appropriate way of behaving, the appropriate way of even thinking about these things, as in, cataloguing them as illusions that come with a certain baggage and so on. Okay. So can we go deeper into the mechanism that starts to unpack how we interact with our perceptions, how they're fixed, and what they're fixed by? Something that I found very striking when I was looking for the first studies and information on this topic is the work of Lakoff and Johnson. They wrote a very famous book called Metaphors We Live By [5]. They are cognitive scientists interested in, or working in, the field of linguistics. And you're probably familiar with the work. Metaphors We Live By was about how language has words like up, down, backwards, downwards, that are used in an abstract sense. And their conclusion was that metaphors are neural phenomena. They recruit sensory-motor interfaces for use in abstract thought. And this was just mind-blowing to me as I read it. I had to read it several times, not because I didn't understand what it meant the first time, but because it was just so unbelievable. They're actually proposing that we take things that we learn by walking around in the environment, and then we use those structures to think. So in terms of a mechanism explaining thought and perception, I thought this was just absolutely mind-blowing. And there's actually a whole body of research, by both Lakoff and Johnson, together and separately, and by other people, that is putting meat onto this theory. But again, because it's so unbelievable, I feel like we're still struggling to really incorporate this into our intellectual culture. Varela also talks about how we lay down a path in walking. And a lot of people like this phrase, but many use it in a sense that's not literal. But read in the context of Lakoff and Johnson, I think he might have actually meant it literally. As in, “Our thinking and our walking might not be different things.”

Something that also points at a very interesting mechanism that deals with the muddiness of perception and thought is an article that came out in 2016, and it's about a very strange phrase called homuncular flexibility, the human ability to inhabit non-human avatars. And again, when this came out I had to read the title a few times because it was just so unbelievable. It basically states that this theory, called homuncular flexibility, posits that the homunculus is capable of adapting to novel bodies, in particular bodies that have extra appendages. And that the recent advent of virtual reality technology, which can track physical human motions and display them on avatars, allows for the wholly new human experience of inhabiting distinctly non-human bodies. Ever since I read this, I started my own series of experiments in VR and I have discovered, to my surprise, that it is actually extremely easy to, let's say, adapt to non-human bodies, to feel like you're truly embodying all sorts of things. I thought it would take much longer than it actually did. So, with technology like VR, these kinds of things are not even some super theoretical thing that can be achieved in a high-tech lab in some university somewhere. It's actually in the hands of teenagers right now who are spending more and more hours a day on VR platforms, like VR Chat. But I’m digressing a bit from the mechanism. So this is pointing again to a mechanism that is quite fascinating. Even things that we thought were fixed, like our identification with our body and our limbs, might really not be that fixed at all. And again, reading this alongside Lakoff and Johnson, metaphors that we recruit through sensory-motor interfaces being used in abstract thought, all sorts of things crossed my mind like, “Okay, so I’m inhabiting the octopus for a few hours. What kind of sensory-motor interface has that introduced into my brain and how will my abstract thoughts be changed by the fact that I’ve just spent half a day as an octopus?” Now, Merleau-Ponty and the traditional phenomenology and enactive cognition that I started with have been talking about things like this since the beginning, and they all contain very precise examples of these mechanisms. For example, Merleau-Ponty has a famous story about how a man with a cane is actually using the cane as an extension of his body, because people who use canes, blind people who use canes, report feeling the tip of the cane touching the sidewalk. So they're actually very precise in that description; if you read what they say, they describe how they feel the graininess of the asphalt and the pavement. They really feel that they are there at the tip of that cane. So these mechanisms have been known, but I feel like now they are starting to be taken, quote-unquote, a little bit more seriously, or their implications are starting to unfold much, much faster before us, because of technology like virtual reality.

And here is something that, for me, is also a mechanism, but it does not deal directly with perception, the movement of the body, and thoughts. It deals more with the sense of self. And I know that the sense of self is a very different topic than movement and environment, but it's going to come up later, so I want to throw this in here. Foucault: the last book that was published of Foucault's writings is a series of lectures he gave called Technologies of the Self. He never finished those lectures; he passed away. But this is what he describes as where he saw his work going, and what he would like to do next. What does he mean by ‘technologies of the self’? He's very interested in what he calls the ‘emergence of a subject’. He's very interested in how people feel like they have a ‘self’ and an ‘I’. How they describe that self and how that self changes. In this context, he's looking a lot at people like Rousseau and how Rousseau not only described the modern subject, but his writings actually contributed to what Foucault calls ‘the creation of the modern subject’. And this is important in the context of us dealing with, or having on our hands, a piece of technology that allows people to spend half a day as an octopus. Foucault says that for a long time ordinary individuality, the everyday individuality of everybody, remained below the threshold of description, and then people like Rousseau come in and start to describe how it feels to be human, and how it feels to be a subject of the modern state of France and so on. So, from now on, I will refer to this as subjectivity in the sense of: how does it feel to be a human self, a human individual, what could contribute to creating that particular form of how it feels to be you, what could change how it feels to be you, and in what context does that change? And it's very interesting to me that Foucault himself uses the word technology, although in his writing he's not specifically looking at tech the way we think of technology right now. So just a quick summary, we're about halfway through.

But I want to summarise a bit of what I’ve been trying to, kind of, do so far:

But the establishment of this gap is the one thing that I want you to take away from the first part. I think I’m going to skip through this, but these are some of my favourite articles that I’ve been reading lately. They're all about how the things that we see might not, really, be about what's outside the window. They might be more about our own stories, and our own cognitive processes. It's that 80-plus percent that's about something else. And yet, we're talking about imagery, we're talking about what we think we see.

This paper, in particular, maybe I’m just going to explain to you very quickly what this one is about, it's about this fascinating thing called ‘binocular rivalry’. These terms are, kind of, interesting sometimes: ‘binocular rivalry’ or ‘homuncular flexibility’. I’m very happy when scientists get so creative with naming these things. So, what is binocular rivalry? Basically, they did this experiment where they got a person in a room, and they showed that person either a face or a house, and then they put some kind of glasses with a screen on that person, some kind of VR glasses, that flashed for a fraction of a second either a house or a face. And what they found was that the brain decided to, quote-unquote, show the person, or the person then reported that they saw, either a house or a face based on the one they had seen previously. So the processing mechanism was like: okay, I’m seeing a house, and I’m seeing a face. Which one should I give access to consciousness? Which one would be more relevant for the story of this individual? And the one that was, quote-unquote, shown to consciousness was, of course, the one that related to what the individual had been shown at length before these flashes of images.

So in this gap that we have established between reality, human beings, and our perception and thoughts, where and what are the strings, and can tech pull them? I think we have already answered this with things like, the homuncular flexibility and showing that we can inhabit an octopus and almost anything non-humanoid in VR. But I haven't seen any papers yet, maybe because this is just too crazy of a proposition, that takes the next step towards Lakoff and saying, “Okay. How does inhabiting that octopus then change the way you think? Change your thought process?” And, of course, there is no clear answer to that. The waters are very murky. The situation is incredibly complex.

But the fact remains that, tech is starting to interfere with these things. And it's starting to get more and more powerful.

And we are starting to see cognitive processes being altered.

I believe we just don't have a choice but to start daring, proposing things and forming hypotheses, and going into the murky waters of the complexity of this whole thing, as long as we want to work in tech. So how does this relate to virtual space? Because at the end of the day I’m an architect. And I’m reading these things, and what goes through my mind is the possibility to test these things by designing spaces.

But before I go into a tentative framework that I’m using now, I want to start with what I call ‘Observations from Field Work’. So I spend a lot of time in VR. We develop a lot of VR applications in the office. I do a lot of events and talks in AltSpace and VR Chat. And I think it's important, before we dive into the theory, to also take into account just what the things are that I see out there that seem important. What is the bottom-up side of the work?

The one thing that I find fascinating is what I call the Control+Z effect. This is a series of behaviours that I started to notice in myself, and sometimes in other people as well, that has to do with things you learn in VR, or in another kind of environment, that then cross over to physical reality, and they reflect an inability of the brain to understand or to make a call between, “Okay. What are the rules of this reality that I’m in now, and what are my behaviour allowances here versus my behaviour allowances in that other kind of reality?” And I’m calling this Control+Z because I first noticed it many years ago, and it was before VR, but I’m seeing similar things coming out of VR. I want to say, when I was an architect, I’m still an architect, but when I used to just do architecture every day without this whole tech stuff, I used to build a lot of cardboard models. But the workflow for my architecture projects was actually many hours a day in a screen-based software product where I would just model things with the mouse and the keyboard, and then I would also have, in parallel, sometimes a cardboard model of the same thing, so sometimes I would make decisions in the screen-based software, and sometimes in the cardboard model. And on several occasions, late at night, when I was tired, so my brain was kind of struggling a little bit, while working on the cardboard model and making a mistake, my left hand would immediately make this twitching movement, and my fingers on my left hand would position themselves in the Control+Z position on the keyboard while I was working on the cardboard model. And I would always be kind of surprised, and then, of course, realise what had happened and catch myself in the act and, a little bit shamefully, put my left hand down: “Okay. There is no Control+Z.” But what was happening was, basically, my brain was kind of deep into this screen-based computer software where there is a ledger that records all the actions that you do in that environment in time. And you do Control+Z and then you go back one step in that ledger. So my brain had gotten used to the idea that that environment, quote-unquote, and reality can also go backward. And then, of course, in physical reality the arrow of time does not go backward. So that's the first observation.
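For readers who want the mechanism spelled out, here is a minimal sketch, in TypeScript, of the kind of action ledger that Control+Z steps back through. The names and structure are illustrative assumptions, not taken from any particular modelling package.

    // A minimal sketch of an action ledger: every change is recorded together
    // with a way to reverse it, and Control+Z pops the most recent entry.
    interface Action {
      description: string;
      undo: () => void; // reverses the change this action made
    }

    class ActionLedger {
      private history: Action[] = [];

      // Record each action as it happens, so it can be reversed later.
      record(action: Action): void {
        this.history.push(action);
      }

      // Control+Z: step back one entry in the ledger, if there is one.
      undoLast(): void {
        const last = this.history.pop();
        if (last) {
          last.undo();
        }
      }
    }

    // Usage: every edit to the model is logged together with a way to reverse it.
    let wallHeight = 3.0;
    const ledger = new ActionLedger();
    ledger.record({ description: "raise wall", undo: () => { wallHeight = 3.0; } });
    wallHeight = 4.5;
    ledger.undoLast(); // wallHeight is 3.0 again; a cardboard model has no such ledger

The point of the sketch is simply that the software environment keeps a reversible record of time, while the cardboard model does not, which is exactly the mismatch the twitching left hand reveals.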

Then, I’m seeing a lot of emerging phenomena in virtual worlds. I’m seeing people discover new possibilities for being, for interacting, crazy things happening in VR Chat; if you're not familiar with that platform, I highly recommend it. I think it's by far the most advanced VR interaction you'll see, and worlds being developed, and forms of community building, and community life intermediated by this technology. All of that is happening in VR Chat. And they're years and years ahead of any other kind of experience, or game, or anything else that I’ve been seeing. So I’m seeing signs that there are emerging social dynamics and mechanisms for negotiating meaning in these collective groups and interactions that are extremely interesting.

This is also a bit of a topic for another day, but I feel like it's so important that I cannot not mention it. We're slowly but surely not the only intelligent agents anymore. We interact with bots on Twitter every day and we don't even know that they're bots sometimes. And people are experimenting with introducing all sorts of AI-driven agents into virtual worlds. We have Unreal and Unity putting out their extremely realistic-looking avatars that are AI-driven and so on. So we're not quite at the point where we go to VR Chat, my favourite platform, and we're not sure whether the other person is human or not. But I think, well, I don't know, if we're not already there, we will be there pretty soon. So there's a significant layer of complexity that's being added right now, on top of this already complex and messy situation, by the introduction of non-human cognitive systems.

All right, so what is the proposition for what is virtual space? This is how I think about it. A new environment is basically a system you're trying to solve. It's a little bit like a game. So this is the structural coupling of Varela. You go into a game, you go into a new building, you go to a new country to visit, you've just landed at the airport, the first thing you do is, you're trying to figure it out. You're trying to understand where you are and which way you go. Are there any things that are strange? Your brain is turning fast to establish, as soon as possible, this structural coupling with the environment, that gives you control over the environment and understanding.

But I want to argue that, in that process, you're not just dealing with this foreign environment, you're actually also encountering the system that is you. You're also dealing with, and discovering, your own cognitive processes that are engaging with the environment in attempting to couple. So roughly put, designing the environment is designing the subject that interacts with it. So what would an approach to space-making look like if we just assumed, in the light of all of this talk about cognitive neuroscience, that the environment and the person are the same thing? That, somehow, they're so tightly connected we cannot disconnect them. It's like the bee and the flower.

If we were to pursue this kind of methodology, what would our tools be? Where would we even start? And I can only tell you how I’ve started doing it. I’m basically doing the best that I can to form hypotheses that have to do with knowledge that I’m taking from these papers, and knowledge that I’m taking from my own experiences and introspection.

One of the mechanisms that I’m very interested in now, and I will show you how we use it in one of our projects, is the fact that, unlike screen-based software and interfaces, which only or mostly address our visual cortex, VR throws in the ability to control or encourage behaviour that activates the motor cortex. And this is an absolute game-changer because, as a lot of these papers reveal, it is the organism's attempt to integrate sometimes, perhaps, conflicting information coming from the motor cortex and the visual cortex that is one of the most important paths we have in trying to understand more complex cognitive paths.

One way is to try to understand this relationship, and then to try to use VR to test things. So what if the eye sees something, and then the body does that, what happens next? Can you always predict what the person there will do? You can if you only show them and make them do what they would see or do in physical reality. But the moment you depart from that, the moment they either see something else and do something they would do in physical reality or the other way around, very interesting things, very quickly start to happen. Now, to what end? I think this is something that will have a different answer for every developer or every company. I think this is, primarily for me, a methodology that I’m only able to pursue and explore using VR and AR. This is not something that's possible for me with traditional forms of architecture. That's primarily the reason why, as an architect, I am in AR and VR and not just in traditional architecture. To what end? For me, the answer is that there are many answers, but one today is that I’m interested in new ways of thinking, and new ways of subjectivity. So that's why I introduced that slide earlier about Foucault and subjectivity. I’M INTERESTED IN NEW FORMS OF BEING HUMAN. And I think that can be pursued through this kind of methodology, but we'll see how things go in AR and VR. I think, new forms of subjectivity can also be pursued through traditional architecture, but there are many reasons why that is a little bit slow.

Okay. And now the last part of the presentation is the fun part. This is where later you can tell me, “Hey Andreea. The things you said, and the things you did, or just the way things turned out do not quite match.” But I would love to hear those kinds of questions.

On the following pages: Implementation

image

This is an older project, but I think it's very relevant in this context, so I decided to start with it. This is a, let's call it art project, it's called Say It. Basically, I designed these different shapes, they're in wax here because I was planning on pouring them in bronze. I never got to pour them in bronze and integrate these RFID tags into them. But basically, this is based on a story from Gulliver’s Travels. Gulliver goes to Lilliput. That's the country with the little people. And he runs into these Lilliputians that cannot speak in words, they speak with objects. They carry on their back a big bag with an object, that's a sample object of all the objects that they need to communicate. So if they want to tell you something about spoons, they will go into their bag and pull out a spoon and show it to you, and then you're supposed to like, quote-unquote, read that they mean to say spoon. So this intersection between language and objects, or objects as language, and then, the many complications that result when trying to use objects as language, because you don't have syntax, was something I became very interested in. So what is the syntax if you just have the objects? How does that arise? So, the idea with this project was to have two people and then give them a bag of these objects, and these are somewhere in between letters and objects. And to design ways in which this could maybe give some sort of feedback. But to observe how fast, or to what extent, or in what direction people start to use these to communicate. The people are not allowed to talk to each other, of course, so they're given something they're meant to communicate to each other and only have these objects. And then, they're given an hour to try to use these things to communicate, and basically, they have to negotiate meaning for these abstract shapes.

image

This is an AR game that we have developed for a museum. And here we used one of these approaches that I mentioned earlier. We hypothesised a certain reaction that would happen if we presented the visual cortex with information conflicting with what the motor cortex was reporting to the central nervous system. And it worked. We were able to trick people into believing that their body is floating upward, by about 20 meters. So we basically trigger a mild out-of-body experience. This is mild, it's something quite nice, it's a game that happens outdoors, it's triggered by GPS coordinates, and you're basically exploring a story of the German [indistinct] in the south of Germany. It's very integrated with a story. It's a very mild thing. It's not scary at all. But we were surprised ourselves that we were able to use some of these theories to make something like this that actually, quote-unquote, works.

image

This is a three-dimensional menu. What you're looking at here is, basically, a folder with files. From the technical knowledge that we have today, it's something very basic. Something a programming student will understand everything about in the first hour. But we wanted to see how we can take a folder with files and make that a three-dimensional experience. So we went very literal about it. We used what is called the metaphor approach to UI, UX, and interfaces, but with a bit of a twist. So you are in an elevator where you can go up and down to infinity. And in each one of these TV slots, you can save one of your files that you produced in this application that we're working on. You can save it in here, and you can then rearrange them, because we're working on putting smart tags on them. So it's kind of like creating a map, but then you can reorganise them so that they form a different kind of map. And what's even more interesting is, we also tested another thing. You can go in, on this chair, and pull a file out of this slot next to this strange TV screen and throw it down into the abyss. It's like a big VHS tape that you kick outside of this chair and you can look down and see it drop. We're very interested in understanding how people react when they have to interact with abstract things like files as if they were physical objects they can throw. And this is part of a much more complex exploration that we're pursuing. This is part of the same application.
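To pin the idea down for readers, here is a minimal sketch, in TypeScript, of what spatialising a folder of files into vertical slots could look like. The slot spacing, file names, and functions are assumptions for illustration, not the actual implementation described above.

    // A hypothetical mapping from an ordinary folder of files to "TV slots"
    // stacked along an infinite vertical wall.
    interface FileEntry {
      name: string;
    }

    interface Slot {
      file: FileEntry;
      height: number; // metres above the elevator's starting position (assumed)
    }

    const SLOT_SPACING = 2.5; // assumed vertical distance between slots

    // Lay the files out along the wall, one per slot, in their current order.
    function layoutFiles(files: FileEntry[]): Slot[] {
      return files.map((file, index) => ({ file, height: index * SLOT_SPACING }));
    }

    // Rearranging the "map" is just re-assigning files to heights.
    function moveFile(slots: Slot[], from: number, to: number): Slot[] {
      const files = slots.map((slot) => slot.file);
      const [moved] = files.splice(from, 1);
      files.splice(to, 0, moved);
      return layoutFiles(files);
    }

    // Usage: two saved scenes placed on the wall, then swapped.
    const slots = layoutFiles([{ name: "roof-study.scene" }, { name: "harbour.scene" }]);
    const rearranged = moveFile(slots, 0, 1);

The only substantive move here is that position along the wall, rather than a name in a list, becomes the file's address, which is what the body then navigates.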

image

This is the kind of environment you can make that you then save on the screen. And the one thing that I want to point out here is that, you basically see the scene two times. What you're seeing here is that, you are in this roof that's shown to you at one to one scale, and you also have a mini version of that roof. So you're simultaneously perceiving, quote-unquote, this fake reality inside of your headset two times. And we're experimenting with all sorts of interactions in here, because you also exist in here two times. You exist at your perceived one-to-one scale. And what we call “mini-me” is also in here. So there's mini you in there that you can also interact with. So we're seeing very interesting things happening because, of course, this environment, where everything is twice and there's a mini you that you can do things to, it's a very different logic of the universe than what we are used to having in physical reality.

image

This is a Borgesian Infinite Library based on a Penrose tile pattern. We made this kind of for fun, to explore the psychological limits of environments. This is actually a VR environment, but it's a bit much, so when you go in, your mind starts to lose it a little bit. We just wanted to make an environment where we could observe at what point an environment becomes too much, and what exactly are the psychological effects that you start to experience in the first person when that environment becomes too much. And why is it too much? Is it the repetition? Is it the modularity? What exactly triggers those psychological effects?

image

And this is my last slide. This is a game that we're working on, also highly experimental, where we're putting a lot of these things that we're thinking about, reading about, and exploring. We're collecting all of this into a VR testing environment that we call GravityX. And the motto for this is the first line from John, but with a bit of a change. So it goes, “In the beginning there was space, and the space was with God, and the space was God.” So we basically replaced the word “Word” with “space” in the first line from John. All right, that was it. Thank you for bearing with me through this.

Q&A

https://youtu.be/4YO-iCUHdog?t=3864

Frode Hegland: It was an absolute pleasure. Very, very grateful. I mean, obviously, lots of questions and dialogue now, and amazing. My initial observations, kind of, to you and to the group. First of all, thank you. And secondly, I was asked a while ago about, “Do I think the future is going up, improving? Or going worse?” And my answer was, “It seems to be diverging. Getting much better and much worse”. You're in Germany now, right? So we're dealing with a full-on war in Europe. We're dealing with horrible things in other parts of the world. And then, we have this. When I defended my viva to Claus and Nick about two weeks ago, they very rightly questioned some of my language use around mental capacities. And my defence to them was, “We just don't know enough to use hard language”. So Claus, if you don't mind taking the first half of this presentation mentally into my thesis, that would be great. What I’m trying to say with that is, if our species is to survive, we have to evolve. And we're the only species known who has a chance to have a say in our own evolution. So I think that what you have shown today is foundationally important. It was just really beautiful. We have to take this very seriously. In our group here, we call ourselves the Future of Text Lab. But we have decided that what we mean by Text is almost anything. It used to be very narrow, but because of VR, we're doing something else. And just two more comments before I open up the virtual floor here. One of them is: I believe that the most powerful thing human beings have is imagination. And imagination has an enemy, truth. A teacher, when I was in university, many years ago said, “Truth kills creativity. Because when something is something, it is something and you're not going to look at it in a different way”. We saw that with the normal, traditional desktop computing, it basically became word processing, email, web, and a few other things. A lot of the early stuff isn't there. When we today, in our community, try to make more powerful things, people say, “Huh. But that's not a word processor”. Or, “Uh. That's not that”. Because imagination has been killed by truth. It is something. A little thing that I read on New Scientist, I think two days ago, in our bodies we have this thing called fascia, which is a connective tissue that goes around all our organs. I’m mentioning it for two reasons. First of all: it is kind of like an internet for our body that's not our central nervous system. But until 2019 it was just thrown away. If you're doing a dissection, or if you're cooking a beef dinner, you would just get rid of this stuff. Because we didn't have the ability to investigate it. And again, 2019, nobody had looked at it before. And now we're realising that it has about as many nerve cells, roughly 250 million, as our skin.

When you are looking at the way that our brain connects with the world, what I really liked about the way you do it, you are clearly very intelligent, but you're also very humble. Clearly we have evolved with our environment, but the implications of what that means is extremely hard for us, humans, to fathom, I think. So, I just wanted to thank you very, very much for having the guts to look at this most foundational thing of what is to be human. And for us to together try to use virtual reality type things to examine how that may change.

Andreea Ion Cojocaru: Yes, thank you so much for saying that. Well, I think I have the guts to talk about these things because I’m an architect.

Bob Horn: I’m so excited by this presentation. It's just so delightful. George Lakoff was a friend of mine and a colleague. I audited his course over in Berkeley. I wrote the obituary for Varela for the World Academy of Art and Science. The whole framework in which you have enmeshed us is wonderful, and it really excites me now to get into virtual reality. I’m among the older people here in this group and I’ve resisted. The Gulliver’s Travels metaphor was wonderful. I have a collection; one of the things I do is put words and images together. Visual spaces. As you can see behind me. Mostly I do it in two-dimensional murals that are 12 feet long and so forth. I actually work with the International Task Forces on this. The one behind me is the one I did on the avian flu 15 years ago. On what could have been the worst pandemic. And so, anyway, in looking into just the Gulliver thing, which I want to get off my mind, I had forgotten all about this bag of stuff. I have a bag of objects which are arrows. Which I use in these murals. I have a bag of 200 arrows. Different kinds of arrows, that have different kinds of meanings, that I would like to throw out there and give to you, and see what you do with them in virtual reality. So, anyway, I’m just filled with exciting possibilities after this. I don't want to occupy any more time, but thank you very much. It was wonderful.

https://youtu.be/4YO-iCUHdog?t=4317

Brandel Zachernuk: Thank you. This is super exciting. And your comment on the, sort of, homuncular flexibility, and the hinting at neuroplasticity, is something that I’ve definitely observed in my work. I was one of the people responsible for some of the launch titles for Leap Motion. One of the things that was really fascinating for me there was having the number of degrees of freedom that one has there, and being able to just turn those things into whatever you wanted. And after a while, the contortions that one's hands were undertaking completely disappeared. And the simplest of these was just tilting a hand, but then amplifying that three to four times. Most people didn't realise that this angle wasn't that angle. They completely thought that their hand was down, despite the fact that that would have been anatomically impossible. So I think that we have an enormous range of opportunities available to us once we have the ability to, kind of, recruit more of our stuff. One of the first things that I wanted to ask you about is: you were pretty disparaging of the term "illusion," which I’m in agreement with. It reminds me a lot of Gerard [indistinct]’s frustration with people talking about cognitive bias; the sort of embodied, situated cognition things you're talking about also prioritise cognition for a reason. So have you come across cognitive bias in this context, and what is your take on it and how it relates to this?

Andreea Ion Cojocaru: Well, most of the things I’ve encountered that were referred to as cognitive bias were biases with respect to some kind of main understanding of cognition, but we do not agree on what the main understanding of cognition is. So I don't know from what point of view you would think that particular thing is biased. So I don't find those conversations, or the term itself, particularly useful from the perspective of my interests. Because I don't think we have the common ground or understanding that would allow us to meaningfully talk about bias.

Ken Perlin: Everything you're saying is absolutely wonderful and resonates very strongly. And in support of this, I’m thinking that there's this phenomenon that, when something becomes normal, we tend to forget that there was a time when it wasn't normal. So everyone here has had the experience of an automobile being an extension of our body. And we all read a book, which is an object that kind of didn't exist at some point. Even the fact that we wear shoes now: there was a time when people didn't wear shoes, and the whole world would have seemed very strange. And obviously, phones and all these things. So it seems to me what you're talking about is kind of the next phase, or actually putting some rigour behind, a phenomenon that exists because we are creatures of language, so, therefore, we live in this world where I say the word ‘elephant’ and you've got an elephant in your head. And that happened a hundred thousand years ago. We're kind of catching up in some sense to understanding what we do as a species. And I think I agree with you completely, because of the more radical vestibular nature of, “I put on a VR headset, and now I start having these new kinds of novel mappings”. But, on the other hand, the language of cinema is something that might not have made any sense to someone before we all learned how to watch movies, and that's a completely crazy mapping, if you were not used to it, where that radical point of view changes from moment to moment, and yet doesn't drive us crazy. So I feel like not only does what you're saying make a tremendous amount of sense, but it's also making sense of things that happened long before we even had computers. And that's kind of what we do in a way, we just haven't quite acknowledged it yet. And I wonder, what do you think about that?

Andreea Ion Cojocaru: I think we're social creatures. So sharing a reality is how we survive. It's the kind of organism that we are. So it's important that we can share a reality, and the reality that we share cannot be the actual reality. It's just not. So we share a story about that reality. And it takes society to change the story. Individual people cannot change the story at a level that's profound or meaningful enough at all. There are these lonely people that sometimes can become important, and we call them innovators; when everything is good we call them a pain in the ass. I think now is a particularly difficult time in which we happen to need innovators. I think now things are not looking good at all in terms of where society is going and what we're doing to the planet. So I think there's a particular urgency to call on the people that can shake up the story. That's also a bit of the reason why I introduced the talk about subjectivity. I believe there are two reasons why I go into these things with VR. One is because I personally believe this is a path and a methodology that gives us the most ability to understand what the technology can do. But I also think the promise of a change in subjectivity, of a change in the story, the collective story, of a change in how it feels to be human, is appealing to me, because we are at a point where we really need that right now and we can't afford to wait. So there are two slightly different reasons why I chose to kind of go down this path. And, yes, I think all of this has happened in the past. I think the collective story controls the narrative of everything. That's why, for me, the moment VR reaches the mass market is actually very important, because, right now, we're still talking about this technology being at the fringes. We have what? Half a million people? A million people in VR Chat? But I think the numbers are much lower in terms of concurrent users. But where are we taking things if half of our teenagers start spending half a day as an octopus? How do we make sense of that, and how do we take this tech to a point where we... It's like, I think that if we continue to avoid a serious discussion on these mechanisms and methodology for XR developers, we will fail to have a good grasp on this technology. It's a hard conversation because a lot of people, as I said, either believe that these things are illusions or do not think it is part of their discipline to go into this discussion. My position is, you just don't have a choice. We just have to go down this path. Or at least have a conversation and debate methodologies. Because we will be in a situation where, on one hand, the whole planet is going down the drain, and on the other hand, we have to put half of our teenagers in some mental institution because they spend their days as an octopus. So this is putting it extremely bluntly. I should mince my words, but sometimes I get this sense of urgency coming from these two directions. And the best I can do, with my ability to think through things, is to go as deep as I did today and try to ask these difficult, unanswerable questions, to try to prevent, perhaps, or contribute to the prevention of, these two big dangers that I’m seeing.

Ken Perlin: Thank you, yeah. It will come a day when the people who get put into institutions are the ones who refuse to learn how to be an octopus.

Mark Anderson: I love this. Interestingly enough, the bit about the homunculus was interesting because my understanding came from a completely different angle: I came across it in V. S. Ramachandran's book, Phantoms in The Brain, back in the late 80s, where it was to do with people with neurological damage and how they were adapting their bodies. But, of course, stating the blindingly obvious, to me it says that this would map across; why would it not? Because if you can wrap your mind around mapping your mind away from a limb you no longer have, putting a couple of extra octopus arms on isn't such a big stretch. I just come back to a couple of things that it would be interesting to get your thoughts on a bit more. I was listening to your thing about the Command+Z and I was just wondering, and it's hard to phrase this in a way that doesn't sound glass half empty, which isn't where I come from, but when we bring these things back, I suppose the answer is we don't know whether we bring back good things or bad things, because, in a sense, we can train ourselves to do things we do normally for not particularly societally good reasons. We train people to do things very well. And then we have problems teaching them to not do that. So I’m wondering if there's another interesting element in this as we explore it. On the one hand, potentially the gain, even the things, going back to my opening point about the neuroscience people at San Diego trying to mend broken bodies and things. But just being able to effectively work through a different set of control mechanisms is really interesting. So I don't know if you have any thoughts on that. And the other thing that I was interested in: when you mentioned the 80/20 thing earlier, you were also saying effectively we're not using, or we don't know how we're using, 80% of our neurological inputs. Is it that we don't know what it's doing, or do we just think it's not being used?

Andreea Ion Cojocaru: Yeah. Oh, I can clarify that. The first example of this that I’ve looked at is actually Varela's own research. He was studying vision. And he talks about this in The Embodied Mind in 1990. He talks about how, basically, 20%... So the information is entering through the retina, the optic nerve. And the visual cortex is forming the image. So that's what our consciousness perceives as being out the window. And Varela concluded from his own studies on vision that only 20% of the information that's coming through the optic nerve is used by the visual cortex. And there's very recent research, from a few months ago, that is reinforcing that for various parts of the brain. So 20% is, quote-unquote, actual. But the thing is, in the beginning Varela was not really believed, and there was a lot of pushback on that percentage. They were like, “There's no way this is true”. I've recently listened to a podcast by a neuroscientist saying amazing, completely shocking things are coming out of research right now showing that 80% or more is what's called top-down influences. And she sounded completely like, “Well. But this is science, so we must believe it. But we still can't really, or really want to, believe it. And it looks like there could be more than 80%”. And she was kind of shaking. Her voice was shaking as she was saying that. And I was like, well, Varela said this 30 years ago. So there's some degree of homomorphism between us and the environment, but again, if you listen to other people, there's no homomorphism at all. It is that 20% or less; the rest we're making up. We're making it up. But it's a collective making it up.

Peter Wasilko: I was wondering if you had any thoughts about the use of forced perspective and other optical illusions in real-world architecture in order to create a more immersive environment?

Andreea Ion Cojocaru: I think, in the physical world, we are experimenting with AR in creating illusions. I don't know if that's what you mean. So my example of the AR app where we create this out-of-body experience was a little bit like that. But for me, it's very much connected with what we are trying to achieve. And for our work, it's not immersion. I’m not very interested in immersion for its own sake. It's like, what does that mean? Does it mean you really believe that you're in VR? I don't know if that's so relevant for my interests. We create illusions, but only because we want to achieve a certain feeling, or emotion, or cognitive process, or trigger a certain thought process. So the illusion has to be connected to that. By itself, just being in an environment and thinking it's another kind of environment, or having the illusion that it is bigger, or smaller, or just different, on its own, without being part of the larger strategy, is not something that we would typically pursue. I don't know if this answers your question.

Peter Wasilko: Yeah, pretty much. I was thinking of trying to design environments to achieve certain emotional cognitive effects. So I think we're running in the same direction.

Claus Atzenbeck: Yeah. First of all, thanks for this talk. I have three quick questions, I guess. So you showed one project. It was this elevator, basically, which you can use to go to some TV screens. Can you say a little bit about the limitations we may face in a virtual 3D world? For example, imagine I have some zooming factor implemented so that the user could zoom in up to infinity, basically. This would change the perception of the room. So I would become smaller, and smaller, and smaller, and the space would just become bigger and bigger, so I could, actually, have different angles. Is this something a human could still work with? Or, for example, what about rooms which are of contradicting dimensions? I imagine this Harry Potter tent, for example, which is larger inside than outside. Is this something a human can actually deal with? Could a human actually create a mental model of that, since this cannot happen in the real world? This was the first question.

The second one is a general question about vision space, VR, I mean, this is all about visuals. This is just one channel, basically, we look at. Did you think about, well, first of all, why did you pick that and not other channels which would target other senses? What do you think about multi-modality, for example? Using different senses? And also, what would be the potential, basically? When you said this Control+Z thing, I thought about the muscle memory I have for typing a password, for example. When I actually look at the keyboard, it becomes harder for me to type in the password. And if I see a keyboard which has a slightly different layout, possibly two keys would be exchanged, like the German keyboard and the U.S., American keyboard, it becomes almost impossible to type this password fast enough, because I’m kind of disturbed by the visuals. So wouldn't it make sense to actually ignore the visuals for some projects, at least, just thinking about the other senses, basically?

And the last question is more of a general nature. Do you think it's really beneficial to try to mimic the real world within the computer? Like a 3D world which almost feels like being in the real world? Or do you think we should focus on more abstract information systems which may be more efficient, for example, than using an elevator going up and down?

Andreea Ion Cojocaru: Yeah, thank you for that. I think one and three are connected. One and three are about the elevator. The first question was: could it be too much for us to deal with these infinite spaces and this shrinking and expansion of our perception of the body, because it's so drastically different? Up to a point, we can definitely do it. Just like the octopus. I do think we can do it. We will hit boundaries and borders, and I’m fascinated by that. So part of our more experimental work is to see where those boundaries are, and what that means. Because, yes, we have adapted for quite a while to the physical Reality with a capital R, whatever cloud of particles and radiation that is, right? But if the people that do not believe in homomorphism are right, and mathematically so far they look like they're right, we actually have no structural coupling with what is out there. We completely make up the collective reality. But again, I’m going into speculation. Since I’m not a scientist, I try not to speculate in public. And when I speak in public, I just focus on the papers and keep the speculation to my interpretation of the papers. Going in this direction would mean going into papers that are not commonly accepted as science. So it's a big parenthesis. Assuming we do have homomorphic structural coupling with Reality with a capital R, I think we will hit boundaries. I think VR can quickly put us in environments that we can't deal with and that will feel uncomfortable. I’m interested in exploring that boundary and have... I don't want to go beyond boundaries, I have no interest in making anyone feel uncomfortable. But I feel like we don't really know what the boundary is. So we're talking about what we think the boundary might be, without actually having a good understanding of where that is.

Then, the third question was related to the chair. So I would argue that that chair is like nothing you would ever experience in reality. We're taking something that is a little bit familiar to you, which is a chair and a joystick that moves the chair up and down, but the experience and the situation are drastically different than anything you would do in reality. Because you cannot take a chair to infinity in reality. So what we were doing in that environment, people say skeuomorphic, I’m like, "What is skeuomorphic about driving a chair to infinity?" So what we were doing is, we had some variables, some things that were controlled. We couldn't have variables everywhere. We couldn't have variables on the infinite wall, and variables on the chair and what's around you, because it would have been too much. So we made the chair and the control skeuomorphic, quote-unquote, so we can experiment with the other stuff. And the fascinating thing was that, basically, that environment is just a folder with files. But just by doing this, it's stupid, the whole thing is on the infinite elevator, and the infinite wall, on a basic level is the dumbest thing, but all of a sudden, people started to get exactly the same ideas that you just got with like, "Oh. What if I go to infinity? What if I start to have the feeling that I’m shrinking or expanding?" And you do. You do start to feel like you're shrinking and expanding and you're losing your mind. People started to think, “Oh. I could have infinite scenes”. This is like, they started to ask us, “Is this the metaverse? Oh, my God! The possibilities of seeing all of my files in here”. And people got excited about something that they already have. They already have that in a folder. You could almost have, well, not infinite, but you could have more files than you would ever want in a folder running on a PC. But their minds were not going, and exploring, and feeling excitement about those possibilities. So it was interesting how, just by changing the format, like spatialising something you already have, just open up this completely different perspective. So, yeah. We
call that our most spatial menu yet, because that's basically a menu. I think there is tremendous potential in this very simple, almost dumb, shift from screen-based 2D interfaces to 3D. It's dumb but for some reason no one is doing it. For some reason like, I posted this stupid elevator and some people were like, "Andreea, this is stupid. What the hell is this? Why are you doing skeuomorphism?" Because I’m known for these ideas, and known for hating skeuomorphism. And everyone saw my elevator was skeuomorphism and I’m like, “No, no, no. That's really not what we're doing”. And every single VR application out there opens a 2D menu on your controller and you push buttons. And it has like 2D information. So they're still browsing files and information in VR on a little 2D screen. So this elevator was our attempt to put out there a truly spatial file browser. And the extent to which it triggered this change in perspective over who you are, what do these files represent, who you are in relationship to them, what is the possibility, was really striking. We didn't really expect that. We almost did it as a joke. We were almost like, “Why don't we model this like 60s soviet-looking elevator and then, have an infinite wall and see what happens”. The idea with the infinite wall also came from like, I have a few pet peeves:

One is like, homomorphic avatars, which I hate.

The other one is the infinite horizontal plane that all the VR applications have. Why in the world do we have this infinite horizontal plane in VR?

So we wanted to make an infinite vertical plane in VR. Muscle memory, yes. So the reason why we're focusing on visuals is because that's what we've been focusing on. But in the game that I mentioned, we have an entire part of the game which is called The Dark Level. And what we're doing in the dark level is exactly what you said: we're exploring sound and space. You don't see anything. So basically, the VR headset is just something to cover your eyes and to get sound into your ears. That's something brand new that we're embarking on, because I agree with you, everything that I talk about is not necessarily specific to visuals; it just happens that we're just now starting to do space and sound, as opposed to space and visuals.

Claus Atzenbeck: Just one more question on what you just said. Do you think this infinite virtual 3D environment is something that people like because it's something new, but you're not solving a particular problem? Because I can imagine that we could have a plain zoomable user interface, like Jef Raskin did something like that, in which you can zoom in and check your files on an infinite 2D space, on a canvas, basically, on the screen. So is it just because it's something new and people are happy to use it because it's new? So it's like a game? That's gamification, basically?

Andreea Ion Cojocaru: There are two things we're pursuing with that.

One is spatial memory as opposed to semantic memory. There are studies that show that spatial memory is more efficient than semantic memory. In other words, you're more likely to remember where you put something than what you named it. So we're interested in where people put things. And we don't want people to put this object that is their file somewhere with the mouse. We want people to physically move their bodies to put that something there. So we're taking the file, which is an abstract thing, we're embodying it into an object in VR, and we're making people literally take it with this forklift, because we're just being stupid right now, and literally put it somewhere else. So that kind of testing of spatial versus semantic memory, I think, can only be done in this context. And I don't know of any other project that's doing it.

And the second thing is, yeah, just this pure idea of interacting with abstract entities as if they were embodied objects, and being able to apply physical movements of the body, and moving the body through space, to interact with these abstract objects. So that's kind of clashing together Lakoff with all of these other theories. It's like, you're learning how to manipulate abstract thoughts by learning mechanisms from how the body moves through space, and in a perverted kind of way, VR allows us to smash the two together.

So we are, and we are just observing how it happens. So, no. At a conceptual level, we would love for people to have fun, but it is these two things that we are interested in learning more about. We have not just made it so people think it's just cool to go up and down.

Frode Hegland: I’m going to go all the way back to that 80% stuff. That, of course, in a very real sense doesn't mean anything. I’m sitting outside now and there are our trees, and birds, and everything. And we have to talk, of course, about affordances. What these things are to me, which is interesting. I can see that there's grass over there. There's no chance and no usefulness for me to know exactly how many blades of grass, exactly what angle they are, exactly what colour level they are, etc. That is not useful information for me. So obviously, the 80% stuff is all about where in our system, information gets filtered. And how it's used. There are, of course, different levels of this, and the reason I wanted to discuss this point is, in the physical world, if there is a fox or something that may come gnarling up at me, then a certain type of shadow has information that otherwise wouldn't have information for me. And it'll be very interesting to see when we start designing our environments in virtual reality, how we can choose to, more intelligently say, “This stuff is meant to be here because if it wasn't here, you would wonder why it's missing”. Like a wall. You know you don't need a wall in VR. But otherwise, it would feel unbounded, literally. And here's another
piece of information about this wall, which has actual meaning to you. So I’m wondering if you have any reflections on, let's call it hyper surrealist worlds, where you look out the window and you can choose to see the weather tomorrow. Some of it's kind of real and fancy, some of it is just completely insane. But that thing where some information is meant to be there, otherwise, you'd miss it. Other information has actual meaning. Thank you.

Andreea Ion Cojocaru: Yeah, thank you for this question. I’m going to say some things now that I allow myself to say in public because I am an architect and not a cognitive scientist, so I’m not going to risk my reputation. But the reason why the 80% is meaningful to me is, because it means the 80% can be changed. The 80% is the story. So, again, this is kind of very out there statement, but I’m more interested in figuring out, rather than changing the environment and designing super interesting environments, and putting people in there. I’m very interested in pursuing what these research studies are implying and seeing to what extent the story can change what you see. Because the “over 80%” is the story, so if we change the story, you will not see grass anymore. Just like the way the frog cannot see a truck. Again I don't mean this quite so literally, but on the other hand, I do. On the other hand is the study that shows that if you're holding a glass with hot water, you hear different things than when you hold the glass with cold water. So the evidence is on the wall, but we are really scared of going into the implications of this. And the cognitive scientists do not risk their reputation. Some do and talk about things, but they're not exactly considered mainstream. So it is there. I mean, the study is there.

Frode Hegland: Oh, yeah. And I think that's phenomenally useful, but another half of this is the issue of... I had a friend who was obsessed with cars. He would know everything. So we'd be walking down the road and he would see, at night, a taillight from behind, at an angle, and he could tell me who designs the wheels of that car. So what he saw, what was information to him, was very different from what it is for me. And looking at my son, first time I’m bringing him up today, so I need a medal. Anyway, if he has touched grass, for instance, of a certain thing, when he sees the grass, he doesn't just see lines of green. We obviously feel something with it. So along with what you're talking about, I look forward to being able to put visual information that can have rich meaning for us, but in entirely new ways or something, the two literal examples. That's all, and thank you very much for your answer.

Brandel Zachernuk: Yeah. So you mentioned a neuroscientist. Was that Lisa Feldman Barrett? Because if not, then I’d love to know another one. Yes? Okay, good. Yeah, she's amazing in terms of her exposure to the way that priors are so important, in terms of what we're perceiving. So I’m glad we're on the same page there.

Andreea Ion Cojocaru: Yes. She was recently on the Mindscape podcast with Sean Carroll, yeah.

Brandel Zachernuk: So that, specifically, was on Mindscape? Okay, great. Thank you. And then, the next thing I wanted to talk about was, so I’m really glad to hear about your disinterest, potentially, and antipathy for immersiveness, for its own sake, because I share that. People who are regulars to this meeting know my hostility to the notion of story for its own sake as well. But you've also brought up being an octopus. So it strikes me that you would probably not consider being an octopus to be, sort of, significant in and of itself. But for some kind of functional practical benefit, some cognitive change that you would expect to occur. Have you played with Octopus? And what kinds of things have you observed there? Are there any signs that you do different things there as a consequence?

Andreea Ion Cojocaru: Yeah. So I use their methods. Giuseppe Riva is a researcher from Italy who is using VR and these theories of embodiment to treat all sorts of mental conditions. And he has an onboarding protocol for helping people identify with an avatar. He's using it with humanoid avatars. But I’ve used that onboarding protocol, again, on myself, these are not things I make public or ever will, but on myself. You basically tap, you use the thing from the rubber hand illusion. You have someone tap your actual body, and then, you program something that will tap your other body in a place that's kind of in the same place. And then, I did an experiment to see the extent to which I can embody other kinds of stuff. So this tapping helps quite a lot to go into it fast. And I like to embody spaces.

And this sounds nuts, but let's talk about it. I like to embody a room. I like to experiment with how big I can get. And again, this is completely crazy talk, but then here we are, in 2022, with VR in the hands of teenagers. So, yeah. It happens. I mean, it's real. How fast it happens and how profound that experience is will vary from person to person. It's kind of like, some people have lucid dreams, some people can trigger out-of-body experiences and some cannot. But the mechanism is there. And the technology now is there and costs 400 bucks. Why do I do it? I’m interested in observing how I change. I’m interested in observing myself, and most particularly how I perceive physical reality afterward. So I’m trying to understand this transfer and see if I can have any kind of insight into that, then, I can phrase it in a more methodological way and start to form hypotheses. There are changes that are happening in me. I’m not at a point yet where I can talk about them with enough clarity to communicate them to other people, but they exist.

And at the end of the day, I’m interested in what Foucault called ‘Technologies of the Self’. Because what I’m doing to myself is, I’m making myself the subject of a technology of the self, using VR. But you can use other things that are not technically technology, or not technology in the modern sense, you can use books or other kinds of things, to push a change in myself that is very new.

And I need to understand what I’m becoming. What's the possible direction of that?

Because we might potentially face this happening on a global scale soon with very young people. And because scientists are so scared to talk publicly about this, they're so scared to throw things out there, because the VR developers are so scared to really go into this, we are left in a bad place right now, where we know we struggle. And I mean, I get a lot of shit for talking about these things. There's a lot of people telling me on Twitter that I’m wrong but I do think it's necessary, so I do it.

I’m interested in how these things will change us, and what's the potential in that as well. I think it's even harmful to try to avoid it. So those developers working hard not to trigger these things are harming everyone. The tech will do that anyway, so we might as well understand it and let it happen, or at least control how it happens. But we can't if we don't look at the mechanism. And I think that when these developers are talking about what they do to avoid it, they are not talking about the mechanism. They're not even trying. They're not hypothesising any mechanism that triggers them. They're kind of like band-aids, right?

They're kind of seeing something happening there, and then they think it's something, and they try to have local solutions for that. I don't know, did that answer your question?

Brandel Zachernuk: Yes, absolutely. And your point about being a building I think is really thrilling. It reminds me of some stuff from Terry Pratchett, who, in Discworld, was a remarkably neuroplastic kind of writer. But it also reminded me, when we were talking about the channels of information that we're using to, sort of, explore and mess with, that proprioception is completely distinct from the visual. And to that end, the most exciting thing for me is virtual reality's capacity to impact what it is that we mean to do with our bodies, and what kind of impact that has. So it's very exciting to hear all of these things put together.

Thank you.

Peter Wasilko: I was wondering if you'd ever read Michael Benedikt's 1991 book, Cyberspace: First Steps [6]?

Andreea Ion Cojocaru: I did not, no. Should I?

Peter Wasilko: Yes, you should. It has very interesting presentations of abstract information spaces. And one of the ideas was, to have higher dimensional space represented as multiple three-dimensional spaces that can unfold to reveal nested subspaces inside. Sort of like, you're looking at three walls of the cube, then another sub-cube could open based upon a point that was selected within the first cube representing another three dimensions of
the abstract information object. Also it introduced the idea that you could be representing a physical object in a space, but the space itself could represent a query into higher dimensional space. So the point in the space would represent the query corresponding to the three dimensions that were currently displayed in the one space, and that would then, control what was being displayed in another link space. So just the most fascinating thing I’ve read in a long time. And I keep coming back to that book and encouraging everyone in our group to take a look at it. So I highly recommend it. And when you do get a chance to read, I’d be extremely interested in what your reaction is to those chapters.

Andreea Ion Cojocaru: I want to add something quickly. So the thing that crosses my mind, which again, it's not something I'd usually say in public, but like, why not? Because today's discussion is already going interesting places. What crossed my mind, as you describe the book, which I will absolutely read, is this: so let's say, I just said that I, sometimes, like to embody an entire room. We can't understand these complex spaces and nested spaces and four-dimensional spaces and so on. But can we, if we are a room? What kind of perceptual possibilities and cognitive possibilities would that open up? Because, of course, if you truly believe that you are the room, your brain is in an altered state of consciousness, basically. Not in the spiritual sense in any way, but at the cognitive level. So again, this is kind of wild speculation. But that's just the thought that crossed my mind.

Andy Campbell

Dreaming Methods - Creating Immersive Literary Experiences

Dreaming Methods has “always been at pains not to place text in front of images, or beneath them or to one side, like labels on tanks at the zoo or explanatory plaques next to pictures in a gallery… we explore to read. This avoids the danger of us regarding the texts as more important than the imagery. It pulls us in, and it makes [the] work inherently immersive and interactive.” – Furtherfield

image

Figure 3. Campbell, 2022.

How can text – when it changes from ‘static’ to ‘liquid’ in digital environments – become as absorbing and comprehensible to readers as traditional text? And what sort of effect can it have?

Since 1999 Dreaming Methods has developed challenging and compelling works of digital fiction that blend text with immersive sound/visuals and explorative gameplay. These works often include experimental narratives-in-motion (animated, fragmentary, and multi-layered texts) which require different methods of both writing and reading.

This short talk explains how our approach has evolved whilst maintaining a clear artistic vision: from early browser-based technologies such as Flash to ambitious narrative games and VR experiences. We offer some fascinating insights through several real-world examples from our portfolio, including a virtual reality mobile library van/space shuttle designed to encourage children’s literacy and a spoken-word VR poetry experience currently shortlisted for the London Film Festival XR Prize that tells the stories of three Northern women.

Video of presentation: https://vimeo.com/onetoonedevelopment/review/753519382/02550aa9bf

Presentation (pre-recorded for the Symposium)

Dreaming Methods is a creative studio that develops immersive stories with a particular focus on writing and literature. We’ve been producing digital fiction for over 25 years.

Much of Dreaming Methods’ early work was dark in tone and highly experimental. A mix of surreal dreams and urban horror, it was published online, mainly through Adobe Flash to shift away from the then quite tight constraints of HTML. My approach was to treat text as a visual and fluid entity, to challenge the reader to the extreme, to make the structure of the stories themselves something unreliable, unstable.

We use a lot of the techniques that we originally developed with Flash to inform our current approach to digital fiction – especially when working in VR.

WALLPAPER for example, part of a research project with Professor Alice Bell from Sheffield Hallam University called Reading Digital Fiction, is multi-layered in its approach to text. It’s an atmospheric and tense narrative with some surprising twists.

The text within WALLPAPER appears on physical items within the gameworld, such as on postcards and letters to give a sense of grounding and normality, but it also has a ghostly presence: hand-written, circular, and floating like the cobwebs of memories; and as a flowing underlying texture that exists just beneath the environment’s surface.

In The Water Cave, an explorable VR poem about depression, a single thread of glowing text acts as an umbilical cord through the entire experience, guiding the reader/player out of the depths of the cave towards the surface, even though at times, ‘clinging to the words’ means having to submerge beneath the water.

Digital Fiction Curios, which we created as part of another research project with Professor Alice Bell, is a prototype digital archive for VR that uniquely houses a selection of our old poems and stories created in Flash – a response to Flash being made redundant in 2020.

Visualised in the style of a magical curiosity shop, readers/players can root around in the environment, opening cabinets, digging into boxes, examining, and reading digital fiction from as far back as 1999. One of the most fascinating elements of this project is the ability to view old work in a completely new way. Curios also offers some re-imaginings of what these poems and stories might look like had they been created using today’s technologies.

Our most recent VR work, Monoliths – a collaboration with Pilot Theatre, funded by XR Stories – immerses participants in the evocative tales of three Northern women through a series of surreal and atmospheric virtual spaces. This project treads a fine line between giving the participant enough imaginative room to visualise the stories, which are told through spoken-word poems, and making them feel as if they are existing within them.

Interactivity is gentle and stripped back; during the final sequence, standing on a rocky beach at sunset, you’re ‘handed’ small, beautiful stones to examine as the poem flows.

A common thread throughout all our work is a sense of immersion – we look to create portholes into self-contained, often short-lived worlds; dream-like environments where text manifests and stories are told in all kinds of intriguing and unexpected ways. It’s taken a long time for us to develop our voice and approach – and of course, it’s still evolving. Methods of writing are changing but so are methods of reading. That’s what we’re seeing right now, through our current projects.

Links

https://dreamingmethods.com
https://dreamingmethods.com/portfolio/monoliths

Annie Murphy Paul

Operationalizing the Extended Mind

In the more than twenty years since the publication of the seminal paper by Andy Clark and David Chalmers titled The Extended Mind [8], the idea it introduced has become an essential umbrella concept under which a variety of scientific sub-fields have gathered. Embodied cognition, situated cognition, distributed cognition: each of these takes up a particular aspect of the extended mind, investigating how our thinking is extended by our bodies, by the spaces in which we learn and work, and by our interactions with other people. Such research has not only produced new insights into the nature of human cognition; it has also generated a corpus of evidence-based methods for extending the mind. My own book—also titled The Extended Mind [9]—set out to operationalize Clark and Chalmers's idea. In this talk, I will discuss the project of turning a philosophical sally into something practically useful.

https://anniemurphypaul.com/books/the-extended-mind/

image

Apurva Chitnis

Journal : Public Zettelkasten

The future of knowledge management on the internet

These last few weeks I've been building my own Zettelkasten. It’s an intimidating German word, but the idea is simple: when you’re learning something, take many small notes and link these notes to one another to create a web of connected notes. This is more effective than taking notes in a long, linear form (as you might do in Apple Notes or Evernote) because you can see the relations between ideas, which helps with your understanding and retention.

image

Figure 4. Zettelkasten. Clear, 2019.

The core idea behind Zettelkasten is that knowledge is interrelated — ideas build off one another — so your notes — your understanding of knowledge — should be interrelated too. Wikipedia is structured in a similar way, using links between related pages, and in fact even your brain stores knowledge in a hierarchical manner.
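
To make the "web of connected notes" concrete, here is a minimal sketch of a Zettelkasten as a data structure, in Python. It is purely illustrative: the class names and the two sample notes are invented examples, not part of any existing tool.

from dataclasses import dataclass, field

@dataclass
class Note:
    note_id: str
    text: str
    links: set = field(default_factory=set)  # ids of related notes

class Zettelkasten:
    def __init__(self):
        self.notes = {}  # note_id -> Note

    def add(self, note_id, text):
        self.notes[note_id] = Note(note_id, text)

    def link(self, a, b):
        # Links are bidirectional: each note records its relation to the other.
        self.notes[a].links.add(b)
        self.notes[b].links.add(a)

    def related(self, note_id):
        return [self.notes[i] for i in self.notes[note_id].links]

zk = Zettelkasten()
zk.add("webs-of-notes", "Small linked notes beat long, linear notes for understanding.")
zk.add("wikipedia-links", "Wikipedia connects related pages in a similar way.")
zk.link("webs-of-notes", "wikipedia-links")

for note in zk.related("webs-of-notes"):
    print(note.note_id, "->", note.text)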

Limitations today

But as powerful as they are, Zettelkastens implemented today are limited in two ways: firstly, they are only used for knowledge-work, and secondly, they only represent knowledge in your mind, and no one else's. These limitations are debilitating to the potential of Zettelkasten, and more broadly how we communicate online.

I believe that not only knowledge, but all sentiment and expression is interrelated.

Further, my knowledge and sentiment are built off of other people’s knowledge and sentiment, i.e. they extend beyond myself.

For example:

and James Blake could easily see our covers by following edges along the graph.

These are just a few ideas, but if we each made our Zettelkasten public and interrelated to one another, then there would be as many interaction patterns as there are people in the world. This would unlock new forms of consumption and creation that are not possible today.

This knowledge and sentiment graph could be queried and accessed in a huge number of ways to answer a broad range of questions. You could effectively upload your brain to the internet, search through it (and those of others), and build on top of everyone’s ideas and experience. This is a new way of representing knowledge and expression that goes beyond the limitations of paper and Web 2.0: it allows us to work collaboratively, in ways that Twitter, Facebook and friends just aren’t able to offer today.

Implementation

What data-layer should be used for storing this data? A blockchain is one idea: the data would be open and accessible by anyone, effectively democratising all knowledge and sentiment. It would be free of any centralised authority - you could port your knowledge to whatever application you wanted to use, and developers could build whatever UIs make most sense for the task at hand. Finally, developers could create bots that support humans in linking and connecting relevant ideas to one another — a boon for usability, efficiency and discoverability.
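
Whatever the eventual data-layer (a blockchain is only one candidate), the records it stores could be quite small: a note, the identity of its author, the notes it builds on, and a content-derived id so anyone can verify the record wherever it is copied. The sketch below is a hypothetical illustration of such a record in Python; the field names are invented, and no particular chain or protocol is assumed.

import hashlib
import json
import time

def make_note_record(author, text, links):
    # 'links' holds the ids of notes this note builds on, possibly by other authors.
    body = {
        "author": author,
        "text": text,
        "links": sorted(links),
        "timestamp": int(time.time()),
    }
    serialized = json.dumps(body, sort_keys=True).encode("utf-8")
    # Content-addressing: the id is a hash of the record itself, so any client
    # can verify it. A real system would also attach a cryptographic signature.
    body["id"] = hashlib.sha256(serialized).hexdigest()
    return body

record = make_note_record(
    author="apurva",
    text="Linked notes are more effective than long, linear notes.",
    links=[],
)
print(record["id"])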

Challenges

The biggest challenge with this idea, if we use the blockchain as the data-layer, is that the information a user would create is public and permanent. You may not want the world to know you believed something in the past (e.g. if you were a fan of X in your youth), but you cannot easily delete data on the blockchain. You could, however, add a new note to explain that you no longer believe some idea — this would be particularly useful to any followers of yours, who now have additional context about why your opinion changed.

Similarly, you'd be revealing all of a piece of knowledge or none of it; with a rudimentary implementation, you couldn't partially reveal a belief to just those you trust. Zero Knowledge Proofs might be a fruitful solution here.

The second big challenge is how to present this data visually to end-users. Solving this particular challenge is outside the scope of this article, but it suffices to say that linear feeds (such as Twitter or Facebook) wouldn’t work well. If these barriers could be overcome, public Zettelkasten could not only be how we represent knowledge online, but also how we understand ourselves and each other in the future.

Barbara Tversky

Journal Guest Presentation : Mind in Motion

https://youtu.be/RydjMrG9sDg?t=714

So, thank you for inviting me. I have far too much to tell you. And I’m trying to tell it through visuals not in the book. The talk will be like pieces of hors d'oeuvres, so a bit disjointed, but they're meant to set up talking points so that you can ask questions, or discuss things. I should say that you're more punctual than my students, but my students are far more geographically dispersed. Kazakhstan, China, Korea, Japan, just everywhere. And so, "Zoom" does enable that kind of interaction in one class.

I’m going to share a screen and I want to, before I show pictures, I just want to say a bit, without a picture, of how I got into this field at all.

I’m a bit of a contrarian. When I was a graduate student, people were reducing everything that people thought about all representations of the world to something like, language, or propositions. And my feeling, looking at that, and I did look at all the research at the time, is language is efficient, decomposable, it has all kinds of advantages. I rather like it, I’m using it right now. But it seemed to me that language couldn't begin to describe faces, scenes, emotions, all kinds of subtleties. And then, I started thinking that space is half the cortex. So, spatial thinking must be important. And by spatial thinking I mean the world around us, and the things in it, including our own bodies, other people, objects, scenes. And that spatial thinking evolved long before language, which occupies a rather tiny, but important, place in the cortex, but came much later, and is less connected with the rest of the cortex. And then you think, anyone who's been a parent, or owns a dog, that babies, and other creatures think and invent so many marvellous things without language. And for that matter, so do we.

So, I got interested in spatial thinking. These are some of the early ways that we communicate. Gesture arises in children long before language. And in fact, children who gesture quite a bit speak earlier. Games where we're imitating each other, taking turns, alternating what we're doing, this kind of interaction in games, rolling the ball, rolling it back, it builds trust. It sets up conversation, which is, you say something, I say something. So, it sets up cooperation, conversation, and many other things. This is done early on and communicated by action, by actions of the body and reciprocal expressions on the face; it isn't communicated by language. So, I’m going to jump lots of jumps, and I want to talk now, because you're interested in text, about kinds of discourse.

I want to jump again, I already talked about how communication begins in humans and other animals as well. Through the body, through the face, through actions. And I could talk, at this point, about mirror neurons, but I’ll skip that, just leave it as a teaser.

The earliest human communication (and 'human' probably includes Neanderthals and other hominids) goes back at least 40,000 years. It keeps going back, as this was a discovery in the last few years. You can see hands there, there are animals there. It's from Sulawesi. And, as I say, these are being discovered everywhere. Sulawesi, 40,000 years, current oldest cave art:

image

This is the former oldest map, 6000 years. It shows two perspectives, an overview of the paths and rivers, and a frontal view of landmarks. Linguists don't like this. Geographers don't like two perspectives. But people seem fine with them. Ancient Babylonian clay map:

image

This is the current oldest map, it's about two inches by one inch. A stone. It shows the surroundings around the cave where it was found, some 13,000, 14,000 years ago. And it's tiny. So, it could be taken with you, to guide you on going back. Map on stone block, southern Spain, 13,660 years ago:

image

A map of the sky going back 5000 years. Sky Map of Ancient Nineveh 3300 BCE:

image

This is a valley in Italy, it's a drawing of a petroglyph. Again, two points of view. Bedolina, Italy, 2000 BCE:

image

Eskimo maps. They were carved in wood, very beautiful, carried on canoes, they showed the outlines of the coasts. And they floated, in case they fell in the water. Eskimo Coastal Map:

image

South Sea Islanders Map, probably familiar to you. Shells representing islands, bamboo strips, the ocean currents, which are like the highways of the ocean. And at least some of the people that were trained and carried these with them, 2,000 miles on the open ocean, returned home. South Sea Islanders Map:

image

A map by North Coast Indians, showing the various settlements on their hands. A map by North Coast Indians:

image

Now I’m jumping again to depictions of scenes. Again, going back 40,000 years. Chauvet. Going back even farther in Sulawesi, although I’m not sure. Chauvet Cave 40,000 years ago:

image

This one I especially like, it is in the book. It's a petroglyph on the left, and the drawing of it on the right. And it's showing two suns in the sky. Quite remarkable what could account for that. An Indian astronomer did some history on it and found that, at about the time they could date the petroglyph, there was a supernova. And it was such a remarkable event that someone inscribed it in a stone. Stones were, in a way, the newspapers of antiquity. Supernova: 4000 BCE Kashmir:

image

Here's another example from the U.S., a whole valley full of these. It's called Newspaper Valley, and it has many of these petroglyphs showing events. 'Newspaper' Valley:

image

So, events in making bread in a tomb in Egypt. Bread making in Egyptian tomb:

image

Events in the Trajan Column. Trajan Column:

image

Now we have calendars, they also go way back. Some circular. Some tabular. Calendars:

image

All these forms become important, but I won't be able to talk about them. Then we have number. We have tallies. Again, you can find them all over the world. It's not clear what they're representing. Incised ochre tallies Blombos Cave, South Africa 70-100k:

image

But having a one-to-one correspondence from a mark to an idea, to an object, to people, to whatever they were counting, moons, is a rudimentary form of arithmetic that was, again, inscribed in stone.

So, ancient visualizations represent:

These are all important concepts, and you will find them in the newspapers, journals, magazines of today. And they're so important that the brain has specialized areas for processing them. And what's extraordinary about all of these is, they can be spatialized. So, this is part of my argument that is, spatial thinking is foundational to all thought.

Early communications began as pictographs. In some way, you can still find... Well, there was a civil war colonel who collected these during battles and then, Dover later printed his findings. They're quite remarkable. This is a love letter between the two animals. On the left are her totem and her lover's totem. And it's a map leading him to her tepee, and she's beckoning him there in the map. Love letter:

image

In the 18th century, the age of enlightenment, we finally get graphs.

image

Figure 5. Trade-Balance Time-Series Chart. Playfair, 1786.

Because the early visualizations, the ones that I showed you, except for time, were more or less things that were actually in the visual spatial world. But more abstract concepts, like the balance of payments, and graphs of them, developed only in the late 18th century, and they began to blossom. So, Diderot, I would love to walk you through this, it's a way of teaching diagrams.

The top half is a scene, which would be familiar to 18th century eyes. The bottom is a diagram. It differs from the top, and things are arranged in rows and columns. There's a key. Lighting is used not naturally, but to reflect the features of the objects. The objects are sized. So, you can see them in the diagram, not in their natural sizes.

image

Figure 6. Pinmaker’s Factory. Diderot, 1751.

So, this is a visual way of teaching people what a diagram is. In fact, by now, we've diagrammed the world, and we've set up where different kinds of vehicles, pedestrians can go, where they can't go, where they can park, when they can go, and it moves us through space in an organized way. But we've really diagrammed the world.

image

Figure 7. Diagrammed the world. Tversky, 2022.

Graphics augment cognition, they:

Animations are compatible with thought, in the sense that, they use change in time to convey change in time, but they're hard to perceive. They show but don't explain. And most of the things that are animated, when we talk about them, chemical processes running, climate change, we talk about it in steps. So we think about these things in discrete steps, not in this continuous way. Which, as I’ve tried to show you, is difficult anyway. I’m sure good animations can be designed, but it's trickier than some people think. And obviously, animations appeal to the eye. We're all, in one way or another, addicted to movies and music.

Comics. I want to jump to comics because they show all kinds of lovely ways of expressing meaning that are rarely seen in traditional graphics, whether they're infographics or graphs and charts. So, one thing comics artists can do is use space to segment and connect time and space. Here you get an overview of the scene, and then you get the action superimposed on it, in frames. This was used by the ancient Aztecs, not just modern comics artists.

image

Figure 10. Gasoline Alley. King, 1918.

Visual anaphora. You can get from one frame to another following this red book. The "New Yorker" cover is not just showing writing, but it's a visual story and a pun.

image

Figure 11. New Yorker Cover. New Yorker, 2008.

Visual anaphora provides continuity over changes in space & time. So the book ends up in a trash can being burned by homeless people to keep warm, and the verbal name is "Shelf-life." But you can follow it because of the anaphora provided. Something from frame one is preserved in frame n-plus-one. And so on, so that you can follow the continuity. As with good stories and good movies, often you want to break the continuity to create suspense.

Here, following the eyes, and the pointedness of the frames, allows you to go back and forth and understand the David and Goliath story.

image

This one's a little harder. It's a beautiful book called "Signal to Noise" by Neil Gaiman and illustrated fantastically by Dave McKean.

image

Figure 12. Signal to Noise. McKean, 1989.

It's showing an aging director, and he's actually dying of cancer, and he's got photographs from many of his productions on the wall. You can see he's thinking. And it shifts perspective to what he's thinking about. He's looking, and you can see the perspective switch between the man in the blue coloured shirt and what he's thinking about, as he watches, looks at all of these frames, and then finally, he can't stand it. "Stop looking at me!"

image

Figure 13. Signal to Noise 2. McKean, 2022.

So, he's both reviewing his life and haunted by it. And again, it's conveyed visually. Steinberg, the master, conveying peeping toms through a mirror that reflects the guy watching from the opposite apartment.

More Steinberg. A pun, "Time Flies." More comics (only one shown).

image

Figure 14. Don't. Steinberg, 1969.

So here, there are so many devices: visual, spatial, metaphorical, or figures of depiction.

image

You have puns here, polysemy, figure/ground. I want to draw your attention to the old-fashioned telephone cord, which some of you, at least, will remember. So this woman is drawing those other people into a conspiracy by calling them on the phone. So, the phone cord is a literal phone cord, it's a metaphoric phone cord, drawing them into the same conspiracy. It also serves as the frames of the panels. So it's doing triple duty. It's something kids can get right away. Like gestures, you get it almost without thinking. It just pops out at you. So, a beautifully crafted device.

Figure/ground. You're seeing the murder, and the noise of the murder is coming through in those black figures that are superimposed on the actual scene. And the black and white drawing is emphasizing the stark brutality of the punctual murder. You snuff out a life in a second.

image

(Not illustrated) More Steinberg. "Canceling Thoughts." Again, I don't need to tell you. Visual juxtaposition. This is another Gaiman, McKean collaboration. A child is at a birthday party. You can see on the right, they're playing musical chairs. Here, the child is not interested in the birthday party. So, the child goes out, and talks with an uncle, who told the child the story of the Saint Valentine's Day Massacre by Capone, where he tied his enemies up on chairs and killed them. Shot them one by one. So, you have the chairs there, with the men chained to the chairs, juxtaposed with the birthday party, which is a little bit of a brutal game, because one child is eliminated at each round of musical chairs. So, that juxtaposition of chairs, again, is a stark reminder of the comparison between the brutality of children, and the brutality of adults.

(Not illustrated) Okay, metaphor pun. "Puppet Governments," Feininger. This is Winsor McCay, a brilliant comics artist. This is from the early 1900s, New York. Parts of New York still look like that. And this is, of course, the rat race, running on a treadmill.

(Not illustrated) This is a dream, another one of his, where a dream transports the child and then dumps the child back in bed, the way dreams end before they should end.

(Not illustrated) This is onomatopoeia rhythmicity. It's showing a chase. And by putting the panels on a diagonal, showing the speed of the chase. "Coming out of the frame." The first pig, whose house was blown up, comes out of the frame, and talks to this second pig inside the frame, and says, "Get out of there. It's safe out here." And then, the pigs all go berserk.

They get out of the frames. And the frames are on the floor, and they're stamping on them. So, this version of the three little pigs is a riot. And again, kids can get it.

I’m going to end with another Steinberg. Steinberg drawing himself. Again, a visual way of understanding drawing portraits and so forth. So, I’ve raced through a lot, and I haven't covered everything that Frode wanted me to talk about.

image

Q&A

Brandel Zachernuk: Hi, Barbara. Thank you. This is brilliant. I love the fresh ground and the emphasis on the text here. So, my question is not about animations per se, but about progressively recomposed images accompanied by illustration, via the speech of the illustrator. The actual drawing of lines along with narration. Is that something that you've ever studied, or that you would expect to have any particular effect from, in contrast to seeing the completed image in its entirety?

Barbara Tversky: That's what we do in classrooms, right? I mean, that was the old style, you know. I have many mathematician friends who still insist on going to the board as they speak. And watching it unfold, and the rhythm with which it unfolds, and the verbal accompaniment at the same time, I think is very effective. So, what you're pointing to is one way that animations can be made more effective. They unfold in time with narration and explanation. And they add a bit of drama. What's going to come next? So, I think that's great. And at the back of my head, when I was thinking about this, is: What can you do with text? And it will amplify it. And that is exactly the sort of thing that one can do. And it is like a comic, combining language, and symbols, and sketches, and so forth, all at once. There are beautiful examples on the web. Just lovely examples of people using that technique. And I’ve been teaching comics for probably 20 years, on and off, not quite. And I see a younger generation growing up with that medium, and drawing and writing at the same time. So, I think people will get adept and talented at doing that, at illustrating what they think, while they're thinking it. And I think that's just great. It gives people an extra way of expressing themselves that's quite poetic, or can be quite poetic, but it's also wonderful explanations. So, yeah. I’m a real fan of that.

Brandel Zachernuk: I’m curious, have you ever seen Ken Perlin's work at NYU around being able to draw in Virtual Reality?

Barbara Tversky: I was an early fan. Ken is a friend, and I was an early fan of his, exactly on chalk talk. And in fact he and I and Steven Feiner, whom I work with, and Hiroshi Ishii at MIT, the four of us put in grant, after grant, after grant to expand it, and NSF didn't like it, and didn't like it, and didn't like it. So, a real disappointment for all of us. Because classroom teaching that way is, again, natural, and what Ken's animations did is, you're talking about a pendulum, and then it could animate the pendulum depending on the length of the string, and so forth. So, being able to speak, and use mathematics-driven animations, I thought was super! Just a super way of understanding. So, yes.

Bob Horn: Oh, hi Barbara. Of course, the question I will ask will not be a surprise to you. I’m very interested in, and wonder about, the degree to which you've done research on the textual elements intimately integrated with the spatial elements. That is, most of what you've just presented has been the spatial aspects of the kind of visual language communication that we are all using. In addition, the diagrams and comics rely, it seems, as much as maybe 50/50 or even more sometimes, on the words, and how the words are integrated with the visual elements. And that's been something that I’ve been very interested in, particularly in diagramming. So, I’m wondering if you've gotten your research to go in that direction, to analyse, and find out how text is integrated with the visual elements?

Barbara Tversky: We've done a lot of work that skirts around that. We've shown that you can go back and forth between visual descriptions of maps, or many kinds of diagrams, and the visual spatial. That the same underlying concepts are driving both of them. But that the visual form, for example route maps, is usually, not for everybody, but usually, a more effective way of communicating that. It's a long story. But I agree with you that in many comics, what's going on is in the words. I think they're poor comics when they're talking heads. I talk about them as talking heads. If you look at Larry Gonick's science and history, if you look at his comics, they're cheap, 10, 12 dollars each. They're absolutely wonderful. His book on statistics is used as a textbook in many places. At one time, even Stanford. And he's a neighbour in San Francisco, and his books are absolutely fabulous. He was all but dissertation in mathematics at Harvard. A self-taught cartoonist. And he began doing, essentially, visual spatial textbooks on different forms of math, science, history, sex, environment. He's got bunches of them. He always works with a domain expert. We've appeared together on many occasions. And once I had the temerity, stupidity to ask him, "Larry, what do you put in pictures? What do you decide to put in pictures? And what do you decide to put in words?" So, he's very tall, I’m not, and he kind of looked down at me, with his full height, and said, "Barbara, I do everything in pictures. What I can't do in pictures, I do in words." And he's incredibly inventive in what he does in pictures. I have my students go through one of his books, they each choose one, and they go through looking for the visual spatial devices, and every year they come up with things that I haven't thought of. They see things I don't. And it's usually the visual spatial telling the story. So he's an excellent example of that. There are others. And Scott McCloud and his book "Understanding Comics" is a gem. It's a gem about stories and narratives, not just about visual stories. But he does talk about the roles of language. And here you'd have to add symbols, like arrows and mathematical symbols. You have to add in comics the way the font, the size of the font, and all the squiggles that are added that give you information about movement, and mood, and smell, and sound. So, you can enrich the depictions in so many inventive ways. And what I’ve been trying to do is urge people who make charts, and graphs, and infographs for science books, to learn those techniques. And, as I say, I’m just pleased to watch younger people. I have a
sample of eight grandchildren, and the grandchildren of many of my friends, and watch them latch onto graphic books, and see the graphic books that are doing so much in the depictions besides text boxes, and I’m very optimistic about people coming up with really creative ways to do visual storytelling. So, long answer. Sorry.

Ben Shneiderman: Thank you, Barbara for a wonderful, intense, movement through all the space of these wonderful ideas. I think it aligned very well with Bob Horn's visual language thinking, which has been an inspiration for me as well. But one of the charms of your book was that, it went beyond the spatial and the visual, to the idea of mind in motion. And could you say more about dance and body movement? You talked about gesture, you talk about hands and how people communicate, express themselves, learn by being in motion. Tell us more about that side of the story.

Barbara Tversky: So, it means speech is in motion. And speech is accompanied by prosody. I emphasize certain words, and de-emphasize others. I can give you my mood by, I can sigh, I can sigh short or long, and that's motion, and it's just in our voices, and it's communicating so much more than just the words on page. Although, text again is words on page. And there are ways of amplifying text by putting "dot, dot, dot, dot," that capture some of that. And sure, our bodies indicate, I mean, I said gravity, if I’m feeling good, I’m standing straight and strong, and if I’m in a depressed mood, I’m down, and we can pick that up in others in a second, especially people we know. We can pick it up from hearing their feet behind us. What kind of a mood they're in. Who it is. We know these cues. Again, they're active, motion cues to people. They're very simple, not as complicated as dance. But really creative, and wonderful dancers, and choreographers can create absolutely amazing displays of emotion, human interaction, human non-interaction, individual feelings from the way they do dance.

They're uncanny. And you see that in theatre, they often hire choreographers to orchestrate how people are moving, and talking, moving their arm, agitated or smooth. So, yes. Huge amounts of human meaning gets conveyed through the motion of the body.

Ben Shneiderman: I do think really that deserves a much-expanded part, just the idea of walking together, being in a forest, moving forward, sailing on an ocean, flying through the air, walking up a mountain. All of those to me, they're not just physical experiences, they're cognitive experiences as well. And they enrich us. And I found your book really opened my mind and thinking to the realization of how much the body plays a role. Your book adds so much to enrich the dimensions of analysis, which have been, as you point out, largely linguistic, moving towards spatial, and visual, and maybe auditory. But the idea of moving towards body motion, that was really, to me, a highlight.

Barbara Tversky: Thank you. I was limited in the book by what there was research on. This
is the problem of being a scientist. You don't want to go too much beyond research. I use a lot of examples, but the examples are all founded in research findings. But I couldn't go off the way until now. But really, if you think about it, every organism, even a virus, needs to move in space to survive. And the basic movement is approach or avoid. And those are replete with emotion. You approach things that you're attracted to, that might do you good, that you want. You avoid things that have negative valence. So that, from the get-go, movement is for survival. Even grasses have to move toward the sun or away from rain in order to survive.

Even things rooted in the ground. So, we all have to move in space to survive. The basic movement is approach or avoid. And those come with emotions, which I think underlies some of Damasio's claims, although he's got brain there too, without emotion nothing happens. And emotion and motion in English and other romance languages have the same root. I don't know about Germanic, or Chinese, or other languages. But they do have the same root. And we talk about being moved as an emotive response. So, I do think anything that has to do with life, really does derive from motion in space.

Ben Shneiderman: Exactly. But look at how they impact on design, or even the "Zoom" in front of us. Some people have just their text name. Some people have a frozen image. Others are live and animated. I like to be Zoomed standing up, so I can be freer to move around. And I think I express myself better, and I can reach out, or I feel the other person better when they are animated, as well. Anyway, thank you.

Barbara Tversky: Yeah, absolutely. I’m frozen in place in a classroom, I can't stand. But when I move around the classroom, I’m going the whole width, and sometimes the length of the classroom. So, I can't do that on "Zoom," it drives people crazy. So, I plant myself in a chair. And when I’m listening to "Zoom," it's often on my phone, walking. So, a longer story. "Zoom" has advantages and disadvantages. Like anything.

Frode Hegland: Yeah, absolutely. You mentioned the question of other languages, like Germanic languages. In Norwegian, "følelse" is the word for feeling. But that can be, you can touch, it's also touch feeling, as well as an emotional feeling. But the funny thing is that "bevege seg," which means to move, is also what you would say if you were emotionally moved by something.

Barbara Tversky: Yeah, nice. I should ask my students who are in Japan, or China, or Kazakhstan, or Malaysia what their languages do. Yeah, thanks.

Peter Wasilko: Yes. Have you given any thought to the evolution and interplay of note-taking? And I was recently reading "Lines of Thought," (INDISTINCT) typesetting and textbook design. And also, do you have any thoughts about interactive fiction systems?

Barbara Tversky: I don't really know much about the history of textbooks. I do know there are a huge number of experiments trying to compare text and diagrams or graphics in one way. And many of them are unsatisfactory, because you can have good text, or poor text, and you can have good graphics, or poor graphics. And I think people in the info design graphics community have been developing standards, or best practices, for good graphics. It is complicated, because it depends on your audience, and what you're trying to convey. You can't have absolute principles like you can for font size. And there's a little bit of work on textbook design and what it should look like in good text and poor text. But the range, in both cases, seems so great. And studying it would take an historian of sorts, to know the evolution, the development of those things. And it would have to go across cultures. What happened in the East, as well as what's happened here. So, I think that's way beyond my expertise. But I don't think I answered all the parts of the question. Probably I can't answer them.

Frode Hegland: The second part was on interactive fiction.

Barbara Tversky: Ah, interactive fiction. I don't know whether people have done research on it. I know my kids, who are now parents themselves, loved it as a kid. They weren't around when I was a kid. But my kids loved it. And then, of course, the computer games that are built on storytelling. People get incredibly involved in. So, probably those designers know a great deal. They have a lot of heuristics and rules of thumb for that. So, the one kind of discourse I didn't put up is conversation. And that grounds interactivity. And conversation isn't like a lecture, it's two or more people speaking and no one can dominate. As I’m dominating now, in a normal conversation no one can dominate. What you get in that kind of interactivity also, is little bits of information. Bite size. That you can consume, and it arouses a question, and then there's another bit of information. And interactive graphics do that. They allow you to involve yourself in it, in little bits, where you can get background where you need it, or where you want it. Not all of us want, you know, there's that old joke about a book about penguins that told me more about penguins than I ever wanted to know. So, different people will want different amounts of information. And that interactivity allows me to have a conversation with a graphic, where I’m asking bite-sized questions, and getting bite-sized answers that lead to something else. So, I’m building up my own knowledge that way. And I think the interactive fiction can do that. And also add the suspense to it. We started at one point trying to compare comics with traditional graphics, or traditional graphics plus text. Too many things were going on at the same time. Too many uncontrolled variables. And as a cognitive scientist, it's those crucial variables that we're after. When you're a designer, or an educator, it doesn't matter to you what's doing it. The combination is probably doing it. It's in the interaction, amongst those elements. So again, then finding guidelines for creating good
ones becomes difficult, because there are so many moving parts. I mean, like building a city. But nevertheless, we can judge which ones are more and less effective and why. There are times when I want to lecture or a book. There are times when I want that interactivity. Again, I’m not sure I’m getting at your question, but.

Alan Laidlaw: Sure. I’ve got so many questions. Thank you so much for giving this talk. And I had the childish desire, which I still may succumb to, of showing off every reference you made. That's somewhere behind me because I’m a nerd. But that's great, I used to be a cartoonist, it's where I got started, and a lot of that came from reading ‘Understanding Comics’, and that was my first entry to like, “Oh, this person thinks like me.” And I’ve never had these thoughts. But enough of that, because I could go on with many questions. So instead, I’ll throw one that just popped up while you were giving the talk. This may be out there, and feel free to dismiss it. The thinking is in the context of cave paintings, and sort of where we got started in scribing. The commonality, it seems to me, is that it's always the physical act of resistance against a surface. And so, I’m wondering about that in the context of where a theme has been trying to probe into VR, what that'll be like? Is there any research around how that resistance, that pushing against something to create, is different, or, I don't know, is it a class?

Because with VR, we could say at least, there's nothing to push against at the moment. But in dance, there's also nothing to... Well, there are motions, there's creation that doesn't involve resistance exactly, not in the same sense of pushing against something to. Does any of that make sense and is there any work on it?

Barbara Tversky: Yeah, thank you. And I could probably learn a great deal from you, as a cartoonist. I absolutely agree with you. I don't know about research, but it's one of the complaints that people in architecture schools have, that people no longer know how to draw. That drawing on pixels is just very different from using a pencil, or a pen, or a brush. And artists, and calligraphers, and so forth talk about what the thing is, how it's held in their hand, what are the motions they need to do. Cooks. Any of you who cook knows you have certain knives that work well with your hands, and others that don't. Resistance in dance is gravity. And your own body, what it can do. Will it stretch enough or not? Does it have the strength? So, that feedback to the body is huge.

Alan Laidlaw: I guess I put it... Sorry to reframe that, the aspect of us versus surface is what I was kind of trying to... The creation always seems to have a surface that's separate from us. Anyway, continue.

Barbara Tversky: Yeah. I mean, I’m trying to generalize that to resistance, and feedback to the body, and the feel that it is when you're dealing with a surface. And again, different surfaces. Just writing, paper makes a difference. Which kind of paper you're writing on? Or
doing charcoal on, or watercolours, all of that. And that, I think, it's more than the resistance, it's the subtlety of your hand movement, and wrist movement, and our movement on that surface, what it takes. And in calligraphy, they practice for years the strokes, and how they make them, and how they twist the brush, and the kind of paper. So, all that interaction with the medium, what it gives your hand. And artists, I worked with a bunch of artists interested in drawing, and some of them had done doctoral thesis, and one of them looked at professional artists, and accomplished artists, and novices on drawing, how much they're looking, and how much they're drawing, and what are the time spans of the interaction. And in artists, it's much longer. They can look and draw a lot. And look and draw a lot. Novices are going back and forth. So, for artists the knowledge is already in their hand of how to translate what they see, this is life drawing, into their hands. And they talk about it as a conversation between the eye, and the hand, and the mind. And if you try to get them to talk words at the same time, they can't do it. The words get in the way. It's a visual, spatial, motor conversation that the words get in the way. And architects say the same. They can talk afterward. Explaining what they were doing from a video, but while they're doing it, they're deeply engrossed in this feedback loop. Does that align with your experience?

Alan Laidlaw: Yeah, to play off of that, that's actually great. And it got me thinking that now we have keyboards as our main interface. Which is a sad state compared to the richness of the ideal, the nostalgia, for calligraphy and whatnot. And yet, we have translated our focus into the simple clicking of buttons at a repeated pace and moving a mouse around. We can still get to that flow state, right? Coders do it, etc. So, that gives me hope in the VR space that, even though we wouldn't have a surface to push against to create, we would still find a way to translate it through, I guess, mainly the feedback and the style of feedback. The style of feedback would still come through, and we would still have that feeling. I was just wondering if there was something haptic, like in the way that we have gestures. I think Darwin said that, "Every culture does this." Some version of this to say, "I don't know." And if there was something about the creation of man that is pushing against something, and that means the brain does something different then?

Barbara Tversky: Sure. I mean, the feedback, and the kind of feedback, and the mode of interactivity, and some of that, I mean, VR is trying to add that kind of haptic feedback, and you certainly need it for surgery. And the VR surgery does try to add haptics, because anything you do, as a surgeon, you're relying on that. And anything a cook is making. And it's how it feels, you need that feedback. And the interactivity that comes from touching and moving, you need it for taking care of babies. When you pick up a baby and the baby is tense or relaxed, you feel it in your hands right away. So, yeah. We need that level of interactivity. Smell is another thing. When I cook I’m relying on the smells to know, I got three or four pots going and I’m relying on the smells to know, "Is this butter about to burn? So, I better lower the heat." Or "Is the rice bubbling too much? Better lower." I’m monitoring those activities with many senses. And some of it, we become completely unaware of. We just respond. The same with walking, right? Walking or running. We're not aware of all the movements. Or typing. Once we had to be aware, but by now we don't, it's automatic. And there are benefits and costs to that, as well.

Frode Hegland: I just wanted to say, I think that point about interaction was really nice to hear because, for so many decades, we have had this nonsense that interactions should be invisible. They should absolutely not be invisible, depending on when you need them. If you're walking on the ground, as you said, even with shoes you can tell what kind of ground you're walking on. That is really useful, especially now in winter, when it may be icy. So please, let's highlight how we use our bodies and interactions. That was wonderful.

Luc Beaudoin: Hi, Barbara. I’ve got a number of background projects, just background projects, in spatial cognition. I’m associated with Aaron Sloman in Britain. I don't know if you know him. He has a project on spatial cognition, the evolution of spatial cognition from an AI perspective. Aaron Sloman, you know him? There are two Aaron Slomans, one is the psychology guy, and the other one is the philosopher.

Barbara Tversky: No, I... The psychology guy was a student.

Luc Beaudoin: No, this one is technically a philosopher, but he is an AI person. But anyway, I’ll jump to something that's not with Aaron’s project; another interest of mine is mnemonics. I’ve been doing visual-spatial mnemonics myself from a scientific perspective. I missed the beginning of your talk, but I take it you've argued for the primacy of motion.

Basically, motion coming before language in evolution. And there are various arguments for that. So, that makes a lot of sense. I see, as you do, the spatial cognition, spatial and movement cognition, being fundamental. So, as such, I would think that for mnemonics it would be helpful. So I, myself, when I’m memorizing lists, you know that lists are the hardest thing to memorize. But if you can turn them into a visual-spatial sequence. And I’m not a dancer, so I’m not very good at the visual-spatial motion thing. But I found that if I can use a gestural mnemonic, then I can remember these lists. So I remember, Jordan Peterson has these 12 rules in his first book and I thought, "Okay. Well, how do I memorize that?" I'll turn it into a little bit of a dance, and the whole thing came out within two repetitions. It was quite powerful. But I haven't actually delved into the science of this. But it's something I thought, "Well, if nobody's done this, I want to do it." Are you aware of research on using gestures for mnemonics? For remembering? Apart from drawing, I know that there's research on drawing, how that helps remember stuff. Actually, I’m more interested in imagined gestures, because I don't think you need to do it. We know that in sports, athletes often will imagine themselves doing things and that helps them execute the behaviour and practice. So, there's your question. Imagined gestural mnemonics.

Barbara Tversky: So, a visual practice, or visual-spatial practice, visual motor practice for divers, golfers, or whatever does help. It helps mainly in sequencing. It doesn't help in the fine aspects of the motion. Real practice is better than imagined practice. But imagined practice is also effective in the absence of real practice. You can do it on the train. I remember, it has happened to me several times, on the New York subway, I see singers with scores in front of them, and they're imagining the music. So, part of the method you're describing is one of the oldest in the world. It's the method of loci, which was invented by the Greeks and Romans to remember their long orations. They would imagine themselves on a walk through the Agora marketplace and put a portion of their oration at each of those places and then imagine that. So it links things together in an organized way. You still have to form that association between the place in the marketplace, and what you want to remember. The same would be true of gestures. When I was learning Latin ages ago, there was a whole set of what essentially were cheerleader exercises for remembering "amo, amas, amat," and you could go through it for real, or you could go through it gesturally. So, those things can work for some people, and it's usually for meaningless information. Meaningful information is better linked through the meaning, but images will work. The famous mnemonist in the beautiful book by Luria, "The Mind of a Mnemonist," certainly remembered himself going through walks and placing images. Again, you could place gestures in the same way. I mean, it can all be effective, whatever works for one person. And people rediscover these mnemonic devices. Every 10 years, someone writes a book, it's a bestseller, and 10 years later, the field is ripe for it again. Diet books tend to come out a lot faster. I think more people are worried about their waistlines than their memories. But there are those advice books and they would include motion and gestures as well. We've done a number of studies, many on people learning complex material, like how a car brake works, or an environment. And as they're learning, they're reading text, they're gesturing. And the gestures are making a model of what they're learning. So, they're putting down dots and lines for the descriptions of the environments. And when they go to recall, they make those same movements again. So, it's clearly helping them recall as well. And if they gesture both at learning and at recall, they remember much better. And these are spontaneous, the people aren't even aware, really, that they're gesturing. We don't tell them to gesture. The gestures come from their body. Everybody learns them in different ways. Gestures, unlike words, aren't decomposable. And you could see that with conductors. You go and watch the same concert with different conductors. They're gesturing very differently. The orchestra can respond in similar ways. So, that visual-spatial language of the conductor can be quite different. We went to the opera two nights ago, the guy was dancing up and down and he was a joy to watch. And there's research showing audiences respond better to conductors who jump up and down. There's a famous video you can find of Leonard Bernstein conducting, I think Mozart, some classical piece, with his eyebrows. He had very expressive eyebrows. Nothing but his eyebrows. Now, they were well-practiced. But (INDISTINCT) and if you want to watch a really gymnastic conductor, watch him. I haven't seen him in years but he was a master. And there were (INDISTINCT) using the motion in very complex ways to guide the music. And it makes a huge difference.

Luc Beaudoin: Okay, can I squeeze in another question? I’ve often thought people who learn pictorial languages, or languages with calligraphy, that they would basically have better memory for concrete words, as well, because they can actually go through the gesture in their head, so it kind of adds to it. Do you know of any research on that?

Barbara Tversky: There's research on having more than one code for memory. If you have a verbal code and a visual code, you're going to be better at remembering something, because you have more retrieval cues. And if you add a motor code, which could be gestural, you'll have even more. If you get too many you might get confused, and it might be hard to construct them. But having more retrieval cues for the same bit of memory does work. So, drawing, imagining what something looks like, imagining how you would interact with it, all of those things can enhance memory, and there is plenty of research on that.

Luc Beaudoin: But not specifically on people who know calligraphy, or who do calligraphy?

Barbara Tversky: Some of that is going to be content-specific. Radiologists, who are trained to look for one kind of thing, like breast cancer, might not be good at broken bones. So, some of it is going to be quite content-specific. The particular patterns of pixels that tell you that there's cancer are going to be different from the particular patterns of pixels that are going to tell you it's a break. So, the movements for calligraphy are to make characters, they aren't to make images of people. Although, plenty of calligraphers could do both. Some of it is going to be content-specific, and some of it is going to be more general. And there you need to look at the specifics to know the answer.

Luc Beaudoin: Thank you very much. It's a pleasure meeting you. I cited you in my 1994 thesis, I counted four times.

Barbara Tversky: Okay. Thank you, thank you.

Brendan Langen: Hi, Barbara. Thanks so much for the talk, this is really neat. And as a funny aside, I’ve recommended your book to pretty much all of my friends who've recently become parents. I think there's so much in the first few chapters, where you just lay out how children learn, and how to create trust. You mentioned some of that. I’m really curious about how some of your findings can come to life in some of our software tools. So, there's quite a movement going on in the knowledge and creative tool space. You can think of things like "Notion," or maybe "Figma," or even "Roam Research" and other notebooks. What opportunities do you see for embodied cognition and spatial thinking in our knowledge tools?

Barbara Tversky: Oh, so many. And then, they'd be specific. But thank you for the recommendation. I keep thinking and saying to publishers, "Somebody needs to write a book for new parents, and what to watch for." From new-borns, because until children speak, I think parents aren't aware of the huge cognitive leaps that children are making... Because they're just too subtle. And if you learn what to look for, it adds to the thrill of having a baby.

And I don't really have the tools and the background to do that, but other people do. Yeah. I think there are so many opportunities for adding visual-spatial and embodied, what your body is doing. I mean, gestural interfaces have already done that. They've ruined my thumb. And I take pity on the people that have been exercising their thumbs from very young ages, because of what's going to happen to your thumb when you get to be my age. And voice interfaces may help them, but they have other disadvantages. And sometimes people ask me, I have worked with people in HCI, and computer graphics, in AR, VR, and I’m really enthusiastic about all those media. Some of the work we did with AR was trying to make people's interactions with finding their way in an environment, or repairing, or assembling something, as natural as finding your keys and opening a lock. So, there were ways of guiding your body to the right place. First, by having a virtual tunnel to guide your body to the right place, and then guide your head so that your eyes are looking at the right place, and then guide your hand to where you should make the motions. And then, it becomes as natural as doing something that you've been doing a thousand times rather than doing something new. And so, that's one example, but I think there is a huge number, and I’m really excited about what are the things you guys can do, and how you can make them more natural and comprehensible on the input side for people. Maybe you have thoughts. Because there are specifics you're working on.

Brendan Langen: Yeah. Well, you just kind of hit on something that might make sense. There's been some talk in the chat about these findings for education. And I can almost imagine a crossover with a tool like "Figma," a design tool for early-stage designers. And if you can guide them through the process, that is helping them create something that's more stimulating or sound in its interaction design. I could see that being a huge advance. Really curious to keep seeing where this research leads. Thank you so much, I appreciate the time.

Barbara Tversky: Yeah. Probably in the late 80s, early 90s, there was a Shakespeare scholar at Stanford, who was designing something that would stage Shakespeare for students. And that was prescient, and close to what you're saying. And, yeah. I think you can go a long way. One problem is scale. And there, maybe VR is better because you can get things at scale. I mean, same with architecture. But, yeah. Tools that can allow me to imagine things that would take forever to create, and therefore create better, would be phenomenal, absolutely phenomenal.

Brendan Langen: That's really interesting. Almost like bringing along a "now" sentiment into the mix, where something that takes so long to build, is often outside of the reach of what we can comprehend. That's really neat.

Barbara Tversky: And on education, I want to just put in a small plug for some research we did with junior high science students. We had them learn molecular bonding, and then half were asked to make visual explanations, and half were asked to do the normal thing you do for a test: take notes, which are essentially verbal explanations. And first, we tested them after they learned it, which was several days in the classroom, and the two groups were equal when we divided them into two groups. After creating the explanations, both groups improved without new learning. So, the process of making an explanation consolidates the material, and makes you question how it could have happened, in order to explain it. So, both groups do better. But the group that made the visual explanation did way better than the group that made a verbal explanation. So, this is natural for science, because science is so visual-spatial, chemical bonding. But their diagrams were so different. Some had sharks grabbing electrons. Some had stick people giving them. They were adorable. And you can do it for history, you could do it for a Shakespeare play. What are the relationships of all the characters? What happens over time? I discovered my father's old version of "Anna Karenina" and I stole it from him many years ago. He didn't mind. The first thing it has is the family tree. He made it to understand all the familial relations amongst the characters, and then all their nicknames.

Because Russians always have tons of nicknames. So that helped me in reading it, and he made this. My kids, doing "Dungeons & Dragons" years ago, the first thing they did was make a map. Again, from language. And that helped them with keeping track of where they were going in the game. So, education. Yeah. Creating visual-spatial representations, drawings are one form, they're easy, they're cheap. But doing it in a computer interface might work as well. Sometimes I ask, "What does all the technology add over pencil and paper?" And I think it's an important question to ask.

Brendan Langen: Without a doubt. Well, thank you so much for the exploration there.

Peter Wasilko: Yes. Do you use any mind mapping tools? And if so, how do you approach building a mind map?

Barbara Tversky: I’m sorry, what was that? How do I put what on a map?

Frode Hegland: He asked if you use any mind mapping tools and if so, how do you go about building a mind map.

Barbara Tversky: It probably depends on the content. I mean, you're going to start with a network of sorts. The trouble with the network is usually that the lines aren't labelled. The relationships, you're just labelling that there is an association between "A" and "B," or "B" and "C." And you probably want to do something more demanding, and specify what the relationship is, and then you can cluster things. But it really, in many ways, depends on the content. And you can see those of us who remember learning sentence diagramming, which was essentially a mind map, and I loved it. Or logic. You could visualize in one way or another. So, to some extent, it depends on the concept. But I think, just making networks, you want to go beyond that and talk about what is the nature of the lines. The representations. Are they inclusion? What are they? And then, go about grouping them perhaps, clustering them along common relations. And then you can go hierarchically like a phylogenetic tree. And even a phylogenetic tree has been the basis for a great deal of controversy in biology. Where do different creatures belong? Is there another life form? And of course, one eukaryote and whatever, it was long after I learned biology. So, that particular way of visualizing really helped. Bill Bechtel did great work in an actual laboratory, I think looking at diurnal rhythms. And they were diagramming for themselves almost every day what they were finding. How did they do uncertainty? This is a big issue and a big question. They put question marks. So, they put in relations, the best they knew, and where they didn't know things, there were question marks that meant, "That's an open problem, let's look at it." It really depends a great deal on content. But certainly, there is research showing that kind of mind mapping helps people organize their thinking, and learn, and communicate.

Frode Hegland: Thank you very much. So, I have a question. And that is based on my current passion, or what I think is a realization, but I may be wrong. I feel that, within five years, we'll be living a lot inside VR, AR, those kinds of spaces. And that's kind of a subset of the bigger cyberspace. But a lot of this seems to be about being disembodied, walking around with an avatar that's like a Lego situation. I know, Brandel, I see you're going crazy there. So, my question for you, Barbara, is: How do you see VR with full-body immersion where we really use our senses to the full, in the context, not of necessarily social interaction and gaming and play, but more in the relationship of work?

Barbara Tversky: Five years seems to me, very optimistic. Partly because people get fatigued in AR situations. I get fatigued. There is an uncertainty about moving around when you know you're not really in that space. And so, a lot of that needs to be worked through.

And like "Zoom," there are going to be advantages and disadvantages. And we'll see them as we go. The... I’m blocking on his name at Stanford, the guy doing VR in social situations. There are going to be, I mean, we're going to have to do it. There are cross-national teams doing design, and you can't fly everybody all the time to be together. So, it's going to happen. Yeah. Jeremy Bailenson, who's done wonderful work on social interactions, and those might be the most important for people. We found that the internet was used to send emails to friends, children, and other people that we love; that was an early massive use. There are going to be early uses of VR to be with people we love. And "Zoom" isn't sufficient. I still can't have a grandchild sit on my lap and feel the closeness. But I do think there are going to be increasing uses, there are going to be difficulties encountered, and some of them will be overcome. I doubt that we'll all be living in the metaverse, although again, I could be wrong. You need to talk to the 20-somethings that are already playing multi-person games. And it is a bit of a drug. And Yuval Harari imagines that AI is going to replace huge numbers of humans, such that the rest of us, who are useless, will exist in this metaverse, and it sounds a little bit, to me, when those people talk about it, like somebody's conception of heaven. You can have avatars of all the people you love. But then your interaction with them might not be taking place in their metaverse. How do you reconcile them? What age will they be? So, there are all kinds of cognitive and engineering ideas that need to be worked out.

Frode Hegland: I’m not going to let you get away that easily, Barbara. And first of all, Brandel is up after me, and he has an extensive, deep understanding of a lot of this. But let's forget about the "Oculus" and that kind of current stuff. And let's forget about timeline. Let's say that we have a future where we can, like the "Holodeck" in "Star Trek," we can go into it, whether we're wearing something or not, this is very secondary. But there are two things that we can change. The external stuff, the environment, and the things we interact with. But also ourselves. So, even though we do take advantage of all this VR, with our movable hands, a movable head, and all of that good stuff. With your deep knowledge of the human body and the human mind, and completely free of technical constraints, being completely fantasy, what kind of situations, or opportunities, or issues do you see for how we work together on important problems?

Barbara Tversky: First you talked about the individual, then the interpersonal. As an individual, I could imagine situations, interactions, environments, objects I’m trying to create. I can imagine them. But until I put them in the world in some way, my imagination isn't complete. And this is why designers draw. They can't hold the whole thing in their head. So, they put it down with tokens, or in VR, in the world. And that gives you feedback. It makes you see things. It expands the mind in ways that your mind can't do. So, that power of technology is awesome as a way of expanding the mind, so that I can create better fiction, better buildings, better interactions with people. I can imagine role-play. So taking the things that we already use for augmenting our imagination, like role-playing, like creating prototypes, scripts, stage designs, whatever it is, and turning them into technology, and making it easy to do those things, and explore them, could be awesome. In molecules, combining them. Or just in games: DeepMind has changed the game of chess and the game of Go. People are now interacting with those machines, studying the games that AlphaGo can play. So, I think that is mind-blowing, absolutely mind-blowing. The social interactions, I don't know how much we want to replace them. Now, there are times when I wish I had interacted with somebody differently. But I can't redo it. I can redo it in my mind, but I can't redo it for real. So, the social interactions, it seems to me, have to be in real-time. Space, we can change. We can all go to Machu Picchu together. Explore it together. Enjoy it together. But we can't replay and redesign. If I had an avatar of someone I’m interacting with, and I could interact with that avatar in different ways, and try out different things, that might help me in my interactions in the future. But I can't replay a real interaction in the way that I can replay a fiction. So, am I getting closer to what...

Frode Hegland: It's wonderful, and very deep what you had to say. Very unexpected, which is, of course, what I was hoping for. Thank you very much.

Brandel Zachernuk: I’m trying to decide which of the two questions I want to ask. I’d love to get you to go to both but I’ll start with just one. Have you done any work on the cognitive differences between writing script with a pen, versus typing, versus dictation for the purpose of producing text? What sort of internal cognitive impact there is in any distinctions that you would draw? Or do you see them as equivalent?

Barbara Tversky: Again, I would think it would depend on the person's adeptness with each of those and the content. One of my former students, Danny Oppenheimer, who does very innovative research, tried to show that taking notes in classes with a computer wasn't worse than writing. And the work didn't replicate. Unfortunately, that happens to a great deal of our research, and I think the failure to replicate means, probably, it works sometimes for some people, and it isn't a general phenomenon. But what I thought, at the time, is when you write it takes more time, so it makes you summarize. And when you type, the temptation to type down words in a row is probably not the best way of learning. You want to wait, summarize, write down little telegraphic notes. And the other thing that writing allows you to do is array them in space conceptually. In that sense, I think that could help, but it depends, really, on what you want to learn. So, as a learning tool, the only research I know of is Danny Oppenheimer’s, and he did find writing was better than typing on a computer. And there, I think, it really does have to do with how you attend to the lecture. But that work didn't quite replicate. But I have a feeling that those... I’m now in an ed school, I was in a psych department where you try to get the minimal features that are accounting for something, and in ed school, you throw the whole kitchen sink at something and you don't care about which part works. But nevertheless, people ask, "Are animations good? Is writing good versus typing?" And people want a blanket answer, and then we say, "It depends." And people don't like that answer. But I’m afraid that is probably closer to the truth. I mean, we're living in a Covid world now, and it's hard to give advice when the target keeps changing, and the disease keeps changing, and people are left with the old advice, and then complain that they can't get coherent, clear advice. So then, they toss everything out. Which is the wrong thing too, because there is good advice, it just keeps changing.

Brandel Zachernuk: Douglas Engelbart had a famous thought experiment of attaching a pencil to a brick and calling that a "de-augmentation" because of how much more difficult it would be to write with a whole brick on a pencil. But it occurs to me that, while it would be definitely slower, the words that you would tend to write, as a consequence, would be significantly more momentous and important for you. Only because you remember the effort that would be expended in it.

Barbara Tversky: Right. Any learning method depends on that. How much are you putting into it to learn it? And you're going to put different things in depending on how you're going to be tested. How you're going to use the information? How you're going to retrieve it? So, you want your encoding to anticipate your retrieval. What information are you going to need and when? And that's a more subtle set of considerations. I’m afraid I’m exhausting people.

Frode Hegland: Quite the opposite. I have two questions. But first, I’d like to ask, we have a few new people here today, Karin and Lorenzo. Have you got any questions or comments?

Karin Hibma: I am just typing my goodbye now. This was brilliant, Barbara. Thank you so much. And thank you for the invite, Frode. I am a name or a language creator, and I’m always thinking forward. So, it really helps me to understand the antecedence of these kinds of understandings. And I love the aspect of mapping as a place locator for putting words together. And thank you. I am still absorbing. So, really brilliant.

Frode Hegland: Karin, you said you are a language creator. First of all, I obviously pronounce your name completely wrong. What is your preferred way of saying your first name?

Karin Hibma: I’m Karin Hibma. People get it as Himba, and there's a tribe in Africa by that name. But that's not me, as you can see. Hibma is a region in the Northern Netherlands; a lot of last names there have ‘ma’ in them. I think it probably means ‘by the ocean,’ ‘by the sea.’ But everything in the Netherlands is. I’m responsible, with my husband, who's deceased now, for naming ‘Kindle’ and ‘TiVo’ and a few other little things in the world. And I work with companies doing strategic identities. So, a lot of times we're either creating names for new products or helping them define their language and their story, to get from where they are, to where they want to be. Which, of course, goes with (INDISTINCT) and the wonderful concepts you've done. So, I don't have your book, but I’m certainly going to be getting it and studying it cover to cover. And the "Babies Build Toddler's" book that I mentioned is really brilliant. It's a Montessori method, but very often, as I think Brendan said, “New parents don't really understand the math.” I mean, they're suddenly given this human being, which we don't realize is going to come to its full awareness over a period of 25 years. And really being able to have some kind of guide rails for parents to be able to actualize that, is pretty wonderful.

So, thank you.

Frode Hegland: Karin, I have to ask you with that amazing background, if you would like to consider writing a piece for The Future of Text Volume III coming out this year?

Karin Hibma: I would love to. I am the worst writer, Frode. I like to interact, but I find, sometimes, putting words down... But send me a note at karin@cronan.com.

Frode Hegland: Yeah, we met through Twitter, so we'll continue there. Thank you. But what you say there is very interesting, because Barbara was talking, just a few minutes ago, about writing in space. Yes, that's something really worth drawing out, because, in one sense, that's not really true, unless you're writing on sand or a huge piece of paper. Because writing, very much, is linearizing. A sentence has to be linear to have grammar. And, of course, with software, you can write a little bit here, a little bit there. But then, at some point, you have to linearize it. I just finished my PhD thesis, and the hardest bit was not the writing, that's easy, but kind of blocking it into one thing, which is almost impossible. So, I’m wondering if Barbara has any advice for all of us, including Karin, maybe in how to consider this? And by the way, Karin, for the book, don't be intimidated by how you write. Please consider looking at the previous two volumes, it's all over the place, which is a good thing. Anyway, Barbara, any thoughts on that?

Barbara Tversky: Say what that refers to again?

Frode Hegland: Yeah. What I’m referring to is, when we talk about text, there is this kind of idealized notion that you can write it down in space. But unless you're working in free-form mind mapping software, you're not writing it in space as such, you're writing it in a line. It is one single line. It happens to wrap, but it is still a single line. And in our community here, we are trying to do many things with that. Putting it here, putting it there. I see Bob's put his camera back on because this is, obviously, very much his field too. But from your work, and your understanding, Barbara, can you talk a little bit about how we should be writing in space in an ideal environment?

Barbara Tversky: There's the writing for yourself when you're working through the ideas, and that should correspond to your ideas. Then you have to put it in a linear form for other people to understand, and organize it in a way that other people can understand it. If you want to communicate directly, like give directions for getting from my house to your house, or understanding how molecular bonding works, and so on. And there, one of the principles of InfoViz, giving a context and then the details, does carry over to text. And we found that a little bit in some of those experiments, where we go back and forth between a depiction and a description, that you want to give an overview, and then fill it in in some systematic way.

And the systematic way should conform to somebody else's conception to make it clear. But that's for writing clear prose. If you want to do poetry or art in drawings, then you're free to go all over the place. And that ambiguity and openness allow many interpretations. And the ambiguity is what makes it beautiful. It's what makes you come back to it, and come back to it. Because you see new things in the same painting or the same poem. Because you're bringing things from you back into it, and that's a bit of the interactivity that people like and talk about in music, in art, even walking the city, you're seeing new things, because you can't completely structure it. And that adds. But if you really want people to grasp scientific, or historical, or legal arguments, then you have to be more systematic in putting it in a way that people will understand. And creating a context, and then relating the details back to the context, is a general principle that goes for good writing and good diagrams at the same time. So, does that get at your question a little bit?

Frode Hegland: It really does, despite being distracted by Edgar, who just came here. Do you want to say hi?

Edgar Hegland: Hi.

Frode Hegland: So, Edgar is four and a half, and he's learning reading and writing in school. And to watch that process is endlessly fascinating. It's exciting.

Barbara Tversky: Yeah. Endlessly fascinating. When you think about it, reading is a cultural artifact. A cultural invention. And one interesting fact about the brain and letters is, many letters, say in English, a small "B" and a small "D," are distinguished by their mirror images. And the visual cortex for recognizing figures, objects, whatever object-like things, has many different parts to it that do slightly different computations. There's only one tiny area that is receptive to mirror images. Otherwise, the visual cortex ignores mirror images. So, flipping faces doesn't matter, same person. And for many objects, that's true. Letters depend on which way they're facing. And every culture, even cultures that read ideographic languages, like Chinese, and Japanese, use that same area of the brain to read. The one that distinguishes mirror images. And on branding, which Karin talked about earlier, we have icons. Do you want them symmetric? Not symmetric? I mean, they become extremely recognizable. Fonts become extremely recognizable. Letters are harder to discriminate. But, as anyone learning a new script knows, they can be hard to discriminate. But ideographic characters are more like faces, and we're good at millions of them. Millions may be an exaggeration, but thousands, certainly.

Lorenzo Bianchi: My question has been partially answered. It was about writing in space. Because it occurred to me, when I was learning Mandarin, so Chinese characters, what happened to me is that, even if I was using an app like "Skritter," where you can actually trace the character with your fingers, I noticed that the movement, the range of motion, wasn't ample enough. So, I started experimenting, and I noticed that if I increased the range of motion, if I started to use my whole body instead to trace, say, the character for ‘person,’ I started to do something like that. It was incredibly more effective. But just for me, I don't have any more data about that. So it was that curiosity. Because I’m a student of cognitive linguistics. I have an interest in embodied cognition. And I noticed that. And instead of reading and writing the characters, I was just actually living the characters with my whole body. It was incredibly more effective.

Barbara Tversky: Very interesting. And you know, the great calligraphers use their whole body. And it's the motions and not what they see. It's really the motions they practice, like the piano. And they are large motions. I don't know quite what would happen to them, or anyone, when they get to be small hand motions instead of the whole shoulder and upper body. And it would be interesting to look at that. And if you ever get to Xi’an, which I highly recommend, there's a calligraphy museum that has blocks of granite with calligraphy, mostly ancient. And they are just stunning. Stunningly beautiful. They are stunning even without knowing the characters, but someone who knows the characters will appreciate them much more. And from my understanding, people who look at calligraphy make the body motions. Miniatures of them, this is the mirroring. The mirror motor idea. So, when they see the calligraphy, they are feeling in their bodies the motions that it would take to make them. And then your pleasure is enhanced. The same thing happens with dancers. When ballet dancers watch ballet, their motor cortex is more alive than when they're watching capoeira. And the opposite happens to capoeira dancers. But when you know the motions well, your motor cortex is activated just from the visual motion. There's more to say on that, and there's a bit in my book on recognizing. If there's time I can tell that story about the point light. But I see there's, at least, one more question.

This is a former Stanford student who did rather brilliant work, Maggie Shafar. There was a technique that was invented by a Swede, Johansson, in the 70s, of dressing people in black, and putting lights on their joints. So then, when you take videos of the people, all you see are the joints moving. You can find this on the web, on "YouTube," under point light. And if you look at a static display of the people, you can't even recognize that it's a person. But once the person starts moving, you can see if it's a male or a female. You can see if they're happy or sad. You can see if they're old or young. You can tell that from the body motion, from the pattern of lights. It only works for upright figures; upside down doesn't work.

Although I bet for gymnasts it would. I don't know. But what Maggie did was take pairs of friends, have them come into the lab, and just walk, dance, run, play ping pong, all sorts of motions that they would do with the point light. And she had several pairs of friends. And then, three months later had them come back into the lab. And look at the point light and identify them as, "Are they my friend? A stranger? Or me?" So, they could identify friends better than chance. But what was most surprising is they could recognize themselves better than friends. Now, they've never seen themselves do these motions. Unless you're a dancer, or a gymnast, or a tennis player you don't watch yourself doing these motions. So, they've never seen themselves dancing, playing ping pong, and so forth. Yet, they could recognize themselves better than their friends whom they had seen doing these things. So, the explanation is that, watching it activates your motor system, and it feels right. It's like trying on clothes, they fit me. So, you're watching that dancing movement, or the ping pong movement, and it's more effective for the more vigorous movements, than just the simple ones like walking, that you recognize yourself. Your body is resonating to what you're seeing. And when it resonates to you it says, "Yeah, me!" So, that I think is fascinating. How much the human motor system or mirror motor system acts to understand the motion of others. And we've taken those ideas into understanding action, static pictures, and so forth, so we've taken those ideas further. But the basic phenomenon, I think, is fascinating. My guess is, with calligraphers would be a similar thing. They could see their own calligraphy. But as far as I know, no one's done that.

Frode Hegland: Edgar just wanted to show he has a real bus ticket. He thought it was worth showing to the community today. Thank you. But I have to ask you, just really quickly. Who here has seen the movie "Hero?" The Chinese movie "Hero" with Jet Li? Oh, a good couple of hands. If you haven't seen it, you have to see it. Randomly it was playing in Soho when it came out, many years ago. I was there with Ted Nelson, and my brother said, "We have to see this." We sat in the front row. Literally, after two minutes in the intro, they both went to the side and said, "Thank you." It is basically about... I love "Hamilton" because it's about America being written into existence; "Hero" is about China being written into existence.

That's the worst summary you could ever imagine. It's the most beautiful movie. If you haven't seen it, please do. Brandel?

Brandel Zachernuk: Thank you. So, the question is a little all over the place, but I’m really curious what you will do with it. So, first of all, it occurs to me that, I’m not sure whether psychologically this is the case, but there are sort of two motor systems, in the sense of there being a gross motor system, and a fine motor system. Certainly, the way that I seem to sort of marshal my actions reflects that. So, I’m curious as to whether you have research on whether, the point-light sort of study is clearly about the gross motor system, people being able to understand the movement of large-scale limb alternation, I’d be curious whether that...

Frode Hegland: Is he frozen? Or is he just playing with us?

Barbara Tversky: I know. I think he's frozen. He's somewhere in the cyber space.

Frode Hegland: At least he's frozen at a very engaged moment.

Barbara Tversky: Yeah, right. But I can answer the questions, sort of, anyway. And that is, I think people when they see handwriting, imagine how it would be written. At some point, many years ago, I needed to forge my husband's signature on many documents. He was out of the country, and I needed to forge his signature. And I sort of went through the motor movements that it would take to make his signature. And he couldn't tell the difference between mine and his. So, I don't know of research that's directly looked at fine motor. But my guess is that the same phenomenon would happen. I do know that when, this is again, years ago, more than 20, a friend was working on a pen whose writing could be recognized by a computer. And for English, at least, there were 13 strokes that underlay script writing in English. And with those 13 strokes, they could read handwriting, and you could pick it up with a pen by where people stopped and started. So even processes that we think of as continuous are often truncated. So, my guess is that... So, we missed you, Brandel. You froze at some point. But maybe you heard. Maybe I anticipated your question and answered it?

Brandel Zachernuk: Well, I’ll have to go back and watch the "YouTube." But I look forward to doing so. The next part of the question that I can't imagine you got to was, in linguistics, and in information theory, we have this concept of Levenshtein distance. The number of single-character edits that it requires to move from one word to another word. And to me, it occurs that the number of points of difference within a word is what makes it differentiable and distinguishable from another word. The more different something is, the lower the amount of information required to distinguish it. In terms of action, what are your thoughts on the way that different motions are distinguishable and differentiable in terms of their cognitive impact? I’m thinking that when we use computers, it's all the same stuff. You are just using a mouse and a keyboard in exactly the same way. So, browsing "Facebook" is the same as writing a thesis. At least in so far as the forms of the inputs. Do you see it as possible or beneficial to draw some of those activities apart from a physical perspective? Even if it results in individual input modalities being less optimal, insofar as they then have the capacity to be cognitively separated?

Barbara Tversky: That's again going to be a complicated answer, I think. And even your question about language, is that hearing or reading? Those are distinctions that you have to keep in mind. Because my hunch is, they might not be the same. And the Roman alphabet, with some variations, is used all over. And that's visual discriminability. Fonts vary. Handwriting varies in what's distinctive and what isn't. What's important to one language as distinguishable might not be important to another. Hearing would be something else. And your expertise is going to matter. And redundancy. One thing Tufte always recommends, he has contradictory recommendations, but he likes to eliminate chart junk. But doing that ultimately eliminates redundancy. And we need redundancy to understand. Because we're going to be missing things. And having redundancy is error correction, in part. On the visual side, similarly, what I need to watch a football game is minimal. What other people need to watch it is, again, going to be varied on the motor side. And same with dance, or music. I go to the opera a lot, and I love it. But my sophistication is at a kindergarten level. There are things I like and don't like. And I rely on critics to tell me what to watch, what to attend to, to distinguish one singer's... So, a lot of that is going to depend on my expertise. How much can I distinguish? A radiologist, we talked about that earlier, they're going to see things in clouds or in points on an image that the rest of us won't be seeing. And you need a lot of training to see. So, I don't know if that completely addresses your question, but.

Brandel Zachernuk: I think it's excellent context, thank you.

Aaron Sloman: Well, since you asked. This conversation has reminded me of a strange experience I had many years ago. I always liked music, and at one point, I did play the piano, and not very well, then I learned to play the flute somewhat better. And then, I started trying to play the string quartets with friends, using a flute to play the violin. Which didn't work very well, but I then, thought I should learn to play the violin. And I really struggled. And I remember on one occasion when I was trying to get the kind of tone quality that I knew, my wife could get out of the violin, I couldn't do it at all. I put it down and I started watching a television program, in which, the Israeli violinist Itzhak Perlman was playing something, and I felt as if something had changed in me. It was a very peculiar experience. And the next time I picked up my violin I could do vibrato. And I’ve never heard anybody else reporting a similar experience. And I have no idea whether any neuroscientist has any idea how that works. But it seems to be relevant to what you've just been talking about.

Barbara Tversky: Yeah. And I’ve had that experience as well, as a small child. I skated a lot without any lessons at all, and watched people twirl, and couldn't do it, and couldn't do it.

And then I learned what you need to do, and it was a state change of competence. And I agree that sort of thing happens. And a good coach will often use metaphors to get you to do that.

Telling you, for a tennis serve, how to hold the racket and how to swing. You have to have a metaphor for it. And the right coach, or right music teacher, or even the right artist, the art teacher will give you the right metaphors to set you up to do the set of actions properly. And again, it is that cycle of listening, and doing, and listening, and doing that I talked about earlier with the artist. That is a conversation of the eye, and the hand, and the page. So, for music, it would be your ears and your hands. And that cycle. And then, you could have, all of a sudden, this insight that you often can't articulate. That changes the whole frame of reference.

Aaron Sloman: I felt it was not my eyes and hand, but some deep ancient part of my brain that I hadn't been using, suddenly got turned on by watching Perlman, in a way that I don't think anything else could have changed me, not in that space of time. It was a matter of just seconds and then I felt different, and the next time I picked up the violin, I knew I was different.

Barbara Tversky: Well, presumably you saw his arms and hands bowing, or?

Aaron Sloman: Yes, I saw something. It was very abstract. I mean I could try to imitate the hands and I wouldn't be able to do that. But there was something else about both, what he was doing, and also the sounds that were coming out, which together, drove something in me. But I may just have misremembered, or misdescribed, and I’ve never had any other experience like it.

Barbara Tversky: You know, I have had something like that with learning a new language, and how to pronounce words. "R's" are always a problem in different languages, and all of a sudden you get the insight into how to make that sound that you've been hearing. And I’m not an adept linguist at all, but there, when I go to a country where, at least at one time, I knew the language, I just listen to it. I’ll turn on the radio and just listen to the sounds, and that helps me get back to it, "maybe I can do it," to make it sound that way. And there I think some of it is the motor resonance. From the seeing or the hearing, it transforms into motions of your body, in one way or another. But you're absolutely right. It needs to be studied. It really needs to be studied. Yeah.

Aaron Sloman: And it has to make a permanent change in the brain. What that change is? I don't know.

Barbara Tversky: Yeah, I wonder if you go back to the violin. I could go back and try gymnastics. That was effortless when I was a kid. The muscles aren't strong anymore. The joints don't work. Better not.

Aaron Sloman: Semi-permanent, I should have said.

Frode Hegland: So, Aaron. I just did the thing of looking you up on "Wikipedia." So, obviously from your voice, it's easy to tell that you're from the same island where we're sitting. I’m in Wimbledon. And I’m wondering, first of all, how you came across our presentation today, our meeting? And also, if you might have perspectives around the notion of The Future of Text, which is tangentially and deeply what Barbara has been talking about today?

Aaron Sloman: I’m in Birmingham, in the United Kingdom. I was born in Southern Africa, in a little town called Kwekwe, in what was then Southern Rhodesia. And then I had a lot of my education in Cape Town, because my parents were misinformed by a teacher, who persuaded them that I’d get a better education in South Africa than I would in Rhodesia. I later discovered, when I had fellow students who'd done their A levels in Rhodesia, that they knew all sorts of things and had competencies that I didn't. So, it was a struggle to catch up with them. But anyway. So, I had a collection of different backgrounds. I came to the UK in 1957. I was going to do mathematics, but I had got interested in philosophy, and then I discovered that most philosophers said things about mathematics that I thought were wrong, and I read that Immanuel Kant said something that I thought was right. So, I switched to philosophy to defend Kant. And I’m still trying to defend what Kant was saying in 1781 or thereabouts about the nature of mathematical discovery, which has to do with being able to see possibilities and impossibilities in structures and processes. Which is totally different from what's currently going on in AI systems with neural nets. Where they collect lots of statistics, and then derive probabilities. And you can never get an impossibility out of that. You can just get more probabilities. So, you're asking me to say something about where I’m coming from, and what I’m doing, and that gives you some of a feel for it. And I now feel that there's a whole lot going on in different disciplines, in various branches of biochemistry, microbiology, and developmental biology, which I’m trying to put together in my head in a way that will enable me to explain, first of all, how something in an egg can produce a bird that has all sorts of competences that it hasn't learned. Like, they can go and peck for food and then paddle in the water and other things.

And not only birds: there are all kinds of things that go on in eggs of different sorts, which produce different sorts of competencies. So I’m trying to see if I can assemble enough information from different sources to explain how that works. Because, at the moment, I don't think anybody knows it. I don't think anybody understands it. I don't think I will be able to explain it. But I might inspire some of the very bright younger people, who are working in different sub-fields, to talk to each other, and come up with the new syntheses that will answer my questions. That's what I’m hoping for. Sorry, that goes back a long way. Well, it's partly related to this, because I thought there might be something relevant in this. But I couldn't get here in time. But at the end, I think, what you were talking about is relevant.

Frode Hegland: Yeah. So, thank you, Aaron, very happy to have you here. So, this talk will, of course, go up on "YouTube," depending on my Wimbledon internet access speed. And we will also have a fellow do the transcript. A human, who is very good. He'll make sure he gets our names and all that good stuff. Barbara, do I also have your permission to do screenshots of your slides interspersed in the transcripts?

Barbara Tversky: Yeah, it's okay. My caveat is, I’ve been swiping slides from all kinds of sources for 25 years and I no longer know even where I’ve swiped them from. And I worry about that. I obviously don't have copyright. And my understanding is, it's okay to post things that have no copyright. But I’m not absolutely sure. So, that's my only concern. And that said, there are plenty of "YouTube" recordings of my slides in different situations.

Frode Hegland: Yeah, no. That sounds fine. And that's an interesting question. I mean, the journal we publish is non-profit, and all of that good stuff, or completely open access. So, if someone has a problem with it, that's not a problem. We take it back. So, thank you for that.

Barbara Tversky: Yeah, I know. When I wrote the book, I had about four times more images than my publishers would let me use. Many of them I got from Wiki Creative Commons. But even then, there were doubts and so forth. And I was dismayed when the Metropolitan and other museums released all their images without any copyright demands, only attribution, and no payments. And that was too late for me, because instead of quotes, I wanted a depiction at each chapter. I’m glad to see, at least, some places are releasing copyright.

Frode Hegland: That's very good. I’m just going to post them in the chat here as we wind down. futuretextlab.info, that's where we will be putting all this data. And this is where we carry on our dialogue. Now that it's been 2 hours and 20 minutes, which is quite poetic in terms of numbers, I’d just like to say, thank you, Barbara. Thank you, everyone who is still here. Thank you, everyone who was here earlier. And thank you, everyone who will be listening in the future. And I hope we can continue the discussion. You're all invited to our general weekly meetings, as well as, of course, our forthcoming special monthly sessions, which I hope will be even a sliver as good as today, in order to be successful. So, thanks very much and have a wonderful weekend everyone.

Barbara Tversky: And thank you for your excellent questions and thoughts, it was a pleasure.

Frode Hegland: Yeah, it was a wonderful group. All right, take good care. Bye.

Bjørn Borud

Time, speed and distance

…or “why we’re going to have to talk to each other and not bet on aliens for interesting conversations”.

A few weeks ago I had a conversation with someone who was convinced that within our lifetime we will speak to aliens. I pointed out that, while I certainly wish he were right, if you start to do some napkin math, the numbers tend to suggest that this is never going to happen. The likelihood is so close to zero that, for all practical purposes, you can assume it is zero.

I was reminded of this conversation when Frode sent me a video showing what the speed of light looks like at the surface of the earth. A video of one circumnavigation of the globe at light speed.

https://youtu.be/1BTxxJr8awQ

To our senses, the globe is huge. Even just travelling from Europe to Asia or to the US drives this point home. You are hurled around the globe in a winged tube at speeds that are not that far from supersonic - and still it takes forever to get anywhere. Amsterdam to Tokyo takes about 13 hours. Amsterdam to New York is almost 9 hours.

At the speed of light you can circumnavigate the equator 7.5 times in one second. To our intuition of the physical world the speed of light is immense.

Computers and light speed

It is through computers that we are confronted with the fact that the speed of light isn’t particularly fast in everyday life. The most useful time-scale, if you are working with computers, is nanoseconds. For instance, an integer division on an Apple M1 CPU takes about 0.624 nanoseconds. The piece of code I am working on right now can, according to my benchmarks, do one unit of work in about 166ns.

During one nanosecond, light travels about 0.3 meters (in vacuum), or roughly one foot. Which means that by the time my program has executed that one unit of work I was measuring, light won’t even make it across the street to my neighbor. Imagine how much work my computer gets done in the time it would take light to travel from here in Trondheim, to New York, and back again.
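
To make that mismatch concrete, here is a minimal back-of-the-envelope sketch in Python. The 0.624 ns division and the 166 ns unit of work are the figures quoted above; the roughly 5,800 km Trondheim to New York distance is an assumption introduced for the example, not a figure from this essay.

C_VACUUM = 299_792_458          # speed of light in vacuum, metres per second
NS = 1e-9                       # one nanosecond, in seconds

def light_distance_m(nanoseconds):
    # Distance light travels in vacuum during the given number of nanoseconds.
    return C_VACUUM * nanoseconds * NS

print(light_distance_m(1))      # ~0.3 m, roughly one foot
print(light_distance_m(166))    # ~50 m: one unit of work barely crosses the street

# Round trip Trondheim -> New York -> Trondheim, assuming ~5,800 km each way.
round_trip_seconds = 2 * 5_800_000 / C_VACUUM
print(round_trip_seconds / (166 * NS))   # ~230,000 units of work in the meantime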

Jeff Dean at Google used to maintain a list of “numbers every engineer should know”. This list tells you roughly what timescale things happen at. There is a website that not only shows these numbers in relation to each other, but also shows how these numbers have changed over the last 27 years.

https://colin-scott.github.io/personal_website/research/interactive_latency.html

Notice how intercontinental packet round-trip times have been almost constant over time. In cases that are dominated by distance, physics dictates the limits.

To be fair, there are things we can do about intercontinental packet travel. It turns out that the speed of light in a fiber optic cable isn’t c (the speed of light in vacuum), but about 2/3 c. With satellites in Low Earth Orbit using laser interconnects in mostly vacuum, we can probably get the time to traverse the globe down a bit. But there is a hard stop at c. If we’re going to communicate faster we need things that only exist in somewhat exotic physics. And even then it would be “fiddly”, to put it carefully.
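
A similar napkin calculation shows why intercontinental round-trip times have a hard floor. The 2/3 c figure is from the paragraph above; the roughly 9,000 km cable path between two continents is an assumed number for illustration, since real cable routes are longer than the great-circle distance.

C_VACUUM = 299_792_458      # metres per second
FIBER_FACTOR = 2 / 3        # approximate speed of light in optical fiber, as above

def round_trip_ms(path_metres, speed_m_per_s):
    # Round-trip time for a signal over the given one-way path, in milliseconds.
    return 2 * path_metres / speed_m_per_s * 1000

path = 9_000_000  # assumed intercontinental cable path, metres
print(round_trip_ms(path, C_VACUUM))                 # ~60 ms: the physical floor
print(round_trip_ms(path, C_VACUUM * FIBER_FACTOR))  # ~90 ms: closer to observed RTTs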

There is a video that shows the speed of light when travelling from the sun and passing the planets of our solar system. This really drives home the scale of our solar system. https://youtu.be/2BmXK1eRo0Q

It takes about 8 minutes and 20 seconds before we pass earth. At around 43 minutes we pass Jupiter, and as the video ends at 44 minutes and a bit it is still over half an hour until we pass Saturn.

Voyager 1 has just managed to back out of our driveway. It is at present roughly 22 light hours away from earth. Which gives us the opportunity to talk about another limiting factor.

Signal strength and distance

Communicating over distances with the kinds of technologies we use usually implies using some form of electromagnetic radiation. From radio waves, through the visual spectrum to higher frequencies such as gamma radiation.

The signal strength of an electromagnetic carrier decreases with the square of the distance between sender and receiver. So if you move from one kilometer away to four kilometers away, the signal strength drops to roughly 1/16 of what it was.

Remember Voyager 1, the little spacecraft that could, which has now managed to make it down our driveway and past the heliopause at the edge of our solar system? Voyager 1 has a radio that transmits at about 23 watts of power. By the time its radio signal reaches us, there isn’t much signal strength left. Due to the distance it has to travel, the signal is on the order of one attowatt, or 10^-18 watts.
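
For a rough sense of where the attowatt figure comes from, here is an order-of-magnitude sketch of the inverse-square loss. Only the 23 watts and the roughly 22 light-hour distance come from the text; the antenna gain and the 70 metre receiving dish are assumed, approximate values for Voyager’s high-gain antenna and a Deep Space Network dish.

import math

TX_POWER_W = 23.0                        # transmitter power, from the text
TX_GAIN = 63_000                         # ~48 dBi, assumed gain of the high-gain antenna
DISTANCE_M = 22 * 3600 * 299_792_458     # ~22 light hours, about 2.4e13 metres
DISH_RADIUS_M = 35.0                     # assumed 70 m dish, treated as perfectly efficient

# Power spreads over a sphere of radius DISTANCE_M; the dish collects its small share.
flux = TX_POWER_W * TX_GAIN / (4 * math.pi * DISTANCE_M ** 2)   # watts per square metre
received = flux * math.pi * DISH_RADIUS_M ** 2                  # watts collected
print(f"{received:.1e} W")               # on the order of 1e-18 W, about one attowatt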

A mosquito buzzing in front of your face at a Rammstein concert is going to be very loud compared to the signal we get from Voyager 1. So in terms of our senses, this is very hard to fathom. Voyager 1 is a very faint whisper in the universe - set to a background of a lot of local noise.

On Wikipedia there is a page called “List of nearest terrestrial exoplanet candidates” with distances given in light years: https://en.wikipedia.org/wiki/List_of_nearest_terrestrial_exoplanet_candidates

We know that we’re capable of picking up a signal that is on the order of an attowatt. We know this because we have received signals from Voyager 1. We can probably detect weaker signals, but this becomes tricky.

The Drake equation

The second-to-last piece of the picture, which really drives home that while we probably aren’t alone in the universe we will probably never speak to anyone else, is the Drake Equation.

The Drake Equation is described as “[…] a probabilistic argument used to estimate the number of active, communicative extraterrestrial civilizations in the Milky Way Galaxy”. It lists a set of factors which it then multiplies together to arrive at an estimate. The problem is that the plausible intervals for these factors span vast ranges of values. Have a look at the Wikipedia page for the equation to get an idea: https://en.wikipedia.org/wiki/Drake_equation
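
To see just how wide the spread is, here is a small sketch that multiplies the factors together with two illustrative sets of inputs. The values are guesses spanning commonly cited pessimistic and optimistic ranges; none of them come from this essay.

def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime_years):
    # N = R* x fp x ne x fl x fi x fc x L
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime_years

pessimistic = drake(1.0, 0.2, 0.1, 0.001, 0.001, 0.01, 100)
optimistic = drake(3.0, 1.0, 0.2, 1.0, 0.5, 0.2, 1_000_000)

print(f"{pessimistic:.1e}")   # ~2e-8 civilizations: effectively none
print(f"{optimistic:.1e}")    # ~6e+4: tens of thousands, still spread across a whole galaxy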

Note that it only talks about our own galaxy. The Hubble Space Telescope revealed about 5,500 galaxies over an area that took up just one 32-millionth of the sky. Today’s estimates suggest there are about two trillion galaxies in the observable universe.

But of course, the distances from “here” to “there” are so great that they aren’t even relevant candidates.

Our civilization

Homo sapiens sapiens hasn’t been around for all that long as hominids go. About 160,000 years. The fossil record for Homo erectus suggests she was minding her own business for around 1.5 million years before disappearing.

We have about another 1.3 million years before we make a dent in that record - give or take.

On the other hand, we have figured out multiple ways of not only causing our own extinction, but taking everything else with us in the fall. So there’s that.

So where does this leave us? Well, we’re not going to be talking to aliens. We might at some point hear squawks somewhere in the electromagnetic spectrum that could be indicative of intelligent life, but by the time we discover it and get around to responding, it is unlikely they’ll even be there anymore.

And we certainly aren’t going to pay them a visit unless we figure out a way to download our consciousness and somehow transmit it somewhere else – which is dubious at best. Perhaps we can create some artificial representation of ourselves.

We don’t have to get into the physics of transporting a useful amount of mass a useful distance across the universe to say hello, but let’s just take it as read that the numbers aren’t with us on that. We’re thoroughly stuck here.

And in all likelihood, long before talking to aliens even becomes a real opportunity, we will have wiped ourselves out. Which means the only interesting conversations we’re going to have are right here. On this pale blue dot. In whatever brief moments we have left before someone pushes the wrong button.

Bob Horn

Information Murals for Virtual Reality

I have been helping international task forces address the big challenges facing us today (e.g. climate change, sustainability, etc.) by creating large 5 x 12 information murals. Some of these murals have been ported into virtual reality as examples of the complexity that VR might be able to help us think about better. The text used on these info-murals appears in small chunks that present interesting syntax-semantics problems for us creators and synthesizers. When we can solve them, we may then be able to address other difficult issues such as how to manage context, how to better portray process diagrammatically, and how to improve our scaffoldings for thinking.

Introduction: my recent work

For the past twenty-some years I have been helping international task forces address some of the biggest challenges humanity faces today, including global climate change, sustainability, energy and resources, and various aspects of the nuclear situation: weapons, waste disposal, and good management.

My role as synthesizer

My role has been that of a synthesizer, integrating the task forces’ deep analysis and considered recommendations into wall-size information displays that contain hundreds of textual chunks and hundreds of visual elements: icons, images and diagrammatic shapes.

Examples of Information Murals

Here is what some of my information murals look like:

image

Figure 15. Mural 1. Horn, 2022.

image

Figure 16. Mural 2. Horn, 2022.

image

Figure 17. Mural 3. Horn, 2022.

image

Figure 18. Mural 4. Horn, 2022.

Overwhelmed by complexity?

I know that some of you will feel overwhelmed by the amount of information contained in an information mural. That has to do with your expectations (I imagine) as to how fast you should be able to grasp what is on one of these murals. Rather, it would be best to consider stepping back and looking for the big picture, and then walking up to them and looking at individual bits of detail and how they are related. Understanding a whole mural like one of these is like reading a 50-page report. Some of you are fast readers and may read one in 10 to 15 minutes. Others will take 30 minutes or longer.

Why am I here at this Symposium?

The question: What am I doing here at a conference about the future of text that is mostly focused on virtual-reality?

The answer: Information murals. I got into this work of making information murals with the help of a British diplomat who saw my work and said: “This will replace all those stacks of reports that sit on all the bookshelves in the Foreign Office which no one ever reads. You must come to the Foreign Office and show them what you do.” He arranged it. And my first big public work was with the British Foreign Office, explaining their policies on climate change to 180 offices around the world. That was in the early 2000s.

We then went on to work for four British government ministries investigating climate change policy.

Text as idea chunks with subheads

Yes, information murals are visual. But you will see that there is lots of text on them. You will see that all of the text on information murals is displayed in small idea chunks that are related by space, color, shape, size, and diagrammatic elements.

One of the major reformulations of text for complex subject matters will be to divide much of it into such small idea chunks. You can call them paragraphs if you like, or concept blocks, or boxes, or snippets or anything else.

The small idea chunks on info-murals consist of one to (roughly) ten sentences, often in a tight diagrammatic format, and sometimes in table, chart or graph structures.

One of the next major tasks in the future of text is to learn how to manage, arrange, sequence, and display small idea chunks with informative subheads.

Benefits of small idea chunks with subheads

I believe these small idea chunks will eventually replace the long, endless scrolls of writing that appear in academic papers and many reports in science and commerce. They will save us all immense amounts of time by enabling quick scanning and skipping of what we already know. They will help us re-use many idea chunks more easily, repositioning them in different info-murals.

Why am I here at this conference? – second answer

My second answer is that a number of the speakers at this conference, who are much more qualified than I am to talk about virtual reality and to make advances in it, saw some of my information murals in a small workshop that Frode leads.

These VR-makers immediately – that is, overnight – enthusiastically put one of my information murals into virtual reality. And the workshop team began an intense investigation of how information murals may help us to think better about our major human problems using virtual reality. One of the big puzzles was, and is: “What is the unit or element of an information map that we should attach metadata to?”

Using info-murals in VR has been very encouraging to me. I have offered to help them in any way I can, because we have very large problems in front of us as a civilization and as humanity. And we may be able to make some advances on them in VR.

Transition to other offerings

Okay, that’s why I am here. For the rest of the time that I have on this platform I want to identify a few of the things that we have begun to discuss about info-murals in VR.

Assumption: improve human thinking

First, I repeat an assumption that most of us are making. We believe that we must improve our thinking methods. We must improve our thinking together in teams and groups and communities of different sizes. Einstein is often quoted as saying: “We cannot solve our problems with the same thinking we used when we created them.”

What can we do to move toward Einstein’s goal?

There are some aspects of information mural reasoning that can help us. Here are three areas we need to get started on.

Problem: Show and link context

One of the difficult problems is how to represent and to link important context to the thinking that we are doing, and how to communicate this context to others. There is great possibility for helping many kinds of creative thinkers in virtual reality to do this context-representation and linking work.

Show and link context…in Multiple Dimensions

image

Figure 19. Mural 5. Horn, 2022.

Problem: Show process visually

Generally the best way to show history or future scenarios is to use some form of diagrammatic information mural. In volume two of The Future of Text, I outlined a one-million-diagram project. I’m looking for young leaders and contributors to such a project. The diagramming software I have seen is not good enough for such a project. We need a next level of development in this domain.

Problem: build solid and supportive “scaffoldings for thinking”

Different kinds of social messes and problems that we face require multiple structured ways to represent the various points of view. We have to figure out the semantic and technical structuring of this scaffolding. Many of these may eventually be much more effective in virtual reality.

Offer of help

These are only some of the tasks ahead of us. There are a great many challenges ahead for our species. Some of the work by people in this conference will be important. If I can help any of you get started or continue working on these issues, please get in touch. Thank you.

Bibliography/Further Reading

Horn, R.E. (2021) Diagrams, Meta-Diagrams and Mega-Diagrams: One Million Next Steps in Thought-Improvement, The Future of Text, Vol. 2

Horn, R.E. (2021) Art + Science + Policy: Info-Murals Help Make Sense of Wicked Problems, Cadmus, 4-5 Nov. 2021

Horn, R.E. (2020) Explanation, Reporting, Argumentation, Visual-Verbal and The Future of Text, The Future of Text Vol.1

Horn, R.E. (2016) The Little Book of Wicked Problems and Social Messes (currently in draft form and downloadable from: https://www.bobhorn.us/assets/wicked_prob_book bob_horn-v.8.1.pdf)

Horn, R.E. (1998) Visual Language: Global Communication for the 21st Century, MacroVU, Inc., Bainbridge Island, WA.

Horn, R. E. (1989) Mapping Hypertext: Analysis, Linkage, and Display of Knowledge for the Next Generation of On-Line Text and Graphics, The Lexington Institute, (Japanese translation published by Nikkei Business Publications, 1992).

Bob Stein

Journal Guest Presentation : 4 July 2022

https://youtu.be/aWK39a7a6Gs

Bob Stein: So what I'm going to show you is: Brewster Kahle asked me to sort of think about how the archive could be more useful, and I got him to hire one of my colleagues from the Institute for the Future of the Book, Dan Wiesel. And we chatted for a long time and started exploring, and we ended up someplace that I wasn't expecting, which was that after 40 years of elaborating linear texts, I think we have finally figured out a way. At least we're hinting at what comes next in terms of how people are organizing content and presentations.

Bob Stein: Whenever I have a new tool, I put Vannevar Bush's As We May Think into it. My colleague was a literature major, and he fell in love with Emily Dickinson. And he always starts with a favorite poem by Emily Dickinson. And so these are eight versions of the exact same poem in the Internet Archive. And these are all operating book reader windows from the archive. And you can zoom in and they all work. And this is going to be fast. I mean, it's been running through a bunch of these quickly. This is Dan's wife.

Recorded Kim Beeman: “Hi, I'm Kim Beeman. And I'm going to talk about a few of my favorite cookbooks today.”

Bob Stein: That's the introduction she makes. She is a librarian. If I click on one of the cookbooks down here below, it opens up. There's another introduction by her. Dan put in this background image, and these are two versions of the cookbook that he found in the archive. Here we're just showing that. Let me see if I can get with this. Here, we're just showing that we can sync up an audio or video with an object. So I'm going to play this. And when she gets through a short introduction, the focus is going to shift to the First Amendment and then it would shift to the others.

Computer Voice: “The United States Bill of Rights. The ten original Amendments to the Constitution of the United States read for LibriVox dot org. By Andrea Fiore December 27, 2007. One.”

Bob Stein: We were trying to do a demo where I was looking for that famous image of the first four nodes of the Internet, and I couldn't find it at the Internet Archive. I'm sure it's there somewhere, but their search is so terrible. But by complete accident, I found this talk that Alan Kay gave in 1995 at a symposium event. You may have been there, and it's really quite a remarkable presentation of the history of computing in the sixties. And I was so excited because Alan made it very clear that the ideological basis of what was happening in the sixties was quite different than what emerged by the mid seventies with Microsoft and Bill Gates. And I really wanted everybody under 50 who's working on inventing our digital future, I wanted them to watch this film. But I realized there was no way I was going to get anybody under 50 to watch a film by somebody that they had never heard of. So we broke it up into chapters, and there was just nothing out there that did what we wanted it to do. And so these are just three very short bits at the beginning. If I click on the Engelbart section you get... I'm sorry, I'm on a slow connection in a hotel in Birmingham, but you get Doug Engelbart's Wikipedia page, you get the mother of all demos video.

You get the mother of all demos Wikipedia page, and you get Ted Nelson's brilliant eulogy for Engelbart, which I'm sure all of you have seen. And then back to spatial data management. Voyager published this fantastic video disc that the Architecture Machine Group, now the Media Lab, made, and these were the liner notes for the video disc. But it's all of the early sort of greatest-hit demos from the Architecture Machine Group. And these four were sort of four of my favorites. This is the Aspen Movie Map. And if you'll recall, there's a point at which you can stop, turn the joystick to the left, and go into a building and explore it. Well, several weeks ago, Google showed their immersive map system, and it only took them 40 years. But now they're showing people going inside of a restaurant and exploring it. And I just thought it was sort of perfect to be able to add that to the tapestry, because one of the things about tapestries I think is important is that the dividing line between a reader and a writer is as thin as we can possibly make it. So it's very easy for a reader of a tapestry to fork it. And as I did here, I added this video from the Google presentation. This is really an art exhibit. In 2000, we put out a tool called Tc3, which was our attempt at the time to get as close to HyperCard as we could. And we gave it to an artist who made these remarkable books, which don't run anymore, of course. But I had videos that I had made of people working through them. And so this is just a bunch of these videos, and it just plays, but as a curatorial tool to make this presentation it worked perfectly. (Get this out of the way. Yeah. Yeah.)

Vint Cerf: You said, of course it doesn't run any more. Would you tell me what's missing? Is it an operating system, an Apple?

Bob Stein: I mean, it runs on Windows, actually perfectly. It doesn't run on the Mac anymore.

Vint Cerf: Got it.

Bob Stein: I mean, almost everything that we did in the eighties and nineties on the Macintosh, almost without exception, nothing runs anymore. And almost without exception, everything does run on Windows.

Vint Cerf: Wow. That's actually quite an impressive observation.

Bob Stein: It's an amazing thing that they have kept this stuff going in Windows.

Bob Stein: So this is interesting. These eight windows are from different hours of the day from a particular television station in Russia. And we wanted to show what Russians were seeing during the Ukraine war on their home televisions. And, you know, I can zoom in on these and they all play.

Computer Voice: Actually. Really, I'm not in Ukraine. Um, but now.

Bob Stein: What's interesting is that fast forward a little bit and this is the Internet Archive just released this. It's a visual explorer. These seven windows are seven different television stations in Russia. And these are thumbnails captured during the entire day's presentation. And any one of them, I can just click on it.

Here's BASIC. Running in a window.

Vint Cerf: Wow.

Bob Stein: A programming book for kids on how to program in BASIC. And I was thinking, wow, wouldn't it be fantastic for a teacher to be able to give high school students the assignment of: I want you to see what programming was like in 1980. So here's the assignment. Here's a place where you can do it, and here's an instruction manual if you need it.

Kim Beeman: Now, this is simply a Wayback Machine page. Got to get this out of the way. And one of the things we've done is, when you put a Wayback Machine page into a tapestry, it comes with a scrubber at the bottom. So if I want to get a different date for this website.

Vint Cerf: Wow.

Bob Stein: It's all just here. And which is pretty wonderful.

Vint Cerf: This is startlingly fascinating. And I'm assuming something that I want to verify. It looks almost as if each window in the tapestry is running as a virtual machine. So you could have quite a base for different operating systems and different applications running within each window. Is that a correct assumption?

Bob Stein: You're probably way above my technical pay grade at the moment. I mean, each one of them is basically an iframe.

Vint Cerf: Okay.

Bob Stein: So I don't think there's anything conceptually about what you're saying that couldn't be true. In other words, could I be running Parallels in a window here if I want? Maybe. I suppose I could.

Vint Cerf: Well, if we could make it work that way, if these were really VMs, then you just showed a way of hanging on to old software and old content.

Bob Stein: I think that's certainly, that would be a goal. I mean, it's not, we're not there. That's not what I'm showing you right now. But in terms of getting there, absolutely. That's the intent.

Vint Cerf: That would be nothing short of spectacular.

Bob Stein: Good.

Bob Stein: So this is another piece based on the book blog post. And if you'll remember, there was this point around 2005 when Jaron wrote this terrible essay about why he hated the Wikipedia. And a whole lot of us wrote in response to it.

https://web.archive.org/web/20200801071657/http://futureofthebook.org/blog/2006/06/08/shirky_and_others_respond_to_l/

Bob Stein: And I was looking at this blog post, and I was realizing that all this blog post really was, is an annotated guide to a bunch of Web links. And I thought it would be interesting, because we could do it in a tapestry, of turning it inside out. Instead of featuring our annotation to a list of web links, why not just put the links themselves live into a tapestry? So here's Jaron's original essay, and all the other essays that are referred to are all here now. We think that the tapestries are hinting at least at a new media type, but in order for it to be a new media type, it has to be portable. It can't just sort of live only at the Internet Archive. So what's interesting is that if I add the word 'embed' here. It's going to take me to a page. Where... I'm going to change the width here. (1792) And then I'm going to grab this HTML and I'm going to go to. This dashboard. This is just a WordPress blog that I've got and I'm going to make a new post. And 'demo tapestry'. Demo for future of the book or whatever. I think I got that wrong. But text, future of text. There we go. And then I'm going to put in. The custom Html. And. I'm going to go up here and preview in a new tab. And it's going to take that tapestry and it's going to embed it into. This blog post. And this is all operating. And so at least showing the concept of. Of portability. And there's one more thing to show you, which is that.

Vint Cerf: So in this particular case, what has actually happened, what has been imported into the Web page that you just created?

Bob Stein: So the tapestries, as you see them, are simply a collection of URLs. Iframes, so that each one of these windows calls a URL from the Internet Archive.

Vint Cerf: Okay. Okay. Wow. I could call it from anywhere but in this case. Exactly.

Bob Stein: Exactly. And I believe, for example, with the tapestry that has the Ted Nelson YouTube video, I don't think we had to import that video into the Internet Archive. I think we're just grabbing from Wikipedia and YouTube both. I think we can just grab the URL. So here's the last one. One of the things is that we're able to take a collection from the Internet Archive and import it automatically into a tapestry. And this happened to be a collection of Atari magazines. And I was just playing around and I imported it. And so these are all active windows, and each one of these is a different magazine. And when I saw this, I got really excited because I realized that in some ways what was happening was that I was. Let me go back to the. Don't die on me now. Go ahead. Just go back. We said that I was. No. Sorry. I hate to screw everything up at the end.

Anyway, this started to feel like going back into the stacks.

What we have what we've learned with these tapestries at this point is that. Having all of these objects operating in the same visual field is way more different than we expected. That seems to reduce friction for the reader dramatically. I mean, if you think of something like this, that. Oh, it's fine. Let me go back to one of these. Yeah, something like this, where instead of having to go somewhere, every time I click on one of these things and come back like you do on the web and you. So you have to think all the time, do I want to explore? Is it worth clicking on this? How do we get everything visual at once, visible and once starts to make a very big difference that makes it makes the reader encourages the reader to explore more. And so when I saw the Atari magazines all together, I realized it started to feel like being in the stacks again, where all the books are sitting on the shelf and you just sit there and you pick them off serendipitously, one after another. And the cost of opening up another book is so low compared to what it's been on the Internet. So this is an interesting shift that we're seeing. So I'm going to step yeah.

Vint Cerf: It's been, actually I think there's something more powerful happening beyond the stacks metaphor and that's context preservation. What's what's happening in the tapestry is that it is preserving a substantial degree of context for the user. Exactly. That's a strikingly powerful notion. I've never seen it illustrated quite so with such facility. This is really fascinating. Have you published anything at this point?

Bob Stein: No.

Vint Cerf: Wow. There's one other odd coincidence. There is a company which got started about a year ago called V Tapestry. Lowercase v. Capital t. It was started by a woman who does montages in the course of conferences. You have somebody with a giant canvas, and people are talking, and they illustrate what was being said. And so she does these things one after the other. Sometimes it could be a dozen or more of these very big canvases reflecting what was discussed, and with lots of symbolism. She's automated this process, and so Tapestry is a company that will take the incoming text of the discussions and generate imagery to automate the process. It's quite different from.

Frode Hegland: Bob, thank you very, very much. Really good to see this. I'm going to go back to the other window because that's my notes. You say that it's way more different than you expected. And I know that you obviously have experience with VR going way back, and to different degrees, and I only became converted by Brandel in January. Before that, I'd actively stayed away from it because the future of text was a specific focus, and then I decided to branch out. Now, obviously, what you're working on here would be tremendous to have wall size. Bob Horn often joins our community and he is all about murals, as you know. And one of the things that was really shocking to me is that Brandel took one of his murals and built a little app for it, relatively speaking for Brandel, where all you can do is stand in a room.

There's nothing but the mural. The mural is really big, but you pinch to move it away from you and move it towards you. So there's no walking, so that you don't get sick or anything like that, and you can move it sideways. That's all you can do. It's just incredibly powerful. Because, yeah, it's almost indescribable how powerful it is considering there's nothing there. So I can imagine what you're working on here. First of all, obviously on the wall, but if this was even a normal kind of office room, because when you talk about preserving contexts, I could imagine that you literally keep one wall for work, one for a specific project, the one in front of you for something else. Because everybody talks about this. What I'm saying is obvious. But what was so amazing to see today is all the aliveness that comes through it.

Bob Stein: Yeah. You know, a phrase I didn't use that I should have is that tapestries are infinite canvases, so they can go on forever. At which point you need some form of zoomable UI. You need to be able to fly around in there and zoom in on something and expand it.

Frode Hegland: Oh, that was the other thing I wanted to praise: when you clicked on a thing, it became ‘full screen’.

Frode Hegland: That is so important. When I worked with someone on the Chinese website for the NBA, the American NBA in China, we built a version of Hyperwords where you can click on a player's name and you get a little bit of stats, and you would click on that and we'd go big. She wanted it to be semi-transparent and smaller, and I'd been arguing with her. That's when I realized that if you're looking at something, make it big, because that's what you're looking at. Make it quick to go small again. But here, you know, you didn't play it with a little bit of this and that. It was just so relaxing on the eye. Thank you, Brandel.

Brandel Zachernuk: Amazing work, really exciting. One question is, if you are browsing the same tapestry in multiple windows, is there, or would there be, a facility for synchronizing them, or aspects of them? Is that something that you've considered in terms of either the maintenance of sort of view state, or in order to be able to use multiple sort of nominal windows, be they real or virtual, to be able to synchronize sort of views over things?

Bob Stein: Nope. Really interesting, though. I mean, I think I can answer that partially by going back to what Frode said. Somebody asked me, so how long did this take you to do? And I said, well, it's either three months or 40 years. There's nothing technically very interesting yet about what we've done. Right. But it's conceptual. I mean, I was showing this to somebody the other day, to Howard Besser. I don't know if you know him, he's an archivist at NYU. And Howard was saying, oh my God, this is the stuff that we imagined 40 years ago that we would do someday. What's happened is that the Internet has gotten so much more powerful. There were things that we could only imagine back in the pre-Internet days but couldn't do, and once the Internet took over in terms of electronic sort of expression, we had to really reduce our sense of what was possible. But now the Web has gotten so much better that suddenly we're able to do things that we forgot we were interested in, in a way.

Brandel Zachernuk: Yeah. Another question, or observation: obviously, in a zooming user interface, your documents need to sort of withstand a lot of zooming. Does that direct and guide your sense of which documents work well? You know, you have YouTube videos and archive.org videos that have frames that are representative of them at some level. Not all content on the Internet is so well titled or predisposed to being able to zoom like that. Do you feel like there's anything that can be done to help it, in terms of having those things be different sizes, or do you put in signposting that is something other than the documents themselves inside these tapestries to support that?

Bob Stein: Signposting, yes. I mean, just in terms of, it goes back to Vint's comments. Let me... First of all, let me try this. The key thing that got us where we got to was that when we were working on the Alan Kay videos, and there were all these objects and ideas that we wanted to put together, Dan was reading Merlin Sheldrake's book on the communication that goes on with fungi in the forest. And Dan said, well, suppose we actually thought for a minute about the fact that the objects in the Internet Archive are like trees. They're the nodes. But suppose the connections between the trees carried as much, or at least important, information; that showing the connections between objects is actually crucially important. And that was how we really ended up with tapestries. And Brandel, both of the things you've asked we haven't thought about. You know, it's so early, but that's why I like showing it to the smart people, because they start to raise questions that show us where we have to go.

Frode Hegland: Well, Bob, that's why you need to come back. We're here same time every Friday and every Monday, except for last week when we had our projects. I see your hand, Peter, but I just wanted to do my little standing up on a soapbox for a few seconds, because you made such a really important point there, Bob, about this is what we dreamt about 40 odd years ago, and why hasn't it happened? It's not just because the Internet is more powerful and computers more powerful, although, of course, that's useful. It's also because you did it. It's really important. So I'm both paying you a compliment and really trying to highlight the fact that commercial pressures are one thing; what can augment is another thing. And the reason our little group here is now 99% focused on VR is because we're going to go into the same situation. You know, I feel almost like badly fired, you know, original Mac people and all of that stuff. There's an excitement now and that's all nice. It makes me feel very youthful. But what I really, really fear is that if there isn't a user, an academic community that is saying the stuff that you are saying, it's like, these are the things we can use to augment how we work, it's only going to be: how can Apple make more money? How can Facebook make more money? And that's totally fine. There's nothing wrong with commercial development, but what you've done, as you said, technically it isn't the miracle. The miracle is that you put in the effort and you're now making it available. Right. So when it comes to VR, right now, we have this beautiful, oh, it's exciting and new. But in a few years, I think we're going to be where we've been for the last 30 years in Flatland. You know, there's so many things that can be done, but the market forces are so powerfully doing, you know, Macintosh Pages and Microsoft Office. Where is the Bob Stein innovation going to fit in that? Right. So that is why we're fighting, and that's why this Future of Text this year will be: how can we work? And we are. Over to Peter.

Peter Wasilko: Okay. I was wondering if you'd given any thought to multi user scenarios so that you could be looking at a tapestry on your machine, but have that synched with the tapestry on my machine so we could have multiple cursors visible on the screen at the same time, and we could have mixed initiative and exploring together.

Bob Stein: That is certainly something that we imagine we will get to. I mean, what I'm showing you today is simply a proof of concept. We have to build this. On the other hand, what used to take millions of dollars and years, we are now quite confident we can do a 1.0 version of in months, for hundreds of thousands of dollars. But it's going to grow. I mean, you know, and who knows? Our version of tapestries may not be the one that grows. But I think it will happen, and it will be multi user, and it will be collaborative.

Vint Cerf: It's been again just I had written down the multi user question, Peter, so thank you for for asking it. Our experience with multi user documents at Google has been very powerful and for small groups of people. If you imagine, however, that a tapestry is broadly available to tens of millions of people, you would not want to have state information for 10 million people all dicking around with the same document. So you can immediately see the need for some kind of data structures that would isolate the behaviour of a group against this background tapestry without interfering with other people who might be interacting with the tapestry. So it's an interesting challenge because the current implementation, the object contains the state information in our implementation of Google Docs.

Bob Stein: Our assumption is that, with tapestries, there's the understanding that if you want to be in a tapestry with somebody else, you have to give each other permission. And you're in that instance of that tapestry. And if you decide to fork it, you fork it for yourselves and not for everybody else, obviously. I mean, we dealt with a lot of these questions when we were doing Social Book, which never came to market. But this idea of people reading together and annotating something together, and how you could do that as a group and not screw up other people's experience. Although where we went with Social Book, which I think was important, was that if the permissions were in place, I might be reading a book with Vint, and our annotations, if we made them public, would be available to other people as well. They could basically click on a community tab and see everybody's comments. But anyway, the social experience of documents: Google Docs has sort of been by far the most successful example of that, and anything less than that isn't good enough at this point.

Frode Hegland: A hugely important underlying thing here, and I'm going to start with the Web and go backwards, is the infrastructure. Because I have two fears about the near future. One of them is we're going to run out of imagination in terms of the audience. Doug and other people did amazing things in the sixties, and then in the eighties and nineties the desktop PC was defined as being specific things, and imagination went out the window. That's going to happen to VR. But the other thing that I'm really concerned about is: you go into a VR environment, you create an artifact, a connected artifact, you go to another environment by another vendor, and you either can't open it, which would be absolutely insane, just like a Word file in the olden days, right? So what I'm saying you are contributing here is an infrastructure for how you thread these things together. So I think that, yes, this is really nice to see on a traditional display, but I think that with real support, this, and of course what we're working on with Visual-Meta, a few things to allow you to go in, do amazing stuff, whether it's 2D, 3D, whatever, and then go somewhere else, will be so important. I am so scared. I mean, I love and adore Brandel. I'm a Mac user fanboy, and I'm really scared that when Apple comes out with their headset, whatever formats they decide are the initial ones are going to be cemented in reality forever. We need to scream and say these are the useful and open ones. I think that's one of the reasons it's so amazing to see what you're doing, because it's not static. It's so dynamic.

Peter Wasilko: Okay. I was wondering if you'd ever seen the Chat Circles user interface. I'm dropping it into the chat now. That was an MIT Media Lab project that dealt very nicely with groups of people interacting in the same space. It used, basically, a large spatial plane, representing each person in a space as a circle. And you could move around, and you'd be able to hear people who are within a certain radius of your location, but you'd also be able to see the circles of people further away twitching. So you could get a sense of where there were clusters of people. You could overlay that on your system to provide an interface for managing the large numbers of people potentially interacting in the same tapestry. So you can sort of think of it as: they'd be in different phases, and you'd be able to see that there are other people who are out of phase with you, and bring yourself into phase with their conversations very fluidly.

Brandel Zachernuk: I'm curious so about sort of the authoring picture and and more broadly, the way in which you feel so based on the sort of the arrangement of the tapestry that you have so far. They seem like they're fairly canonical and durable in so far as you would you could point to this tapestry or that tapestry. And so that there's a rationale to have them existing as a as a as a distinct artifact that is intentionally constructed and delineated so that this is the end of the tapestry, this is what it is. And so, one, I'd be really interested in sort of the current state of authoring as you sort of have it, as you desire it and all as well, whether whether there's room to to pull on the thread, pardon the pun, of that, that continuity of it, you know, how intentionally it needs to be created versus what other options exist in that sort of space.

Bob Stein: Well, first of all, I mean, our assumption is that in version 1.0 you'll simply be dragging and dropping from a folder of objects onto the field. I mean, right now it's clunkier than that, but it will be very simple to drag and drop and assemble it as you want. And when you, quote, publish a tapestry, it is frozen. But as I was trying to say earlier, it's very easy for a reader to push a button and, in effect, fork the tapestry and either add things or rearrange things as suits her, and publish or not publish, etc.

I mean, I think everybody here will understand what I mean when I'm saying this. For me, I'm not a programmer. But when we had the prerelease version of HyperCard, when it was called WildCard, suddenly I was able to hook up a videodisc player to a computer and I could start to make things that had value without being a programmer. So HyperCard sort of became... and then my son, who's now an engineer at Google, you know, cut his teeth on HyperCard. And so it killed me when Jobs killed that. And tapestries in some ways are our attempt to go back to a time when there were tools for teachers and students to start to make things that had value and currency. I mean, it's ridiculous that we haven't had anything as good as HyperCard in all these years, and that's sort of where conceptually I'm starting from. You know, the tapestries need to have HyperTalk of some sort. You need to be able to have an event statement in a tapestry. We'll get there, but I think that's where our focus is at the moment. But back to your question, or statement. Several people have, when I've shown them this, gone in a direction that, in some ways thankfully, none of you have gone yet. Which is: so can this be hooked up to AI in a way that I give it a subject matter and the tapestry is automatically built from that? And the answer in my mind is always, yeah, I imagine we could do that. But that's sort of not where I'm starting from.

Frode Hegland: So Mark and I spent the last week at the hypertext conference, and two relevant things came up again and again. Spatial hypertext is one. This is related to that. Why hasn't it been invested in? And also the kind of basic programming you're talking about now. If you're going to have a proper hypertext environment, you need to be able to have clever links that have a little bit of fun and have a little bit of knowledge of previous stuff. So to have something like HyperTalk now is not going against what you said about not being a programmer, even though it could obviously sound like that. I think it's really, really crucial to enable users to do some basic scripting without having to go whole hog and write this much code to initialize before they can decide what they're going to write below.

Anyway, that's just me.

Mark Anderson: I was just wondering, looking at the tapestries and seeing that... So you showed us a number of interesting sort of set-ups there, and some were essentially grids and some of them had a bit more of a theme versus a narrative structure to them. Do you capture the... I'm trying to avoid the word link, but the intentionality of placing this thing alongside that thing? And I say that with no hidden question in it. I'm just thinking of the fact that if I put, say, two things together within the same tapestry, I'm doing it with some intent. And that's worth capturing at some point, both perhaps for me, for my future self, or for someone whom I wish to inform by the tapestry making.

Bob Stein: The best way to answer that, I think, is that one thing that's driven all of my work all this time has been that when you make an authoring tool, it's important not to restrict a single pixel. In other words, if I'm really going to empower people to make things, then I have to allow them to decide what goes on what page, where; what goes into what visual field, where. Because it's a very slippery road. Once you start to restrict pixels, you end up in a different place.

Mark Anderson: I just I'm perhaps thinking of I see that. And I called with it. I was thinking more than just a sense of understanding that how I when I view your tapestry and understand the relationship between the first box and any other box that might be in there.

Bob Stein: I think that's up to the tapestry maker. In other words, the connections between objects in the tapestry can be made in lots of different ways: with arrows, with contextual text, by the placement of two things next to each other. I mean, there are so many ways of doing it. And, you know, hopefully when tapestries come out into the world, whoever does it, there's going to be a lot of exploration at first, of people discovering new ways to put things together.

And you know, I'm pretty excited to see what my grandchildren do with tapestries. It won't be the same thing. I'm hoping it's far enough away from from linear, from the linearity of text that they will get someplace interesting. And I, you know, I, I, I do think we will be

most tapestries will be looked at in three dimensional heads, whether it's some X, some form of X or, you know, not at first. At first we're going to be on our computer screens. But that will change.

Mark Anderson: Yeah, well, that's good. It's good to hear your point about it not just being a matter of handing it all over to AI. Not that AI is the iceberg thing per se, but the idea that it should be doing everything is potentially horrifying. So thanks. I'll leave it there.

Bob Stein: Well, thank you, everybody. I really appreciate the opportunity, and there's a lot here. I'm looking forward to having the video of this so I can go back and get each one of these questions and really think them through.

Vint Cerf: This is pretty amazing. In an hour or less you managed to essentially upend a lot of people's thinking. Mine certainly. Just one thing which strikes me as being extraordinary about this whole design, and it harks back to the basic architecture of the World Wide Web: the entire structure that you've described is deeply dependent on reference and resolution, in the sense that a tapestry is this collection of references, and the fact that the references have to be resolved opens up this wonderful indirection. Because the resolution could change over time. If you had huge demand for something, maybe you turned it into a reference later because you couldn't serve up all the video from one website, all those things. There is indirection and resolution involved in this, and the tapestry itself is just a collection of references. In fact, it's amazingly powerful when you think about the compactness of the tapestry relative to the content that it presents.

Brandel Zachernuk: I really love... My last question, and it's hopefully a good thing to put a bow on this. So first of all, this is amazing. What's next? And then second, and related: what do you want from other people, including and most importantly, perhaps, us?

Bob Stein: I'm going to think of a good ask. I mean, we're I'm I'm so pleased by your collective response. I'd like to think of a really good answer to that question.

Peter Wasilko: Yes. I just wanted to call out how much I liked the observations about the need to embed systems so they'd be available, and how it's impossible to run old Mac software. And putting on my lawyer hat, I think a big problem is everyone is afraid of the licensing issues on the core ROMs for old platforms, and Congress could really fix this if they just passed a clear bill. It could be a one-pager that simply says: for purposes of fair use, if the ROM of an obsolete computing system is not available, copying and reproducing that ROM and making it available to people, until such time as the current owner of the IP makes it available in a commercial form, shall be deemed fair use. Just put that in the law, a one-page bill. They could have it worked through in an afternoon, and it would solve so much of this difficulty. I found wonderful Mac emulator systems, but they would require me to be able to boot my old broken Mac, where I had a license to the ROM, and to be able to get the data of the ROM off, which I can't do because the old machine is broken. So even though I'm legally licensed, even under the current intellectual property scheme, to be able to access that ROM, I physically can't access it. And no one is willing to share them online because they're afraid of a lawsuit by Apple or some other megacorp coming after them. And it can be fixed very easily. Just declare it fair use to reproduce ROMs of obsolete hardware.

Screenshots

image

image

image

image

Caitlin Fisher

image

image

Daveed Benjamin

Thoughts about Metadata

I applaud the Editor’s Introduction. Below are some thoughts that I had while reading the sections The Future of Us, The Future of Text and Improving not only VR Text or AI Text, but ALL Text. I present these thoughts because they add to the conversation and are part of the design requirements for the Overweb, a decentralized meta-layer that augments online, virtual, and physical realities.

  1. The creator cannot own, be responsible for, or control the metadata for their creation. We can’t rely on the creator having the knowledge, capacity, and interest to create or moderate metadata for their own work. Different metadata have different sources. Some can be automated, such as creator, title, and date. Others can be from the creator, such as the creator’s notes and tags. Some need to be the creation of the crowd and/or AI. The opposite of this is Today’s Web.


  2. Best practice abstracts metadata creation into a decentralized public space that any known persona can contribute to. While we can embed metadata in documents, we can also abstract metadata into decentralized storage that bi-directionally links to the document. This enables large amounts of metadata, including multiple perspectives, to connect to but not weigh down the original document. This model facilitates metadata creation by others than document creators. But this presupposes a unified metadata model across documents and applications.


  3. Metadata can overlay everything (e.g., the Web, virtual worlds, the physical world) and be triggered by anything that creates an event (e.g., QR, text, imagery, 3D models, sounds).


  4. Anyone can publish (subject to verification), curate, prioritize, and filter metadata. And they should duly receive rewards for their contributions. We call this a fair value exchange.


  5. Censorship-free environments need effective metadata filtering mechanisms. People need the ability to create their own algorithms and thereby choose their own adventure. Personal algorithms should be tunable, transparent, adaptive, and portable. We call these smart filters.


  6. People can be pseudo-anonymous. They should benefit from their creations and activities and also be accountable for them. This suggests a unified one-account for life decentralized identity and security model. This is a non-trivial problem.


  7. If Twitter is the digital town square a la Elon Musk, it needs a digital town library for the metadata. The purpose of the digital town library would be to generate insight and knowledge that can support understanding and decision making, and cycle knowledge and information back into the town square. This would be both a Gruberian collective knowledge system and a boundary infrastructure for matters vital to the future of humanity.

Cynthia Haynes & Jan Rune Holmevik

Teleprompting Élekcriture

“Writing is a physical effort… One runs the race with the horse, that is to say, with the thinking in its production. It is not an expressed, mathematical thinking, it’s a trail of images. And after all, writing is only the scribe who comes after, and who has an interest in going as fast as possible.”

Hélène Cixous

It is 1994. You see a command-line interface. A c> prompt invites you to log in to this essay’s directory. It is now 2013. A prompt indicates your Google glasses are ready to receive input. What a difference 20 years makes? Not so much. The directory for this collection of essays is accessed through the CyberText Yearbook Database, but the thought contained therein is not unlike what will have been (in the Nietzschean mode of the future perfect) a scrolling text readable on devices like virtual reality headsets, the progenitor of today’s Google glasses.

Such devices are not so much an innovation in reading as a reading of innovation. Similarly, this collection is not so much a curated set of texts (or the preservation of conservative reading protocols) as it is a set of texts that insist on a proto-curation: typo(-il)logically prototypical. We could use a simpler framework and just announce a redux of High Wired: On the Design, Use, and Theory of Educational MOOs (1998). The prompts for reading this directory of our collective redux are Movement (Haynes and Holmevik), Justice (Vitanza), Grammar (Butts), Web (Kuhn), Trauma (Sirc), and Reason (Ulmer). Or, if you prefer, we can regress even further and sit in the wings of an Elizabethan theatre and serve as prompters (book-holders) cueing the actors in this six-act play. Perhaps it will be kinder on our readers to set up a virtual teleprompter that gets things moving.

Cynthia whispers: “Cue ‘Teleprompting Élekcriture’”

The teleprompter has become as ubiquitous in politics as it has in entertainment, creating an historical convergence of reading protocols that depend on machine and movement.

Teleprompted discourse is especially critical for politicians who must simulate their oratory skills, and who need to appeal/appear as if they are simultaneously informal and improvising. Such ethos is emblematic of Plato’s concern that writing would merely equip us with the ‘semblance’ of truth; “Once a thing is put in writing, it rolls about all over the place” (Phaedrus). So, too, the 24-hour news cycle (by some accounts less journalism than entertainment) situates the teleprompter both in front of the individual who ‘reads’ to viewers from a vertical syntagmatic streaming text, then reversed toward viewers and placed along the bottom of the screen in a horizontal paradigmatic text scroll that anticipates the next ‘story’ or recaps previous stories.

image

FOX News ticker

There is something primitive (intuitive) about the way words appear. Conversely, there is something frightening (exhausting) about the way they dis/appear—scrolling upward with alarming speed, with the momentum of history, at the behest of time. In between, we inhabit the scroll bars, the space where movement and moment embrace. We witness language in action, in the languid flow of thought, the lurch of long-winded fragments, and the staccato bursts of out/landish play. We bid farewell to words with each keystroke, watching as they dwindle and fade from view. Imbuing them with invisible protection, we whisper, “may the force be with you.” We imagine them on their way—they travel as image.

image

Star Wars: Episode I, The Phantom Menace© opening text crawl

Who can forget the opening scene of Star Wars, the text marching into the infinite universe of the Galactic Republic? This filmic device tapped into our cultural experiences of moveable type, such as ticker-tape, cinema marquees, follow-the-bouncing-ball sing-alongs, and vintage newsreel footage. It joined forces with a simple premise—moving text transforms thought into image and image into memory. It is perhaps uncharacteristic to claim that moving words stay with us longer. But we are interested in the un-character that un-does static print—that imagines us caught in a thicket of the thickest thieves: language and motion.

There is, however, a crucial caveat, or noise, in this system: the material action of writing sets language into motion, whether by programming or raw physicality. Composition happens, to riff on Geoffrey Sirc and Jacques Derrida. And, as it happens, language speaks us and re-members us at the same time (in the same moment). By some accounts, a focus on writing and motion must start by studying the parts of writing we see, such as letters, words, i.e., printed static texts. John Trimbur argues that “studying and teaching typography as the culturally salient means of producing writing can help locate composers in the labor process and thereby contribute to the larger post-process work of rematerializing literacy” (192). As “the turn-of-the-century Austrian architect and graphic designer Adolf Loos put it so concisely, ‘One cannot speak a capital letter’” (191; qtd. in Helfand 50). But Trimbur is narrowly focused on the typographical conventions that “[enable] us to see writing in material terms as letter-forms, printed pages, posters, computer screens” (192), while we are adjusting the focus to capture the images of writing in motion and the momentum that accrues in the backwash of memory. Through the many years we worked in MOOs, we came to understand such synchronous virtual space as a primary location of writing as images in motion. In other words, the appearance and disappearance of language inside a screen, the limits of which were beyond our vision, turned the scrollbar into a memory pole where words unfurl in the prevailing and transient winds of writing’s warp-speed momentum. Typography became biography—the life-world of writing on the fly.

Though the following exchange occurred in real time on October 9, 1999, it gives readers a sense of what we mean by ‘writing on the fly.’ William Gibson (author of the novel Neuromancer) logged in to Lingua MOO as part of a trAce Writing Community event in the U.K. We only had 30 minutes’ notice that he was logging in, so we hastily put out the word to Lingua users. He conversed with players in the MOO and created a ‘battered suitcase’ object into which you could place whatever MOO object you wanted. This is an excerpt of the MOO log that day:

Helen says, "Bill's here" snapdragon waves at Bill_Gibson. Jan waves at Bill_Gibson. Bill_Gibson says, "Hello, this really is Wm. Gibson, tho you won't believe me..."" Cynthia [to Bill_Gibson]: We're honored to have you here at Lingua MOO!

Tzen nods.

traci says, "we're likely to believe just about anything" You laugh at traci.

Mark Cole says, "Hi Bill. Enjoyed the talk downstairs. Any advice for a budding writer of speculative fiction (don't u hate labels?)"

Bill_Gibson says, "Thanks. This is the very last gig on my lightning UK All Tommorrow's Parties tour.""

Helen says, "How would a beginner get that ball of elastic bands going? (Bill's metaphor for writing a novel)"

Helen says, "Anyone want me to buy them a signed book?" Tzen says, "Which book is it?" Nolan . o O ( and pay for it? whooohooo. )

Bill_Gibson says, "Heinlein's advice: write, finish what you write, submit it, submit again when it's rejected.""

Jan smiles.

Helen says, "All Tomorrow's Worlds" You take Neuromancer.

Mark Cole says, "Thanks... have a jelly bean" You hand Neuromancer to Bill_Gibson. Helen says, "Good advice Bill ;-)" Tzen says, "ah."

Cynthia [to Bill_Gibson]: yes, would you virtually sign my virtual copy of your book? :)

image

William Gibson interacting with Lingua MOO users (Oct 9, 1999)

The MOO, as locus and instrument of linguistic register and re-collection, circum/scribes this composite image of writing and memory. Bruce Gronbeck reminds us that Aristotle makes a clear distinction between memory and recollection and tallies the attributes of recollection in his treatise De Memoria, “Recalling is always a matter of reconstructing ‘movement’ or sequences of action” (140; McKeon 451b-453a). For Aristotle, memory stems from recollection as such: “For remembering [which is the condicio sine qua non of recollecting] is the existence, potentially, in the mind of a movement capable of stimulating it to the desired movement, and this, as has been said, in such a way that the person should be moved [prompted to recollection] from within himself, i.e. in consequence of movements wholly contained within himself” (McKeon 452a).

Thus, early on our knowledge of how memory works is derived from Aristotle’s notion of motion contained. In her essay, “Habit as Memory Incarnate,” Marion Joan Francoz explains the containment model, the hydraulic model, and the physiological models of memory, advocating the latter and its association with habit. According to Francoz, “‘Image schemata,’ which Lakoff and Johnson propose as dynamic alternatives to abstract schematic representations in memory, find their most basic manifestation in the spatial aspect of the body, ‘from our experience of physical containment’ (Johnson, Body 21)” (14).

But the movement we have in mind must also be a movement that is enduring, that gains momentum from the start, that keeps going. Viewed in this way, writing becomes a force, as Cixous writes, with which we contend and by which we leave our own trail of images. The trajectory of this essay follows three moments, or movements, along the trail of images we have left like bread crumbs for ‘the scribe that follows after’ and has somehow re-forged the relation between writing as image and learning via text in motion.

image

MediaMOO MMTV Studio (May 9, 2011; 17th anniversary of our meeting on May 9, 1994)

In 1994, when we first met in the text-based virtual community, MediaMOO, we quickly understood the power of writing in motion. The MOO is a blend of text and image, and of orality and literacy. It is oral insofar as the interaction among writer/speakers in the MOO reproduces oral conversation via written text, literate insofar as the writing requires fluency to produce meaning. The interesting, and innovative, aspect of this phenomenon is that in the MOO the tightening (and blurring) of the orality/literacy split is achieved visually. Within months we created our own community using the LambdaMOO database, and within two years of creating Lingua MOO we had published our collection of essays, High Wired (University of Michigan Press), following which we created a graphical web-based interface called enCore Xpress, and soon thereafter, the 2nd edition of High Wired. Our task in the introduction to High Wired was, we believed, to articulate (insofar as we could) a new name for such writing. We coined the term élekcriture, borrowing from the Greek for the beaming sun (Elektra) and French feminism’s notion of writing (l’ecriture feminine), to describe a thematic conjunction between electricity and the streams of writing that spill forth in a discourse that resists traditional ways of organizing and controlling the flow of conversation.

And even after we combined the textual and graphical registers of meaning-production with a graphical interface that split the text side and the graphical side, élekcriture still dominated the production of meaning. Rhetorically, the design allowed for style to enhance input and for an intertextual-graphical interface to border the space in which learning takes place, while the web-based interface also made many MOO functions easier to learn and execute. But the fact that graphical MOO interfaces such as enCore Xpress had helped move MOO technology along at a pace in concert with other web-based communication software in the late 90s is not central to the idea we are promoting of text as image; we considered it merely a bonus.

image

LinguaMOO graphical interface, enCore Xpress (2005)

Nineteen years ago we got to know one another in language, in real-time. It was both a ‘home’ we could share and one we built for others to enter and build as they saw fit. We were living/writing in a visible text. The question of writing became a manifestation of personal and professional discourses, the crossing of which became for us an invisible boundary—we did not distinguish between the space of our belonging to one another and to our academic others. It is akin to Bruno Latour’s reminder that “in the eyes of our critics the ozone hole above our heads, the moral law in our hearts, the autonomous text, may each be of interest, but only separately. That a delicate shuttle should have woven together the heavens, industry, texts, souls and moral law—this remains uncanny, unthinkable, unseemly” (5).

The second moment is really a fast-forward of ten years, to when MOOs began to wane as the graduate students who created, administered, and populated them moved on to “real” lives and jobs, and we found other platforms where writing in motion served as our template for play and purpose: Neverwinter Nights, Diablo II, Second Life, and World of Warcraft.

Yet, in citing our own experiences we are somewhat torn. On the one hand, we believe the durability of these texts in motion seals the sagacity of our argument (not to mention the reality of our lives, which is hardly virtual any longer, though we tend not to make that distinction). On the other hand, as rhetoricians we understand the need for a critical eye.

Roland Barthes expressed it in this manner: “…my desire to write on Photography corresponded to a discomfort I had always suffered from: the uneasiness of being a subject torn between two languages, one expressive, the other critical; and at the heart of this critical language, between several discourses, those of sociology, of semiology, and of psychoanalysis…” (Camera 8). This is how we approach writing about writing in visible texts; like Barthes, we are both “Operator” and “Spectator” (9). “The Photograph belongs to that class of laminated objects whose two leaves cannot be separated without destroying them both: the windowpane and the landscape, and why not: Good and Evil, desire and its object: dualities we can conceive but not perceive” (6).

Barthes is instructive in an additional sense—as purveyor of the line between forms of visibility. In the static (print or web) iteration of this history, we understand that we cannot de/pict the motion of text we are de/scribing here. Even a “still” image (i.e., screenshot) of some MOO tran/script does not do justice to the movement experienced as graphé/flux (the flux of moving writing). But we can work with the concept of the photo/graph as theorized by Barthes because he re-animates it in order to ponder our pandemic belief in the invisibility of its animation of us. “Whatever it grants to vision and whatever its manner, a photograph is always invisible: it is not it that we see” (6). “In this glum desert, suddenly a specific photograph reaches me; it animates me, and I animate it. So that is how I must name the attraction which makes it exist: an animation. The photograph itself is in no way animated (I do not believe in ‘lifelike’ photographs), but it animates me: this is what creates every adventure” (20).

There is, then, something that wants animating, that reveals itself when time and motion call certain features of text into the unconcealedness of typorganisms—of writing on the move. Barthes meets Martin Heidegger at this juncture, redefining the ‘origin of the work of art,’ following the workness until we can see it at work. What Heidegger saw in a pair of worn out peasant shoes, Barthes sees in the instruments of time and photography: “For me the noise of Time is not sad: I love bells, clocks, watches—and I recall that at first the photographic implements were related to techniques of cabinetmaking and the machinery of precision: cameras, in short, were clocks for seeing, and perhaps in me someone very old still hears in the photographic mechanism the living sound of the wood” (15). The third moment along the trail of images comes into view now. Are MOOs and World of Warcraft like clocks for seeing writing? What happens in the seeing of composition as it happens?

It is time—time that moves into a new topos where momentum gathers itself unto itself, where (it turns out) moments are re-turned to time. Who are we to think we owned them in the first place? We are so bound up in our sense of sovereign subjectivity that we dare to preface topos with its own ‘u’—unbounded topos—utopia. But in so doing, we have managed to create every dystopia known to humanity. MOOs and WoW are, thankfully, no utopias; they are more along the lines of what Alok Nandi calls a fluxtopia. According to Nandi: “Virtu/RE/alities explore the gap between virtuals, ideals and realities. Fluxtopia can only be understood in the act of attempting to achieve the traject of any flow. But how do we achieve what we mean by it if we do not know what it is, except that IT is in constant mutation, flowing apart?” (np). Nandi exploits our collective delusion that we can capture the flow of media by setting up various fluxtopic passages designed to foreground both delusion and passage. MOOs and WoW are portals into this “fluxography”; or, as Geoff Sirc might call it, this “fluxus-inflected practice” (“Fluxjoke” 3). The key to understanding how momentum assists memory rests not on the rests, or pauses, we inject in writing and reading, rather in the in/visible border between delusion and passage, one that is (hopefully) not subject to Aristotelian or Platonic border patrols. In synchronous writing environments we are lulled, by the momentum of language, into no complacent region of learning, but an active accumulation of meaning we commonly think of as memory. The movement of language, its marching momentum, lulls us into thinking we are pushing things along, when it is more accurate to say we are being pulled into a remembering machine without being aware of it.

The question is how momentum and language do this. And here we issue a patch to our earlier thinking on this topic by adding a small “t” to élekcriture: télekcriture. To underscore how télekcriture accomplishes this lulling, we should sample the most basic qualities of flux: rhetoric, rhythm, and reciprocity.

As a rhetorical machine, télekcriture mixes language, writers, and distance, then reconfigures them as sustained contextual real-time interactivity. But distance itself also figures within language. Barthes suggests, as have others over the years, that all language is rhetorical, that is, it is highly figurative. There are countless ways we attempt to maintain the distinction between two dimensions of language, the literal and figurative; but in the end, language is all figurative (Semiotic 82-93). In short, Barthes argues, “the meta-rhetorical expressions which attest to this belief are countless. Aristotle sees in it a taste for alienation: one must ‘distance oneself from ordinary locutions we feel in this respect the same impressions as in the presence of strangers or foreigners: style is to be given a foreign air, for what comes from far away excites admiration’” (88). There is, then, in language itself a dimension of distance, a sense in which words travel across time and distance in order to ‘mean’ something in the here and now. Words exhibit the wear and tear of distance and time, and no amount of anti-rhetorical rhetoric can undermine this fact. But critics like Paul Virilio misdirect their fears at teletechnologies (like MOOs and WoW) in an effort to restore to language (and thus to ourselves) a degree of nearness and sovereignty that seems to have slipped away (when it was never ours to begin with). As Virilio argues, “[b]etween the subjective and objective it seems we have no room for the ‘trajective,’ that being of movement from here to there, from one to the other, without which we will never achieve a profound understanding of the various regimes of perception of the world that have succeeded each other throughout the ages” (24). In short, he laments the “loss of the traveller’s tale” (25); he longs for the “essence of the path, the journey” (23).

Whereas Nandi’s fluxtopia situates the trajective within the work (i.e., the act) of writing, Virilio situates it in the achievement of writing—the having travelled along a path. This is precisely the tension at work in the difference between print and electronic texts, something we think Richard Lanham missed in The Electronic Word, but not something Michael Joyce missed. In attempting to articulate the pulse of Carolyn Guyer’s phrase “tensional momentum,” Joyce finds evidence of a missing rhythm—a rhythm not present, literally, in print texts. But he’s torn, too. “And yet I know, in the way someone watches water slip through sand, that words are being displaced by image in those places where we spend our time online; know as well that images, especially moving ones, have long had their own syntax of the preliminary and the inevitable” (314).

Writing in visible texts, like sand and water, flows at a rhythmic (ragged or silken) pace. In the exchange of languaging beings typing along this tempo-trajectory, reciprocity arises. It is woven by the ‘delicate shuttle’ of an/other interaction—sustained contextual real-time reciprocal interactivity. Reciprocal interaction partakes of a fluidity of movement related to (and determined by) tides and time. The backward (re-) and forward (pro-) movement of the tides, the ebbing and flowing of Oceanus in Homer’s Iliad, lends its sense of fluid and cyclic language to real-time reciprocity. It is constant, continuing without intermission, steadily present, the constancy of real-time. Writing resists slowing down; it has its own force of forward movement. In digital environments such as MOOs and WoW, this momentum rushes ahead of us and we are merely the scribes following after, somewhat engulfed by/in visible texts and set in motion by our words—in their current—on their way.

Works Cited

Barthes, Roland. Camera Lucida. New York: Hill & Wang, 1981. Print.

---. The Semiotic Challenge. Trans. Richard Howard. New York: Hill and Wang, 1988. Print.

Cixous, Hélène and Mireille Calle-Gruber. Rootprints: Memory and Life Writing. Trans. Eric Prenowitz. London: Routledge, 1997. Print.

Francoz, Marion Joan. “Habit as Memory Incarnate.” College English 62.1 (September 1999): 11-29. Print.

Gronbeck, Bruce E. “The Spoken and the Seen: The Phonocentric and Ocularcentric Dimensions of Rhetorical Discourse.” Rhetorical Memory and Delivery: Classical Concepts for Contemporary Composition and Communication. Ed. John Frederick Reynolds. Hillsdale, N.J.: Lawrence Erlbaum, 1993. 139-55. Print.

Guyer, Carolyn. “Along the Estuary.” Mother Millennia. http://www.mothermillennia.org/Carolyn/Estuary.html (5 June 2005). Web.

Haynes, Cynthia. “In Visible Texts: Memory, MOOs, and Momentum.” The Locations of Composition. Eds. Christopher J. Keller and Christian R. Weisser. Albany, NY: State University of New York Press, 2007. Print.

Haynes, Cynthia and Jan Rune Holmevik. High Wired: On the Design, Use, and Theory of Educational MOOs. 2nd ed. Ann Arbor: University of Michigan Press, 1998, 2001. Print.

Heidegger, Martin. “The Origin of the Work of Art.” Poetry, Language, Thought. Trans. Albert Hofstadter. New York: Harper & Row, 1971. 15-86. Print.

Helfand, Jessica. “Electronic Typography: The New Visual Language.” Looking Closer: Classic Writings on Graphic Design. Vol. 2. Eds. Michael Bierut, William Drenttel, Steven Heller, and D. K. Holland. New York: Allworth, 1997. 49-51.

Joyce, Michael. “Songs of Thy Selves: Persistence, Momentariness, and the MOO.” High Wired: On the Design, Use, and Theory of Educational MOOs. Eds. Cynthia Haynes and Jan Rune Holmevik. 2nd ed. Ann Arbor: University of Michigan Press, 1998, 2001. 311-23.

Lanham, Richard A. The Electronic Word: Democracy, Technology, and the Arts. Chicago: University of Chicago Press, 1993. Print.

Latour, Bruno. We Have Never Been Modern. trans. Catherine Porter. Cambridge: Harvard UP, 1993. Print.

Lingua MOO. http://lingua.utdallas.edu:7000 (1995-2005). http://electracy.net:7000 (19 May 2013). Web.

McKeon, Richard, ed. The Basic Works of Aristotle. New York: Random House, 1941. Print.

Nandi, Alok B. Fluxtopia.com. http://fluxtopia.com (5 June 2005). Web.

Plato. Phaedrus. Trans. Alexander Nehamas and Paul Woodruff. Indianapolis, IN: Hackett Publishing, 1995. Print.

Sirc, Geoffrey. “English Composition as FLUXJOKE.” Conference presentation delivered at Conference on College Composition and Communication (CCCC). Chicago, 2002.

---. English Composition as a Happening. Logan, Utah: Utah State University Press, 2002. Print.

Star Wars: Episode I, The Phantom Menace. Opening Text Crawl. http://www.starwars.com/episode-iii/bts/production/f20050126/indexp2.html (5 June 2005). Web.

Trimbur, John. “Delivering the Message: Typography and the Materiality of Writing.” Rhetoric and Composition as Intellectual Work. Ed. Gary A. Olson. Carbondale, IL: Southern Illinois University Press, 2002. 188-202. Print.

Virilio, Paul. Open Sky. Trans. Julie Rose. London: Verso, 1997. Print.

Deena Larsen

Access within VR: Opening the Magic Doors to All

Within each new technology lurk hidden obstacles. There are financial barriers to overcome, for those who struggle to put food on the table cannot purchase the equipment or spare the time. There are physical obstacles for people who must maneuver this world in ways that differ from the norm. A cry that has often been offered in these situations is that we are working within unique media that simply cannot trans(fer)(form) for all situations. Don’t ask the painter to explain art to the blind? Don’t ask a symphony to exalt to the deaf? Perhaps. The wilderness is a wild and dangerous place, where only the intrepid can (ad)venture. Yet there are mountain trails with ropes and braille signs designed to provide a taste of the wilderness to the blind, or widened slopes to give access to quiet forests for wheelchair users. We need to take a few minutes to explore setting up best practices for access to VR. Let’s discuss solutions!

Dene Grigar & Richard Snyder

Metadata for Access: VR and Beyond

Abstract

Interacting with virtual reality (VR) environments requires multiple sensory modalities associated with time, sight, sound, touch, haptic feedback, gesture, kinesthetic involvement, motion, proprioception, and interoception––yet metadata schemas used for repositories and databases do not offer controlled vocabularies that describe VR works to visitors.

This essay outlines the controlled vocabularies devised for the Electronic Literature Organization’s museum/library The NEXT. Called ELMS (Extended eLectronic Metadata Schema), this framework makes it possible for physically disabled visitors and those with sensory sensitivities to know what kind of experience to expect from a VR work so that they can make informed decisions about how best to engage with it. In this way accessibility has been envisioned so that all visitors are equally enabled to act upon their interest in accessing works collected at The NEXT.

Introduction: Proof of Concept

Turning their head slowly, the player spots five neon green pins on the horizon and aims their controller at the one peeking behind the conical dark-green cedar. The player is situated amid a strange, bright blue terrain undulating beneath a cloudy gray and blue sky. In the background they hear voices chattering and laughing softly. Moving their head further to the left, the player sees more green pins hovering over bleak squat buildings and an earth-like, blue globe. It seems like they are walking toward the globe, and as they get closer, they see a bookshelf sunk backwards into the ground. As they approach it, the chattering grows loud and then stops.

This is one of the scenes in Everyone at this party is dead / Cardamom of the Dead by Caitlin Fisher, one of the first VR literary works produced for the Oculus Rift. Published in 2014 in the Electronic Literature Organization’s Electronic Literature Collection 3 (ELC3), it is now hosted at The NEXT.

Like the 3,000 other works of born-digital art, literature, and games that The NEXT holds, Fisher’s VR narrative is presented in its own exhibition space. A carousel of still shots presents visitors with highlights from the work. The description of the work, cited from the ELC3, provides information about the storyline, the artist’s vision, and its production history. To the right is a sidebar containing the “Version Information”––metadata built on the MODS schema detailing bibliographic information expected from a scholarly database. This information includes the author’s name, date of publication, publisher, and language, all associated with the 1.0 version of Fisher’s work. Visitors, however, also see additional information that goes beyond that provided by MODS: the work’s digital qualities, its genre, the sensory modalities evoked when experiencing the work, its accessibility, original media format, authoring platform, and peripheral dependencies. These are controlled vocabularies that move beyond the bibliographic and, instead, provide visitors with the information they need in order to experience the work. In this context, Everyone at this party is dead / Cardamom of the Dead alerts visitors that the work involves kinesthetic involvement, proprioception, sight, sound, graphical and spatial navigation, and that it was built with Unity and requires a VR headset.

About The NEXT’s Extended Metadata Schema

The metadata schema for The NEXT, ELMS (the “Extended eLectronic Metadata Schema”), is the framework developed to provide a common understanding of the highly complex, interactive, digital artifacts, like Fisher’s, held in its collections.

Because The NEXT collects and hosts a wide variety of interactive media pertaining to digital art and writing––the bulk of which it makes freely available for access and download in their original formats or in formats that have been preserved through migration and emulation––its schema both utilizes and extends the Metadata Object Description Schema (MODS) maintained by the Network Development and MARC Standards Office of the Library of Congress. By extending MODS, The NEXT attends to the media specificity of the works, an approach to the analysis of digital objects suggested by theorist N. Katherine Hayles in Writing Machines [26] and also reflected in taxonomies created by the global, scholarly federation, the Consortium on Electronic Literature (CELL), over a decade ago.

At the heart of ELMS is the contention that visitors accessing a work at The NEXT need to be made aware of its hardware, software, peripheral specifications, and other salient features so that it can be experienced fully. Taxonomies developed for extending MODS include Software Dependency(ies), Authoring Platform(s), Hardware Dependency(ies), Peripheral Dependency(ies), Computer Language(s), Digital Quality(ies), Sensory Modality(ies), and Genre(s).
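
As an illustration only, the following TypeScript sketch shows how a single record might pair a MODS-style bibliographic core with the ELMS extensions named above, using Fisher’s work as described in this essay. The field names paraphrase the taxonomies rather than reproduce the schema’s actual element names, and the values simply restate details given elsewhere in the text.

// Illustrative only: field names paraphrase the ELMS taxonomies named above,
// not the schema's actual MODS extension elements.
interface ElmsRecord {
  // Bibliographic core (MODS-style)
  title: string;
  creator: string;
  datePublished: number;
  publisher: string;
  language: string;
  // ELMS extensions
  genre: string[];
  digitalQualities: string[];
  sensoryModalities: string[];
  authoringPlatform: string[];
  hardwareDependencies: string[];
  peripheralDependencies: string[];
  experiencingTheWork: string; // Plain Language statement for visitors
}

const fisherVrNarrative: ElmsRecord = {
  title: "Everyone at this party is dead / Cardamom of the Dead",
  creator: "Caitlin Fisher",
  datePublished: 2014,
  publisher: "Electronic Literature Organization (ELC3)",
  language: "English",
  genre: ["VR narrative"],
  digitalQualities: ["graphical navigation", "spatial navigation"],
  sensoryModalities: ["sight", "sound", "kinesthetic involvement", "proprioception"],
  authoringPlatform: ["Unity"],
  hardwareDependencies: ["Oculus Rift"],
  peripheralDependencies: ["VR headset", "controller"],
  experiencingTheWork:
    "Text appears briefly and moves across the environment. Audio shifts between " +
    "soft and loud. A controller and head movements are required; some visitors " +
    "may experience dizziness or nausea.",
};

// Print the modalities a visitor would be alerted to before opening the work.
console.log(fisherVrNarrative.sensoryModalities.join(", "));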

Equally important, disabled visitors need to know the physical requirements of a work in order to prepare for the experience via assistive technologies and/or other methods. Thus, ELMS’s metadata has been further extended to meet the needs of disabled visitors and those with sensory sensitivities so that they know the kind of experiences a work involves and can make informed decisions about engaging with it. Specifically, the system, aligned with crip theory and relaxed performance methodology, pairs a controlled vocabulary that extends traditional metadata fields to include those related to disability access––what we refer to as sensory modalities––with descriptive language expressed in Plain/Simple English that further details particular hazards disabled visitors need to know before encountering a work.

Because the participatory, interactive, and experiential qualities of born-digital art, literature, and games involve what Vince Dziekan refers to as “virtuality” and a sense of “liveness” [27], principles underlying the development of the space and the treatment of the works it holds align well with practices associated with live performance. The concept of the performative nature of computers has been raised early on by scholars, such as Brenda Laurel and Janet Murray. Thus, in extending The NEXT’s metadata schema to address a multitude of disabilities and sensory sensitivities, ELMS’s approach to access draws upon the practice of relaxed performance visual story guides, similar to those created for relaxed theater/concert performances, etc., when creating a statement for each work in The NEXT. These statements outline in Plain Language what a visitor can expect from their experience with a work and are tied directly to controlled vocabularies in the metadata that make it searchable and able to be filtered for a customized experience.

A relaxed performance offers a comfortable, welcoming visitor experience that accommodates a wide range of needs. Disabled people and those with sensory sensitivities are able to participate and enjoy an event as valued patrons (“Sensory Relaxed Performances”). A common practice for relaxed performances is the distribution of a guide that lets visitors know in advance what to expect at the performance and how it has been modified to accommodate specific needs. In the context of The NEXT, the metadata located in the sidebar of an individual work’s exhibition space describes its unique, searchable features. The section called “experiencing the work” that follows the description of a work’s content provides the kind of detailed information, written in plain and clear language, that conveys to the visitor what to expect from the work and when specific actions occur.
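
A minimal, self-contained sketch of how such controlled vocabularies could drive filtering follows; the field and function names are assumptions for illustration, not taken from The NEXT’s implementation. A visitor states which modalities they want to avoid and which peripherals they have, and only the works they can engage with on those terms are returned.

// Hypothetical filtering step; field and function names are assumptions,
// not taken from The NEXT's implementation.
interface CatalogueEntry {
  title: string;
  sensoryModalities: string[];
  peripheralDependencies: string[];
}

interface VisitorProfile {
  avoidModalities: string[];      // e.g. ["sound"] for a visitor avoiding audio-dependent works
  availablePeripherals: string[]; // peripherals the visitor actually has
}

function worksMatching(catalogue: CatalogueEntry[], visitor: VisitorProfile): CatalogueEntry[] {
  return catalogue.filter(
    (work) =>
      work.sensoryModalities.every((m) => !visitor.avoidModalities.includes(m)) &&
      work.peripheralDependencies.every((p) => visitor.availablePeripherals.includes(p))
  );
}

const catalogue: CatalogueEntry[] = [
  {
    title: "Everyone at this party is dead / Cardamom of the Dead",
    sensoryModalities: ["sight", "sound", "kinesthetic involvement", "proprioception"],
    peripheralDependencies: ["VR headset", "controller"],
  },
  {
    title: "A hypothetical hypertext novel",
    sensoryModalities: ["sight"],
    peripheralDependencies: [],
  },
];

// A visitor without a VR headset is shown only the works they can open as-is.
console.log(worksMatching(catalogue, { avoidModalities: [], availablePeripherals: [] }));

The same vocabulary that powers the Plain Language statements thus doubles as the index that makes a customized, filtered view of the collection possible.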

Applying ELMS to VR Narratives

Going back to Fisher’s VR narrative, visitors would be alerted to the fleeting text that appears briefly and then disappears. They need to know that text moves across the environment and that the reading time is also brief. If they have color-blindness associated with distinguishing greens and blues, tritanomaly for example, they may not be able to differentiate easily the color of the pins and of other objects such as the cedar tree, many of which carry important information for navigating the experience. They should also be aware that much of the poetic content is communicated over audio, and that the sound oscillates between soft and loud and so could be challenging to sensitive visitors. They would need to know that it is necessary to manage a controller and that vibrations occur to signal that the visitor has successfully targeted a green pin. Head movements are also required. Some of the work’s meaning is communicated spatially via perception of artificial depth. Finally, visitors need to be alerted that they may be affected by internal sensations, such as nausea or dizziness, due to the VR experience.

image

The NEXT’s Exhibition Space for Caitlin Fisher’s Everyone at this party is dead / Cardamom of the Dead with Controlled Vocabularies and Statement for Disabled Visitors and those with Sensory Sensitivities

Final Thoughts

The ELMS metadata schema starts with the premise that all visitors to The NEXT need some type of accommodation to access the born-digital works held in its collections, whether it is information relating to the hardware a hypertext novel needs to function or the sensory modalities it evokes as it is experienced. Visitors who use screen readers, for example, should know in advance that they will need this technology to access a net art piece that requires sight; likewise, those who do not have access to an Oculus Rift headset will be informed when a work, like Fisher’s, requires one. In this way all visitors are equally enabled to act upon their interest in accessing works collected at The NEXT.

Acknowledgements

We would like to thank the organizers of Triangle SCI 2022 for providing our team of researchers the opportunity to work together in person during October 2022 on our project “Improving Metadata for Better Accessibility to Scholarly Archives for Disabled People,” which we have drawn upon for this article. We also acknowledge the contributions of our three other team members who hail from the fields of electronic literature, digital humanities, and disabilities justice: Erika Fülöp, PhD, U of Toulouse; Jarah Moesch, PhD, RPI; and Karl Hebenstreit, Jr., MS, Dept. of Education.

Bibliography

Berne, Patricia, Aurora Levins Morales, David Langstaff, and Sins Invalid. "Ten Principles of Disability Justice." WSQ: Women's Studies Quarterly 46, no. 1 (2018): 227-230. doi:10.1353/wsq.2018.0003.

Chin, Natalie M. "Centering Disability Justice." Syracuse L. Rev. 71 (2021): 683.

Kafer, Alison. Feminist, Queer, Crip. Bloomington, Indiana: Indiana University Press, 2013.

Laurel, Brenda. Computers as Theatre. NY, NY: Addison-Wesley, 1991.

Murray, Janet. Hamlet on the Holodeck: The Future of Narrative in Cyberspace. Cambridge, MA: The MIT Press, 1997.

Piepzna-Samarasinha, Leah Lakshmi. Care Work: Dreaming Disability Justice. Vancouver: Arsenal Pulp Press, 2018.

“Sensory Relaxed Performances: How-To and What To Expect.” Sensory Friendly Solutions. https://www.sensoryfriendly.net/sensory-relaxed-performances/.

Sins Invalid. Skin, Tooth, and Bone – The Basis of Movement is Our People: A Disability Justice Primer, Reproductive Health Matters, 25:50, 149-150, 2017. DOI: 10.1080/09688080.2017.1335999.

Eduardo Kac

Space Art: My Trajectory

This paper traces the author’s trajectory in space art. It starts in 1986, when he first conceived of a holographic poem to be sent in the direction of the Andromeda galaxy, and continues into the twenty-first century through several works, including Inner Telescope, realized with the cooperation of French astronaut Thomas Pesquet aboard the International Space Station (ISS) in 2017. The author discusses his theoretical and practical involvement with space-related materials and processes. Special attention is given to his space artwork Adsum, conceived for the Moon.

Introduction

I started my career in 1980, with a multimedia practice that integrated poetry, performance, and the visual arts. Beginning in 1982, I pivoted towards an engagement with technology as my creative medium, a sustained orientation that marks its fourth decade in 2022. Albeit lesser known than my other bodies of work, space art has been central to my interests since the early 1980s. In what follows I will revisit some of the key moments in my space art trajectory.

Ágora: a holopoem to be sent to Andromeda

In 1983 I introduced a new art form that I named holographic poetry, or holopoetry [28], which consisted in the use of unique properties of holography to create poems that floated in the air and changed their configurations according to the relative position of the observer.

One of the fundamental tenets of holopoetry is what I called antigravitropism, i.e., the use of language in a way that does not follow the perceivable effect of gravity on writing. In other words, the creation of works that, albeit produced on Earth, were not limited by the action of gravity on matter because the holopoems were composed of light (i.e., photons, massless particles). This meant that, contrary to telluric objects, the letters and words in the holopoems were anti-gravitropic; they hovered freely outside, inside, or through the surface of the recording medium (i.e., holographic film or glass plate). Through the manipulation of this plasticity I created shape-shifting works; I produced a word-image continuum that, from the point of view of a moving observer, exists in a constant state of flux. I developed this art form until 1993, resulting in a body of work comprised of twenty-four pieces.

In 1986 I created my first space artwork, a holopoem to be sent in the direction of the Andromeda galaxy (see Kac 1). This work is in Portuguese and is entitled Ágora (agora, in English). In the work itself, we see the word Agora (now, in English) rendered in wireframe. The difference between the two words, in Portuguese, is the acute accent, used to mark the vowel height. With this diacritic mark, the word makes reference to space; without, it makes reference to time. Taken together, they allude to the intertwined relationship between space and time.

image

Figure 20. (Kac 1) Ágora, holopoem conceived to be sent in the direction of the Andromeda galaxy (not launched). Kac, 1986.

As seen in the holopoem, the letters of AGORA (all in uppercase, in order to create a weight equivalence between the letters) are written three-dimensionally with a wireframe font. This enabled all strokes and angles of the letters to be seen simultaneously, dramatizing their immaterial form through emphasis on the outlines. Thus, the ‘emptiness’ of the letters echoes the perceived ‘void’ of space.

Ágora was conceived to be released in space and propelled in the direction of the Andromeda galaxy, like a message in a bottle travelling through the vacuum of space. Ágora was made with an angle of incidence of 45 degrees. This means that whenever light would shine on the hologram at approximately 45 degrees, the hologram would ‘irradiate’ the word AGORA in wireframe, visible to the naked eye. My vision for this work was that, throughout its trajectory in space, it would function as an ‘intermittent star’: whenever light would strike it at approximately 45 degrees, it would diffract the incoming light and output a wavefront that would be visible as the word AGORA. As it tumbled amid the darkness of the cosmos, it would occasionally ‘emit’ light in different directions, always encoded with the urgency of its message: ‘now’.

Spacescapes

In 1989, I transmitted from Chicago my artwork Spacescapes [30] via Slow-Scan Television (SSTV) simultaneously to Pittsburgh (to the DAX Group) and to Boston (to a local group of artists). The transmission took place in the context of the Three-City Link event, a three-node ephemeral network configured specifically for the event.

SSTV was an early type of videophone that allowed the transmission/reception of sequential still video images over regular phone lines. On average, it took from eight to twelve seconds to transmit each image.

In Spacescapes (see Kac. 2), an alternating sequence of satellite views and microscopic images of digital circuits fused into one another at the receiver's end, forming an electronic palimpsest in which large and small merged.

image

Figure 21. (Kac 2) Spacescapes, slow-scan television, screen, telephone line, satellite and microchip images. Example of a transitional frame as seen by recipients. Kac, 1989.

This work explores the analogy between patterns seen up close at a minute scale and forms revealed at great distances. Spacescapes creatively manipulates an intrinsic characteristic of the system, which was to scan, from top to bottom, the incoming image over the preceding one. As a result, their amalgamation took place at the receiver’s end, producing a continuous transformation of landscapes seen top-down—in which it was very difficult to discern what was the Earth seen from a satellite and what was a microchip seen through a microscope.

Through this work I wanted to convey an aesthetic of magnitudes, alternating perspectives from the inward motion into a microscope to the vantage point above the surface of the Earth, and back again, continuously. The transitions between the two deliver a one-of-a-kind experience, interlaced as they are with the same electronic glow. Ultimately, the uninterrupted fusion of ultra-close and ultra-far images suggests the interconnectedness of the infinitesimal and the monumental, and the awe of our relative position in the world.

Monogram

My ink drawing Monogram [32], which evokes an orbital trajectory, a rising rocket, and a moon (and is also my emblematic signature), flew to Saturn on the Cassini spacecraft in 1997. Traditionally, a signature is a complement to an artwork, a graphic surplus often placed on the lower right corner of a picture or at the bottom of an object, to indicate authorship and authenticity. However, in the case of Monogram, I elevate the signature to the condition of artwork itself by drawing attention to its visual qualities and semantic resonances. The curlicues of Monogram configure stylized representations of visual elements unique to space exploration (see Kac. 3). Its iterability assures its legibility in the absence of the sender or a specific addressee.

image

Figure 22. (Kac 3) Monogram, Kac's ink drawing, which evokes an orbital trajectory, a rising rocket and a moon (and is also the artist's emblematic signature), flew to Saturn on the Cassini spacecraft in 1997. Cassini entered orbit around Saturn in 2004. Kac, 1996.

The original, wavy ink drawing was digitized and included in a DVD, which was placed between two pieces of aluminum to protect it from micrometeoroid impacts, and mounted to the side of the two-story-tall Cassini spacecraft beneath a pallet carrying cameras and other space instruments that were used to study the Saturnian system. A patch of thermal blanket material was installed over the disk package.

The Titan IVB/Centaur rocket carried the Cassini spacecraft, as they launched from Cape Canaveral Air Force Station's Launch Complex 40, on October 15, 1997. Cassini entered orbit around the giant planet in 2004 and completed 294 Saturn orbits. On September 15, 2017, Cassini deliberately dove into Saturn's atmosphere, burning up and disintegrating, in order to prevent the contamination of Saturnian moons targeted for research on the possibility of life.

This means that the artwork, with each curve sweeping into another, was in deep space for twenty years, a meaningful fact in itself and also for its symbolism: the presence in the cosmos of a unique physical mark that stands for the individual maker, a personal glyph, a manu propria sign that points to the signer and voluntarily expresses it. A signature is indexical by definition, that is, it is a signifier that is physically connected to the signified; it unequivocally affirms the existence (in the present or the past) of the signee by contiguity. A “signature work” means an emblematic piece, one that epitomizes the aesthetic vision of the artist. The loops and curves of Monogram define, instead, a work-cum-signature, a consistent graphic pattern made of variable twirling traces that, overall, can be repeated.

If today we already travel telerobotically between the planets of the Solar System (with the exception of Voyager, which has flown beyond the heliopause and has entered interstellar space), in the future crewed interplanetary spaceflight will become more common. In this new context, art will be a meaningful participant in the journey. In its singular, swift lines, Monogram seeks to express the vitality of cultural practice in interplanetary space.

The Lepus Constellation Suite

Created, produced and transmitted in 2009 from Cape Canaveral, Florida to the Lepus Constellation, the suite is composed of five line drawings that were also rendered as five engraved and painted steel discs, measuring 20 inches in diameter each [34] [5].

The Lepus Constellation Suite is part of a larger series entitled Lagoglyphs, ongoing since 2006, in which I develop a leporimorph or rabbitographic form of writing. The larger series includes prints, murals, sculptures, paintings, an algorithmic animation, and satellite works created specifically for visualization in Google Earth (more on the latter below). As visual language that alludes to meaning but resists interpretation, the Lagoglyphs series stands as the counterpoint to the barrage of discourses generated through, with, and around my GFP Bunny (a green-glowing transgenic bunny, called Alba, that I created in 2000, and that has been featured in exhibitions and publications worldwide).

The pictograms that make up the Lagoglyphs are visual symbols representing Alba rather than the sounds or phonemes of words. Devoid of characters and phonetic symbols, devoid of syllabic and logographic meaning, the Lagoglyphs function through a repertoire of gestures, textures, forms, juxtapositions, superpositions, opacities, transparencies, and ligatures. These coalesce into an idioglossic and polyvalent script structured through visual compositional units that multiply rather than circumscribe meanings.

Composed of double-mark calligraphic units (one in green, the other in black), the Lagoglyphs evoke the birth of writing (as in cuneiform script, hieroglyphic orthography, or ideography). However, they deliberately oscillate between monoreferentiality (always Alba) and the patterns of a visual idiolect (my own). In so doing, the Lagoglyphs ultimately form a kind of pictorial idioglossia or cryptolanguage.

In the specific case of The Lepus Constellation Suite, the five lagoglyphic messages were transmitted towards the Lepus Constellation (below Orion) on March 13, 2009, from Cape Canaveral, Florida (see Kac. 4). The transmission was carried out by Deep Space Communications Network, a private organization near the Kennedy Space Center. At a frequency of 6105 MHz, the transmission was accomplished through high-powered klystron amplifiers connected by a traveling wave-guide to a five-meter parabolic dish antenna. Based upon its stellar characteristics and distance from Earth, Gamma Leporis (a star in the Lepus constellation that is approximately 29 light-years from Earth) is considered a high-priority target for NASA's Terrestrial Planet Finder mission. The Lepus Constellation Suite will arrive in its vicinity in 2038.

image

Figure 23. (Kac 4) The Lepus Constellation Suite, 5 engraved and painted steel discs (20 inches diameter each) with lagoglyphic interstellar messages transmitted to the Lepus Constellation on March 13, 2009 from Cape Canaveral, Florida. Illustrated is disc #3. Kac, 2009.

Lagoogleglyphs

Another suite of works in the Lagoglyphs series is entitled Lagoogleglyphs (2009-ongoing) [36] [6], space artworks that inscribe pixelated lagoglyphs (my abovementioned green rabbit glyphs) onto the environment and make them visible to the world through the perspective of satellites. These pixelated artworks are created at a global scale and can be experienced in person at their respective venues, directly via satellites, or through Google's geographic search engine (hence their name). In the latter case, the viewer may choose to see the work in one of the following three options:

  1. the familiar Google Maps (in satellite view),

  2. Google Earth (which can be accessed by typing “Google Earth” into a web browser) or

  3. the equally free Google Earth Pro app (which has the additional feature of allowing the viewer to see a map over time by activating the Historical Imagery slider).

In addition to the distributed artworks (seen in person; online; from space), I have created a video for each individual Lagoogleglyph by capturing, in Google Earth Pro, the view from space all the way down to the eye of the rabbit glyph on Earth (and back again to outer space). The videos loop, are silent, and average one minute in duration. Between 2009 and 2022, I have created five Lagoogleglyphs (and their respective videos) in the following locations: 1) Rio de Janeiro; 2) Mallorca; 3) London (see Kac. 5); 4) Strasbourg; and 5) Geneva. The videos #1 through #4 were exhibited together, for the first time, at the Venice Biennale, from April 20 to November 27, 2022.

image

Figure 24. (Kac 5) Lagoogleglyph 3, space artwork realized in London to be seen by satellites, to be experienced in person and/or through Google Maps (satellite view), Google Earth or the Google Earth Pro app. It measures 20 x 15m (65.6 x 49.2 ft). Kac, 2018.

Lagoogleglyph 1 was implemented on the roof of the art center Oi Futuro, in Rio de Janeiro, in 2009, as part of my solo exhibition Lagoglyphs, Biotopes and Transgenic Works, curated by Christiane Paul, on view at Oi Futuro from January 25th to March 30th, 2010. Printed on a large, polygonal canvas measuring approximately 8 x 17 meters, it covered the entire roof of the building. For the inaugural work in the series, I custom-ordered a WorldView-2 satellite photograph, which was subsequently incorporated by Google into its search engine by pulling it from the DigitalGlobe catalogue. Even though the roof installation was ephemeral, the work still remains visible in Google Earth Pro. To see it, the reader is encouraged to drag the Google Earth Pro time slider to the date of January 2010. The time slider is accessible through a topbar icon that consists of a clock capped by an arrow pointing counterclockwise. The original Lagoogleglyph 1 canvas, together with documentation material, is in the permanent collection of the Museu de Arte do Rio-MAR, Rio de Janeiro.

Lagoogleglyph 2 was also printed on canvas. This time, the work measured approximately 10 x 12 m (32.8 x 39.4 ft) and was displayed on the roof of Es Baluard Museum of Modern and Contemporary Art, Palma de Mallorca, Spain, in 2015. The work was commissioned by the museum and is also in its permanent collection. Its image was captured by the WorldView-3 satellite.

Lagoogleglyph 3 and Lagoogleglyph 4 were both made and exhibited in 2018; the former in London and the latter in Strasbourg. This time, instead of rooftops, both works were installed on the ground and were composed of grass and field marking paint. In addition to their distinct compositions, they also differ in scale and execution. Lagoogleglyph 3 measured 20 x 15 m (65.6 x 49.2 ft). It was painted directly on the grass at Finsbury Park, London, on the occasion of my solo exhibition Poetry for Animals, Machines and Aliens: The Art of Eduardo Kac, realized at Furtherfield, an art center located at Finsbury Park, from April 7th to May 28th 2018, and curated by Andrew Prescott and Bronac Ferran.

Lagoogleglyph 4 measured approximately 8.5 x 4.2 m (28 x 14 ft). It was made of sod squares and installed in the garden of the art center Apollonia – European Art Exchanges, in Strasbourg.

Lagoogleglyph 5 was installed in the Cimetière de Plainpalais, generally known as Cimetière des Rois, in Geneva, in the context of the group exhibition Open End 2, from September 15 to January 31, 2022, organized by Vincent Du Bois. The Cimetière des Rois (Cemetery of Kings) is renowned for being the final resting place of notables such as Jorge Luis Borges and Jean Piaget, and for hosting group shows with artists such as Sophie Calle and Olafur Eliasson.

Inner Telescope

After ten years of work as artist-in-residence at the Observatoire de l'Espace (Space Observatory), the cultural lab of the French Space Agency (CNES), in 2017 my artwork Inner Telescope was realized on the International Space Station (ISS) with the cooperation of French astronaut Thomas Pesquet (see Kac. 6). Inner Telescope was specifically conceived for zero gravity and was not brought from Earth: it was made in space by Pesquet following my instructions. The fact that Inner Telescope was made in space is symbolically significant because humans will spend ever more time outside the Earth and, thus, will originate a genuine new culture in space. Art will play an important role in this new cultural phase. As the first artwork specifically conceived for zero gravity to be literally made in space, Inner Telescope opens the way for a sustained art-making activity beyond our terrestrial dwelling.

Inner Telescope was made from materials already available in the space station. It consists of a form that has neither top nor bottom, neither front nor back. Viewed from a certain angle, it reveals the French word “MOI” [meaning “me” or “myself”]; from another point of view one sees a human figure with its umbilical cord cut. This “MOI” stands for the collective self, evoking humanity, and the cut umbilical cord represents our liberation from gravitational limits. Inner Telescope is an instrument of observation and poetic reflection, which leads us to rethink our relationship with the world and our position in the Universe.

In the course of developing the work, I created a protocol for its fabrication aboard the ISS, which I personally transmitted to Pesquet in 2016 during our work session at ESA’s European Astronaut Centre, a training facility in Cologne.

image

Figure 25. (Kac 6) Inner Telescope in the cupola, ISS. Kac, 2017.

In addition, I also created a separate protocol for the video documentation of the work aboard the ISS. From the raw footage produced by Pesquet I edited a 12-min video, which is an artwork in itself; in it we see Inner Telescope being made in the Columbus module, its perambulation through the station, away from the module and in the direction of the cupola, and finally its arrival at the cupola with the Earth in the background. I published this video in a limited edition of five copies. The video Télescope intérieur (Inner Telescope) is in the permanent collection of Les Abattoirs, Museum - Frac Occitanie Toulouse, a public institution that houses both a French museum and the Regional Fund for Contemporary Art. I have made additional artworks in the Inner Telescope series, including drawings, photographs, prints, embroideries, installations, and artist’s books.

The project also included the documentary film "Inner Telescope, a Space Artwork by Eduardo Kac", directed by Virgile Novarina (French, with English subtitles, 2017). Since its release, the documentary has been continuously screened internationally at museums, theaters and other places, including notable venues such as the Louvre Museum, Paris. The film was published as a DVD in 2017. In addition, the bilingual book Eduardo Kac: Télescope intérieur / Inner Telescope was edited by Gérard Azoulay and published by the Observatoire de L'Espace/CNES, Paris, in 2021 [39].

My Space Poetry manifesto was published in 2007 [40], when I started to work on Inner Telescope. In 2017, I finally realized the dream of challenging the limits of gravity I had pursued for more than thirty years: the creation, production, and experience of a work directly in outer space. The astronaut's mission was entitled "Proxima" and was coordinated by the European Space Agency (ESA). Inner Telescope was coordinated by L'Observatoire de l'Espace, the cultural lab of the French Space Agency.

Adsum, an artwork for the Moon

Conceived for the Moon, Adsum is a cubic glass sculpture inside of which four symbols are laser engraved (see Kac. 7). The cube measures 1x1x1cm (0.4x0.4x0.4”). The symbols are positioned one in front of the other, thus forming a spatial poem inside the solid glass cube that can be read in any direction [41]. ‘Adsum’ means ‘Here I am’ in Latin, as used to indicate that the speaker is present (equivalent to the exclamation ‘here!’ in a roll call).

image

Figure 26. Adsum (in progress), space artwork (laser-etched optical glass), 1x1x1cm (0.4x0.4x0.4"). Kac, 2022.

To create this space poem, I developed a new typeface in which the letter “N” takes the form of an hourglass and the letter “S” has the shape of the infinity symbol. This makes the work legible from any point of view within the cube. The two other letters, which stand between “N” and “S,” are a lowercase “o” and an uppercase “O” (evoking the Moon and the Earth, respectively). Taken together, it is always possible to read either “NoOS” or “SOoN” in three dimensions.

In addition, the design and spatial arrangement of the letters also produce a purely visual experience: a reversible transition from hourglass (representing human experience of time) to infinity (representing cosmic time). The shift in scale from the lowercase 'o' to the uppercase 'O' suggests a zoom effect going from time as apprehended by human cognition to the temporal expanse of the universe (and vice-versa).

Adsum flew on an Antares 230+ rocket from Wallops Flight Facility, Virginia, to the International Space Station on February 19, 2022. The artwork was aboard Cygnus NG-17 (Northrop Grumman-17), a cargo resupply mission of the Northrop Grumman Cygnus spacecraft to the ISS under the Commercial Resupply Services (CRS) contract with NASA. Adsum was housed in the Columbus module of the ISS.

Adsum’s journey to the ISS in 2022, traversing anaerobic, radioactive coldness, was a test to confirm its readiness for space flight. Adsum will progressively approach the Moon in three additional steps, each with its own visual and material version: 1) Adsum (regex version) is composed of typographic characters and is designed to orbit our nearest celestial neighbor, in digital form, on a USB drive aboard the Orion spacecraft; 2) Adsum (planar version) will arrive on the Moon aboard Intuitive Machines’ Nova-C lander, etched on a Galactic Legacy Labs’ nickel nanofiche disk; 3) Finally, Adsum (lander version), identical to the cubic glass sculpture that flew to the ISS, will be aboard an Astrobotic lander that will arrive on the Moon NET 2023. As a result, both the planar and the sculptural versions of Adsum will literally be on the Moon, there staying for endless time, protected from the harsh lunar environment inside their respective landers, awaiting discovery by future space explorers—possibly inhabitants of the first lunar settlements.

In order to communicate the work’s message on Earth, I have created a series of pieces that can be exhibited together or separately, including a limited edition of the laser-engraved glass cube itself, dozens of ink drawings, and a looping video in which we see the minute cube up close, continuously turning to reveal its multiple meanings, with the myriad reflections and refractions of the symbols adding a unique aesthetic quality to the experience. Adsum embodies and expresses the fugacity of the human condition and our awe before the cosmos.

Conclusion

As demonstrated in the preceding pages, since the 1980s I have been theorizing and producing art and poetry that challenge the limits of gravity. It is my conviction that space art can be pursued in many different ways, all equally valid in their respective approaches.

However, in light of the fact that what enables space exploration is its underpinning material reality, it is clear that art that directly engages with the technologies of space possesses a particularly distinct characteristic. Not in the sense of style or form, but in the sense of its contiguity with human presence and agency outside of our home planet. Making art on Earth through the use of space media (such as satellites), making art directly in space (in Earth’s orbit or beyond), or making art on Earth specifically to be flown to space — all are modes of creation and production that correspondingly have the symbolic and factual meaning of pointing to a future in which art and space exploration are intrinsically, and routinely, intertwined. Ultimately, art that directly engages with the technologies of space has the potential to contribute to the creation and development of what we may call “space native” culture—one created in space and for space.

Fabien Benetou

Why PDF is the wrong format to bring text to XR and why the Web with proper provenance and responsive design from stylesheets is all we need

For The Future of Text, numerous discussions have started from the premise that PDF is an interesting format to bring to VR or AR.

This is the wrong question. It assumes one medium can be transcluded into another. It assumes that because VR or AR, here XR for short, was named “The Ultimate Display” by Ivan Sutherland in 1965, it could somehow capture all past displays, and their formats, meaningfully.

Even though XR eventually could, we do not actually watch movies today that sequentially show the pages of books. Rather, we get a totally new experience that is shaped by the medium.

So yes, today we can take a PDF and display it in XR, page after page, as images at first, and try to somehow reproduce the experience of reading in a headset. That could open up a lot of new usages because, unlike with a television or screen, we can actually interact back. We can write back on the content being displayed. Yet, what is the very reason for a PDF to exist? A PDF, or Portable Document Format, exists to be the same on all devices. It is a format meant not to be interacted with but rather to be displayed untouched, verbatim. It has been somewhat modified recently to allow the bare minimum of interaction, i.e. signatures, while preserving the integrity of the rest of the document. This has tremendous value, but it begs the question: why would one want this in a spatial world? What is the value of a document keeping its shape, namely A4 or Letter pages, while the entire world around it can be freely reshaped? What is the value of a static document once interactive notebooks allow one to not just "consume" a document but rather play with it, challenge it, share it back modified?

PDF does provide value, but that value comes from a mindset of staticity, of permanence, of being closed.

The reality of most of our daily life, of our workflows, is not that static. A document might indeed be read printed on A4 or Letter paper, but it might just as well be read on a 6.1" portrait display, an A4-ish e-ink device, or a 32" 4K landscape monitor. Should the document itself remain the same, or should its content adapt to where and how one wants to consume it, and eventually push back on it?

I would argue that any content that does not invite annotation, or better still an actual attempt at existing in its target context, is stale. Beyond that, it does not promote hermeneutics, our own ability to make sense of it. Rather, it presents itself as the "truth" of the matter, and it may very well be, but unless it can be challenged and thereby proven as such, it is a very poor object of study.

Consequently a PDF, like a 4.25 x 6.87" paperback, is but a relic of an outdated past. It is an outdated symbol of knowledge rather than a current vector of learning.

The very same content could, using HTML, provide the very same capabilities and more. An HTML page can be read on any device with a browser, and much beyond. An HTML page with the right CSS, or cascading stylesheets, can be printed, either actually printed to paper or virtually to a document, including a PDF or an ePub, and thus become something static again. With the right stylesheets that document can look exactly like the author wants on whatever devices they believe it will best be consumed on, yet without preventing readers from consuming it the way they want, because they have a device nobody else has.
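As a minimal, illustrative sketch of that idea (the markup, styling and sizes below are hypothetical, not taken from any particular site), the same semantic content can reflow freely on screen and still print to a fixed A4 page:

~~~~~ code sample ~~~~~

<!-- Semantic markup only: structure, no presentation baked in -->
<article>
  <h1>The Future of Text</h1>
  <p>Body text that reflows to whatever device renders it.</p>
</article>

<style>
  /* Screen: adapt to the reader's viewport, whatever its size */
  article { max-width: 65ch; margin: auto; font-size: 1.1rem; }

  /* Print: the author's fixed rendition, "static again" */
  @media print {
    @page { size: A4; margin: 2cm; }
    article { font-size: 11pt; }
  }
</style>

~~~~~ end code sample ~~~~~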

So even though HTML and PDF can both be brought into XR, one of them begs for skeuomorphism. The PDF is, again by what it claims as its intrinsic value, trapped in a frame. Bringing that frame into XR works, of course, but it limits how one can interact with it.

Consequently, focusing on bringing PDF to XR means limiting the ability to work with text. HTML, especially when written properly, namely with tags that represent semantics rather than how to view the content, so that presentation is properly delegated to stylesheets, is not trapped in skeuomorphism. The content of an HTML document, in addition to being natively parseable by the browsers already running on XR devices, can then be shaped to the usage. It can also be dynamic, from the most basic forms, to image maps, to 3D models that can in turn be manipulated in XR, to, last but not least, computational notebooks. While PDFs are static in both shape and execution model, namely none, an HTML document can also embed script tags that modify its behavior. That behavior allows the intertwining of story and interaction. A static document delegates interpretation to the reader, and poorly, as argued before, owing to the minimal ability to modify it while reading it; in practice it makes the exploration of complex systems impossible. An HTML document, in contrast, can present the content so that the system being studied is itself embedded and can thus run, not through the mind of the reader, but actually run. The simulation becomes the content, letting the reader become an explorer of that content, able to engage with much richer and more complex systems while confronting their understanding with the truth of that system.
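A minimal sketch of such an embedded, runnable system, assuming nothing more than a browser; the compound-interest example and its numbers are illustrative, not taken from any existing notebook:

~~~~~ code sample ~~~~~

<!-- A sentence of "story" followed by the system it describes, runnable in place -->
<p>Compound interest grows faster than intuition suggests. Try it:</p>
<label>Rate % <input id="rate" type="number" value="5"></label>
<output id="result"></output>

<script>
  // The simulation is the content: the reader changes the input and the
  // document recomputes, rather than asking the reader to imagine the result.
  const rate = document.getElementById('rate');
  const result = document.getElementById('result');
  function run() {
    let capital = 100;
    for (let year = 0; year < 10; year++) capital *= 1 + rate.value / 100;
    result.textContent = ' 100 becomes ' + capital.toFixed(2) + ' after 10 years';
  }
  rate.addEventListener('input', run);
  run();
</script>

~~~~~ end code sample ~~~~~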

Unfortunately, even though there exists today a solution for true responsiveness of 2D content, namely stylesheets, this is not true of 3D content, even less of spatial content that can be manipulated in VR or AR or both. True responsiveness remains challenging because the interactions are radically different, and the spaces in which one has such interactions are also radically different. A 6.1" portrait display, an A4-ish e-ink device or a 32" 4K landscape monitor are still, in the end, flat surfaces one can point at, scroll within, etc. Reconsidering this and more in both a physical room and a virtual one, eventually with some scene understanding (e.g. flat surface detection for floors and walls), leads to a vastly different richness of interactions. Consequently one must not just consider how to reflow a 2D document from one rectangle to another rectangle but rather into a partly filled volume. Currently there is no automated way to do so besides displaying the document skeuomorphically in the volume. That works but is not particularly interesting, in the same way that one does not watch a movie that merely shows the pages of a book, even a good book. Instead, being serious about picking a document format, be it PDF, HTML, ePub or another, means being serious about the interactions with that document and the novel interactions that truly novel interfaces, like VR and AR, bring.

Assuming one still does want to bring 2D documents into a volume, the traditional question of provenance remains. As we bring a document in, how does the system know what the document is, its format, in order to display it correctly, but also its origin and other metadata? The Web solved most of that problem through URIs, and more commonly URLs, or DOIs being looked up to become URLs pointing to a document, either a live one or the archive of one. The Web already provides a solution to how the content itself can move, e.g. redirection, and browsers are able to follow such redirections, providing a pragmatic approach to a digital world that changes over time.

The question then often becomes: if formats already exist, if provenance can be solved, is there not a risk of pointing only to live documents that can become inaccessible? That is true, but unfortunately death is a part of life. Archiving content is a perpetual challenge, but it should not come at the cost of the present. Even so, mechanisms are already in place, namely local caching and mirroring. Local caching means that once a document has been successfully accessed, the reading system can fetch a complete or partial copy and then rely on it in the future if the original document is not available. PWAs, or Progressive Web Applications, feature such a mechanism, where the browser acts as a reader of documents but also as a database of visited pages, proxying connections and providing a fallback so that, even while offline, content that is already on the device remains accessible. Finally, mirroring, centralised or not, ensures that documents remain accessible if the original source is unavailable for whatever reason. The fact that most websites provide neither PWAs nor downloadable archives for efficient mirroring is in no way a testament that the Web lacks the capacity for resilience, only that good practices for providing documents over time are not yet seen as valuable enough. Luckily, efforts like the Internet Archive do mirror content even when the original owner has made no effort to make their content more resilient. Finally, technical solutions like IPFS, or the InterPlanetary File System, make replication across machines more convenient and thus more reliable, again despite most authors not putting the necessary care into keeping their work available, besides handing it to a third party that will archive it without necessarily facilitating access.
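As an illustrative sketch of that local-caching idea, assuming a standard service worker registered by the reading application (the file and cache names are hypothetical):

~~~~~ code sample ~~~~~

// sw.js - once a document has been fetched successfully, keep a copy and
// fall back to it when the network, or the original site, is gone.
const CACHE = 'read-later-v1';

self.addEventListener('fetch', (event) => {
  event.respondWith(
    fetch(event.request)
      .then((response) => {
        // store a copy of every successfully fetched document
        const copy = response.clone();
        caches.open(CACHE).then((cache) => cache.put(event.request, copy));
        return response;
      })
      // offline, or the document is gone: serve the cached copy if we have one
      .catch(() => caches.match(event.request))
  );
});

~~~~~ end code sample ~~~~~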

Finally, be it PDF, HTML, ePub or another format, the focus hitherto has been on bringing text, thus 2D, arguably even 1D if seen as a single string, into a volume, thus a 3D space with (AR) or without (VR) context. Even though this provides a powerful way to explore a new interface, XR, we must remain aware that this is still a form of transclusion. We are trying to force old media into a new one, and it will thus remain a limited endeavor. Yes, it would surely be interesting to bring the entirety of humanity's knowledge into XR, but is it genuinely a worthwhile pursuit? Past media still exist alongside XR and thus allow use, either while using XR (e.g. using a phone or desktop screen while using AR, or a collaborative experience with one person in VR and another video calling from a museum), or before and after it (e.g. using a desktop to prepare a VR space then share it after), or even through our memory of it. Consequently, even without any effort to bring the content into XR, it does remain accessible somehow. The question could rather become: what format native to 3D could better help create novel usages, based or not on older formats? For this there are already countless solutions, as 3D software long predates XR. That said, two recent formats did emerge: glTF and USD, the Graphics Language Transmission Format and Universal Scene Description. Both are roughly equivalent, but glTF, besides relying on the most popular Web format for data, namely JSON, already provides community extensions. This, I believe, is the most interesting aspect. glTF does not try to be all-encompassing but rather provides a minimum feature set on which one can build for one's own usage. That escape valve means a file remains readable by all other software, yet anyone who finds it insufficient can build on it and adapt it to their needs. This means glTF could become a format not just for exchanging 3D models to display manipulable objects in XR, but finally one in which such objects address the points touched on before, namely text as a primitive, with its provenance explicit.
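A sketch of what that could look like as data; EXT_text_provenance is a hypothetical extension name, not an existing glTF community extension, and the values are placeholders:

~~~~~ code sample ~~~~~

{
  "asset": { "version": "2.0" },
  "extensionsUsed": ["EXT_text_provenance"],
  "nodes": [
    {
      "name": "quote-panel",
      "extensions": {
        "EXT_text_provenance": {
          "text": "A short passage to be laid out and manipulated in XR",
          "source": "https://example.org/original-article.html",
          "retrieved": "2022-11-30"
        }
      }
    }
  ]
}

~~~~~ end code sample ~~~~~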

Fabien Benetou

The Case Against Books

{Analysis: https://fabien.benetou.fr/Analysis/Analysis}

Books are amazing. Books are compact, affordable ways to help Humanity extract itself from a naive state of Nature.

Yet... books are terrible. Books actually were amazing centuries ago. Books are symbols of knowledge in the sense that, as we look at a book, we imagine how it will help us learn. Yet the truth is far removed from that. Books can be terrible, with poorly written content or, arguably worse, beautifully written content that is factually wrong or deceiving.

Books were once the state of the art for conveying knowledge. That time is long gone, if it ever actually existed. Books are terrible because they give the sense of learning. They give the impression that because one has read about a topic, one is now knowledgeable about it. And yes, if one knows absolutely nothing about a topic, even the most modest book can improve that reader's state of knowledge. Yet, is it actual knowledge of the topic, or rather the impression of it? The only way to validate or invalidate that claim is to test it against reality. The only way to ensure that one did learn from a book is to check that newly acquired knowledge against the object of the topic itself. That means the reader must not just read but also test. This can be rather inconvenient; for example, if the topic of the book is the temperature of the Sun, the reader would need a complex apparatus, e.g. a spaceship, to go and measure it. Instead, this is often delegated to exercises, end-of-chapter questions with answers from the author. The reader, instead of reading what the author wrote, has to temporarily let go of the book, use their own memory of its content, and try to see how that knowledge can help solve the challenge. This can be assimilated to a simulation: the reader tries to simulate the topic and solve the problem. This already shows a very different way to interact with a book than "just" reading. Yet it leaves much to be desired, in the sense that the answer provided is often succinct. The reader verifies that their answer matches the author's. If it is correct, they assume they know. A great exercise will provide ways for the reader to actually verify on their own, like a mathematical proof done two different ways, that the result they found is indeed correct. This, though, entirely redefines both the consumption and the creation of a book. At that point a book is no longer just a thing to read but simultaneously a thing to read and a thing to exercise with.

This is a delicate situation for everyone involved. Designing exercises that genuinely bring the person involved to a better understanding, without the ability to correct them along the way, is not the same skill as writing. Also, having the confidence to launch oneself into exercises is vastly more demanding than reading a sequence of words and assuming they are indeed interpreted in a way the writer would find correct. That means a traditional book to read is fundamentally different from what is usually referred to as a textbook. Yet the very fact that expensive textbooks are the basis of classes, the one place and moment in time dedicated to learning, is not random. Over time the consensus has been that a book by itself is not sufficient; rather, it is a text intertwined with checkpoints that can validate, or at least invalidate, the acquisition of that knowledge that is superior. Most textbooks, though, are not consumed outside the classroom. This begs the question of why. How come, if a textbook is generally regarded as superior, it is limited to the classroom when anybody at any time could use it?

The hypothesis here is that both designing and actually learning from a textbook are more demanding than solely reading a book. Consequently the classroom provides support, in the form of direct help from the teacher and also motivation from a broader curriculum with social markers like a diploma. Yet textbooks, in or outside a classroom, are themselves also relics of the past. For decades now the computer has provided a new way to both design and consume textbooks, namely that a textbook can now provide not just an intellectual environment in which to run exercises but a computational one.

A modern text provides the text, the exercises, but also the computational environment in which to complete the exercises. This sounds like a minor technical improvement, but it is a radical difference, because that environment becomes reality to the reader. The reader now has a place, even though an imperfect one in the sense of being simplified, where they can test their knowledge. This is a fundamental difference because the reader is no longer bounded by the challenging yet very limited space offered by exercises and their solutions. Instead the reader can complete carefully crafted exercises but also everything in between. Exercises become ways to navigate efficiently through the concepts the author believes are essential, but nothing more. The environment provided is of incredible value to the reader.
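A minimal sketch of the difference, assuming JavaScript as the notebook language; the cooling model, its constants and the exercise are illustrative, not drawn from any existing textbook:

~~~~~ code sample ~~~~~

// The "reality" the text describes: a cup of coffee cooling (Newton's law of cooling)
function temperatureAfter(minutes, start = 90, ambient = 20, k = 0.05) {
  return ambient + (start - ambient) * Math.exp(-k * minutes);
}

// The author's exercise, with its checkpoint
const exercise = {
  question: 'After how many whole minutes is the coffee first below 40 degrees?',
  check: (answer) => temperatureAfter(answer) < 40 && temperatureAfter(answer - 1) >= 40,
};

// The reader can answer the exercise...
console.log(exercise.check(26)); // true
// ...but can also explore everything in between, because the model itself is present:
for (let t = 0; t <= 60; t += 10) console.log(t, temperatureAfter(t).toFixed(1));

~~~~~ end code sample ~~~~~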

So yes, a book is an amazing device. It has tremendously helped us to progress thanks to its compactness and, now, its affordability. Today, though, a book is no longer sufficient, except for the pleasure of reading itself. As a device to improve knowledge, the book is outdated. Books should instead become computational notebooks, providing environments in which to explore and to learn from the reality of the topic.

Finally, if that is truly the case, how come computational notebooks are not prevalent in every field? A simple answer would be that progress takes time and that the authors of books might not have the skills needed to design computational notebooks. If so, time will hopefully solve that issue. A more subtle challenge, though, might be that accepting to be challenged through exercises is intellectually and emotionally demanding. It requires one to be humble enough to let reality, even in the form of a simulated one, push back. It always feels easier to assume one knows than to discover that no, truly, one does not. This form of interactivity can be seen as a spectrum: from consuming a medium passively, be it a book or a movie, to consuming it actively while annotating it individually or socially, a form of hermeneutics, to finally interacting with the medium itself. That spectrum of interactivity might not be correlated solely with the depth of knowledge acquired but also with the decision fatigue one must go through in order to complete such challenges.

If computational notebooks are to replace books as the new medium for acquiring knowledge, we must remain aware of how both designing and consuming them is genuinely more demanding for everyone. Hard fun remains hard, but the agency it brings to both author and reader is a truly beautiful prospect for a learning society.

Fabien Benetou

Interfaces all the way down

How prototyping and VR go hand in hand to explore the future of text

This presentation will explore, through one online experience-as-toolkit, why interfaces are so precious.

We are constantly navigating our offline and online lives through interfaces. Some are visible and explicit, like the table of contents of a book or the APIs, or Application Programming Interfaces, of software libraries, while others, like our worldview or virtual reality headsets, remain implicit and transparent.

Designing and using interfaces is not trivial, and it is arguably among the most pressing challenges in how we interact with text in all its forms. The experience will showcase its own scaffolding in order to invite modification of itself. The objective, without being fully implemented yet, is to question whether computational notebooks truly are the future of text and, if so, and if VR is currently our most advanced interface to information, how the two can become coupled to provide the best interface for discovering and sharing knowledge.

Fabien Benetou

Stigmergy Across Media

There is nothing to do in order to think. One just has to be faced with one of the countless problems we face daily and the brain does its thing, trying to solve it however it can. The process seems transparent, simple even, because we just do it, constantly. Yet when one has to solve a complex problem, one that arguably does not "fit" in their head, thinking takes other forms than an invisible process going on inside a single head. Thinking extends itself through media, be it through voices in a heated debate, paper on a poster at an academic conference, a research paper, or a computational notebook.

As we look at these extensions of thought, be it a printed article, a data visualization, an audio recording of a debate, etc., we often look at them as records. That is only partly correct, in the sense that yes, they are traces of thought on a medium, but they are more than that, for the author at least. Beyond just a record or a trace, such an artefact is a vestige of past live thoughts in the making. What this means is that the very action of putting thoughts down on a medium, whichever it may be, helps the thinker to think further.

Feynman reacted with unexpected sharpness: “I actually did the work on the paper,” he said.

“Well,” Weiner said, “the work was done in your head, but the record of it is still here.”

“No, it's not a record, not really. It's working. You have to work on paper and this is the paper, Okay?”

James Gleick

We must stop limiting an artefact to just conveying meaning. We must stop perceiving an artefact solely as a way to convey meaning and instead always see it as an intellectual stepping stone, as it leads to a genuinely new thought that was hitherto impossible.

Writing, sketching, programming or waving hands in VR: the preferred medium does not actually matter. It is not the medium per se that makes the difference in reaching the furthest thoughts. What does matter is actively doing something about the problem on a medium, stigmergy with oneself and optionally with others. This specific act is extremely powerful and creates the potential for us, individually and collectively, to move forward, wherever we might decide to go.

Author’s original note in email

I share this because I imagine most people checking the book cover of Drawing Thought [44] would imagine it's about illustration but, just as I was arguing that the prototype itself doesn't matter, I believe the drawing itself here doesn't matter afterwards, only that it led to a genuinely new thought that was hitherto impossible.

Also, I believe drawing, in the case of Kantrowitz, or writing, in the case of Feynman, or waving hands in VR for us and others, does not actually matter. What does matter is doing something about the problem on a medium, so stigmergy with oneself and optionally others. This specific act is extremely powerful and, as you, Frode, repeat to us nearly ad nauseam when asking for articles we can then reference, creates the potential for us individually and collectively to move forward, wherever we might decide to go.

Editor’s note

Also consider Drawing a Hypothesis: Figures of Thought [45] and to a degree, Lines of thought: Drawing from Michelangelo to now [46].

Fabien Benetou

Journal : Utopiah/visual-meta-append-remote.js

Not very helpful for publication in a PDF, but it at least demonstrates a bit how part of the poster (or another sliced document) can be manipulated in social VR. It would be better if I hadn't let it go through the wall, or if another avatar were present to better illustrate the social aspect, but at least it is somehow captured.

Also, here is the code to save some metadata back, e.g. VR world position, into the visual-meta of an existing PDF on a remote server https://t.co/yYH9yuSkUs, as I noticed the other one is in the PDF of the preview of the journal issue.

It’s challenging to capture it all as it is constantly changing, but I’m keenly aware of the value of it: having traces to discuss and build back on top of, thanks to that precious feedback, constructive criticism and suggestions to go beyond.

~~~~~ code sample ~~~~~

const fs = require('fs');
const bibtex = require('bibtex-parse');
const { PdfData } = require('pdfdataextract');
const { execSync } = require('child_process');
const PDFDocument = require('pdfkit');
const express = require('express');
const cors = require('cors');

const PORT = 3000;

const app = express();
app.use(cors());
app.use('/data', express.static('/'));

const doc = new PDFDocument();
let original = '1.1.pdf';
let newfile = '1.2.pdf';
let startfile = '/tmp/startfile.pdf';
let lastpage = '/tmp/lastpage.pdf';
let stream = doc.pipe(fs.createWriteStream(lastpage));
let dataBuffer = fs.readFileSync(original);
var newdata = "";

/* client side usage :
 *
 */
function addDataToPDFWithVM(newdata) {
  PdfData.extract(dataBuffer, {
    get: { // enable or disable data extraction (all are optional and enabled by default)
      pages: true,    // get number of pages
      text: true,     // get text of each page
      metadata: true, // get metadata
      info: true,     // get info
    },
  }).then((data) => {
    // data.pages: the number of pages; data.text: an array of text pages
    // data.info: information of the pdf document, such as Author
    // data.metadata: metadata of the pdf document
    var lastPage = data.text[data.pages - 1];
    bibRes = bibtex.entries(lastPage.replaceAll("¶", ""));
    // insert the new data just before the closing visual-meta headings marker
    newContent = lastPage.replace("@{document-headings-end}",
      "@{fabien-test}" + newdata + "@{fabien-test-end}\n@{document-headings-end}");
    doc
      //.font('fonts/PalatinoBold.ttf')
      .fontSize(6)
      .text(newContent, 10, 10)
      .save();
    doc.end();
    // keep every page of the original except the last one...
    execSync('pdftk ' + original + ' cat 1-r2 output ' + startfile);
    stream.on('finish', function () {
      // ...then append the rewritten last page
      execSync('pdftk ' + startfile + ' ' + lastpage + ' cat output ' + newfile);
    });
    sseSend('/' + newfile);
  });
}

var connectedClients = [];
function sseSend(data) {
  connectedClients.map(res => {
    console.log("notifying client"); // seems to be called very often (might try to send to closed clients?)
    res.write(`data: ${JSON.stringify({status: data})}\n\n`);
  });
}

app.get('/streaming', (req, res) => {
  res.setHeader('Cache-Control', 'no-cache');
  res.setHeader('Content-Type', 'text/event-stream');
  //res.setHeader('Access-Control-Allow-Origin', '*'); // already handled at the nginx level
  res.setHeader('Connection', 'keep-alive');
  res.setHeader('X-Accel-Buffering', 'no');
  res.flushHeaders(); // flush the headers to establish SSE with client
  res.write(`data: ${JSON.stringify({event: "userconnect"})}\n\n`); // res.write() instead of res.send()
  connectedClients.push(res);
  // If client closes connection, stop sending events
  res.on('close', () => {
    console.log('client dropped me');
    res.end();
  });
});

app.get('/', (req, res) => {
  res.json('vm test');
});

app.get('/request/:id', (req, res) => {
  const { id } = req.params;
  console.log(id);
  res.json({ "status": "ok" });
  addDataToPDFWithVM(id);
});

app.listen(PORT);
console.log("listening on port", PORT);

~~~~~ end code sample ~~~~~
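For completeness, here is a browser-side sketch of how such a server could be used; this is illustrative, not part of the original journal entry, and the host name is a placeholder:

~~~~~ code sample ~~~~~

// Listen for server-sent events from /streaming, then ask the server to
// append a value to the PDF's visual-meta via /request/:id.
const source = new EventSource('https://example.org/streaming');
source.onmessage = (event) => {
  const msg = JSON.parse(event.data);
  if (msg.status) console.log('new PDF available at', msg.status); // e.g. "/1.2.pdf"
};

// Ask the server to write a value (here a VR world position) into the visual-meta block
fetch('https://example.org/request/' + encodeURIComponent('position=1,2,3'));

~~~~~ end code sample ~~~~~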

Frode Hegland

The state of my text art + the journey to VR

At the close of 2022, the year before I expect text in VR (including AR) to take off, I thought I should take stock of where my own text systems are and where I plan to go. There are a few tweaks I feel are needed in Author, particularly with the Map, some extensions with Visual-Meta and minor but useful Reader additions. What has become very apparent over the last few months is how hard it has been to envision text in VR.

Historically, the introduction of a new substrate has taken a while to be taken advantage of. This is nothing new. To truly take advantage of a new substrate, which becomes a new textual medium, nothing can replace actual use and experience to inform thinking and discussion. We are still struggling to use ‘traditional’ digital media to the full. It is no surprise that in a 360-degree, top-to-bottom, high-resolution, computationally powerful, high-speed-connected virtual environment we are still barely scratching the surface.

For reading, for me, it is about making the experience pleasant. This can be done mostly through traditional typography and layout, I think. Although reading text (in the Western tradition at least) is an operation of moving the foveal gaze from left to right, this is not the mental image the user holds; we do not read in the way of a Turing machine. We read with a mental impression of the whole document (however weak or strong) and we read with prior knowledge. We further read using different points of focus on a page, such as paragraph breaks, bolds and other layout cues, and so on.

Basic writing, typing–that is to say text entry–is also good today. I really don’t mind what we have today; even the 13” MacBook Pro is pretty great. The way I have polished and polished Author for writing, the font styles, the colours and such, has been primarily for my own preference. Others have commented and have had their opinions implemented, but the software is a testament to what I want for the basics. So yes, this is, to a large extent, done, in my opinion (for now).

What I want however, and what I think digital text can afford and XR text can unleash, is truly interactive text with flexible views. This is not a new value or vision; it goes all the way back to my philosophy of ‘Liquid Information’ and the inspiration of Doug Engelbart’s augmentations. Most of what I will describe here can and should be done in traditional digital environments, which is what I have been working on with Author and Reader. Hopefully XR will provide enough curiosity to make it happen and enough interest from the public to make it viable.

State of my art

A few specific interactions in my software Author and Reader that I’d like to highlight, as they stand at the end of 2022, include:

Much word processing and reading is quite stilted, in my opinion, and this is something I try to address with my software: to make the process flow better, to make it more liquid. I therefore outline some of the interactions currently possible in Author and Reader:

VR gives us a much wider workspace, which can truly help with editing and with seeing connections, both in our own work and in what we are reading for research. I think we need to start with the basics, allowing traditional digital documents to be accessible in VR environments, with as much metadata robustly attached as possible (of course I suggest Visual-Meta as part of the solution to this), and then have the interactions magically grow out of this document as our experience and imagination grow. Similarly, those who can imagine completely new textual worlds should do so, and in dialogue we can realise the actual Future of Text.

Making it happen

Much of what I plan to do can and should be done in 2D, but although I have built some of it, it’s hard to finance more, partly since there is only limited curiosity among users for ways to read and write outside the Microsoft Word and Apple Pages paradigm and the Google Docs online method. Of course there is brilliant software out there, such as Literature & Latte’s Scrivener, iA Writer and The Soulmen’s Ulysses for Mac. In my experience as a small, independent developer, however, it is very hard to break through to actually show people another way, which may or may not be to their taste and style. As I highlight in several places, since I feel it is so crucial, VR gives us an opportunity for renewed curiosity. I hope I can make use of this for my own perspective, my own software, and for the whole community to get to the next level of text augmentation.

Frode Hegland

The case for books

Fabien wrote a piece on the case against books and here is my small piece on the case for books.

Books, in my view, are intentionally bound collections of pages which are explicitly ‘published’ at a specific time, though not necessarily shared with a wider audience. Books are also self-contained, though they rely on explicit and implicit connections to convey meaning.

Being explicitly published is important, since books are not ‘forever documents’ like a Google Doc or that Word manuscript you have languishing in your word processor. They are defined as being done, at least for the current version.

The fact that they are published at a specific time marks them in the history of the evolution of ideas and assertions, and allows them to be cited and flexible views to be built.

Robustness

Of course books should be able to come in many formats, but a basic property of the book is that it can be self-contained and therefore, with metadata solutions such as Visual-Meta, can carry rich information about itself even if it is printed on paper.

Book Bindings

The fact that books are bound is significant. When books were only physical, the binding was not something which could be changed unless the spine was cracked or pages were photocopied or copied by hand.

Digital Bindings

Digital bindings should allow the author or publisher to produce an initial binding, but the reader should also quite easily be able to break the book up and further share, or publish, their section of the book (rights permitting, of course). Their edit of the book into a new binding could be just a single article, a single page or a collection of articles.

If the book is in a series, such as The Future of Text is, then the user should be able to bind it all into one binding, should they wish.

Or combine different sources into a binding, as a teacher might do with photocopies. Further, the user should be able to annotate the bound book as a book ‘DJ’ of sorts, where people might even subscribe to get that person’s views of books.
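Sketching such a reader-made binding as data, purely hypothetically (none of these field names belong to an existing format):

~~~~~ code sample ~~~~~

{
  "binding": "My teaching selection, autumn term",
  "boundBy": "reader",
  "items": [
    { "source": "future-of-text-vol3.pdf", "section": "The case for books" },
    { "source": "another-collection.pdf", "section": "A chapter the teacher selected" },
    { "source": "photocopied-handout.pdf", "pages": "1-3" }
  ],
  "annotations": [
    { "item": 0, "note": "Compare with Fabien's 'The Case Against Books'." }
  ]
}

~~~~~ end code sample ~~~~~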

And there you have it. We should not share information only as books, or even journals or magazines, but books do have their place and I suspect always will; their utility, though, will change with what the technologies make possible.

Future Books

There is no reason books need to stay rooted in the past; they can be set free by increasing technological opportunities. We are only just beginning to imagine books which have special characteristics in VR, without being locked into only being readable in VR. We will need to radically rethink what a book is, what a document is, what units of knowledge are, how we share, how we archive and how we interact with books and documents. And we need to keep rethinking this, so I am grateful to Fabien for his ‘provocation’.

Frode Hegland

‘Just’ more displays?

At the close of 2022, when the Quest 2 has become quite popular, the Quest Pro has just been released (I’ve used mine for one day so far) and we are all expecting the Apple HMD early next year, a comment I see every once in a while is that XR should be ‘more than just more displays’. This is because it is, it seems, relatively easy to use an HMD as a receiver of a computer’s display information, taking over the main display and adding more ‘virtual’ displays when needed. The implication is that this is simply too easy and does not take good advantage of what VR has to offer. As a huge fan of the potential of VR, I disagree. Yes, it might very well be technically easy and yes, the future will bring truly new dimensions to VR, there is no question in my mind. However, let’s not bury what is useful just because it is easy to build; not everything has to be a demonstration of technical prowess.

A key issue is that text is hard to read when it does not have a clear and plain background. This is why text floating as a hologram in sci-fi looks cool but is not practical to work with. When you have a background you in effect have a screen. And that’s ok. It does not have to be a regular-sized screen; it could be a magically resizable screen which can go anywhere and be moved anywhere without physical effort. Perhaps most importantly, eye tracking can allow screens to fade away when not needed. This can mean that the user can have the best of a focused writing, or reading, experience, but can look to the sides and have supplemental information appear, without it being intrusive.

Displays, or floating windows, of any size, which can be accessed and dismissed at a glance: that is huge.

The thing is, the way screens currently work is that it is the computer which generates extra screens for the HMD to access and display, not the applications. To have instant integration with VR/AR, the windows should be created on a per-application basis, or through WebXR, for extra screens on demand. These screens should also be addressable by the host software for display sizing and show/hide (based on eye tracking, gesture or other input).
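As an illustrative sketch of an application creating its own ‘extra display’, here using three.js to draw text onto a canvas and map it onto a plane the application can place, resize or hide; this is not a description of any shipping Author or Reader feature:

~~~~~ code sample ~~~~~

import * as THREE from 'three';

// Build a plane with a plain, readable background and some text on it.
// The application, not the OS, decides where it goes and when it is visible.
function makeTextScreen(text, widthMeters = 0.8) {
  const canvas = document.createElement('canvas');
  canvas.width = 1024;
  canvas.height = 768;
  const ctx = canvas.getContext('2d');
  ctx.fillStyle = '#fdfdf8';               // clear, plain background for legibility
  ctx.fillRect(0, 0, canvas.width, canvas.height);
  ctx.fillStyle = '#222222';
  ctx.font = '32px serif';
  ctx.fillText(text, 40, 80);

  const texture = new THREE.CanvasTexture(canvas);
  return new THREE.Mesh(
    new THREE.PlaneGeometry(widthMeters, widthMeters * 0.75),
    new THREE.MeshBasicMaterial({ map: texture })
  );
}

// Usage: const notes = makeTextScreen('Chapter 3 notes'); scene.add(notes);
// notes.visible = false;  // hide it again, e.g. when eye tracking says it is not needed

~~~~~ end code sample ~~~~~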

Such per-application screens would allow me as a developer to have my software almost instantly available in VR and AR in a more useful form, both my Author word processor and my Reader PDF viewer. I would simply add a function to the software to allow the creation of such extra displays and then, voilà, the user would have a much more useful workspace in VR.

This indicates that it’s great to have more displays, but with ‘infinite’ scale we can easily surpass human scale, and therefore we will need interactions to help us define the view flexibly.

image

Fabien Benetou responds

On the notion of windows created by the application: that exists. This is not "just" a potentially good idea anymore: I tried one three years ago https://twitter.com/utopiah/status/1164059349490249728 and a bit later again with much more demanding content https://twitter.com/utopiah/status/1261753166321909760

It has been funded by Valve and is open source https://gitlab.freedesktop.org/xrdesktop/xrdesktop

What's interesting also is to put this back in perspective. This was already implemented in 2014 https://twitter.com/utopiah/status/1560500042963771392 as Motorcar, which I discovered while trying another open source VR window manager https://twitter.com/utopiah/status/1560607202314174465, namely SimulaVR https://github.com/SimulaVR/Simula/

My point here is obviously not to criticize the idea but rather to focus on the gaps of these existing solutions.

These are desktop window managers for desktop VR. They take existing windows, e.g. a text editor or a video player, and let you organize them in space.

For you to try them you'd need a desktop computer with a relatively powerful GPU running Linux then connect your headset, Quest 2 or Quest Pro, to it.

Frode Hegland responds

Thank you Fabien, this is great to see. If it could be transparently available to desktop software developers for use in VR, that would be a huge step. I am happy that it technically works; we need to keep testing and experiencing.

Frode Hegland

Page to Page Navigation

Originally email to group:

There are different navigation issues when reading a document. One issue is that you simply want to skip to the next heading, since you are done with where you are, yet there are many pages of text to skim through before the next heading appears, judging each of them along the way, to find out whether the next section is worth reading.

I have made three brief tests using our book as example.

The issue is how to let a user jump around our book in a convenient way.

Frode Hegland


Journal : Academic & Scientific Documents in the Metaverse


Recall the world before it all became digital. You are in a meeting where you have a printout of a relevant document and a notepad. You underline relevant parts of the document, you write notes and draw diagrams in your notepad. You are also given a stack of index cards so that you can all do some brain-storming and those cards are pinned to a wall and moved around as you discuss them as a group. The facilitator even pins a few lines of string between related cards. You take a picture of this and since you don’t need the document you printed out–since the meeting went so well—you fold it into a paper airplane and fly it into the bin.

Now picture yourself in a fully digital environment where you have the same document and notepad and you use systems like Google Docs to collaborate and even a projector or a big screen for the cards to be put up and moved around by the facilitator. This is pretty much the office life many of us live in today. You can’t exactly fly the airplane to the bin, you have given up arbitrary interactions for those which are more useful in a work environment, such as the ability to instantly edit and share your information. Every environment you work in will of course have tradeoffs as to what you can do there.

So let’s go to the near future, don our AR/VR headgear and enter a meeting in the Metaverse with the same document and a notepad, in a richly interactive knowledge room. You will now be able to do magical things, things we can dream about today and even build demos of:


  • You can spread the document out and have it float in the air where you want it to.

  • Any included diagrams can be pulled out and enlarged to fill a wall, where you can discuss it and annotate it.

  • Any references from that document can be visualised as lines going into the distance and a tug on any line will bring the source into view.

  • You can throw your virtual index cards straight to a huge wall and you and the facilitator can both move the cards around, as well as save their positions and build sets of layouts.

  • Lines showing different kinds of connections can be made to appear between the cards.

  • If the cards have time information they can also be put on a timeline, if they have geographic information they can be put on a map, even a globe.

  • If there is related information in the document you brought, or in any relevant documents, it can be connected to this constellation of knowledge.


    What you can do is limited only by our imagination and the tools provided. It is also limited by the enabling infrastructures. What you cannot do is leave the room with this knowledge space intact. The actions you can perform on the knowledge elements in the room are entirely predicated on the ‘affordances’ the room gives you, to use a term from psychology which is also used in human-computer interaction. It is akin to taking a picture from one picture-editing program to another: even though it’s the same picture, you cannot expect to be able to perform the exact same functions, such as special photographic filters. The difference in the metaverse will be that the entire environment will be software, both the visual aspects of the environment and the interactions you will have, and that means it will be owned by someone. Meta owns everything you do in their Quest headsets when in their environments, such as Horizon Workrooms; you cannot perform operations which they have not made possible through programming the space they own.

    Apple and Google will try to own the knowledge spaces they provide as well.

    Consider just a few documents. Currently you cannot fully open a document into a VR space: you can either view your Mac or Windows computer screen, or you can have the document as sheets. But let’s skip ahead to when you can indeed open the document and its metadata is available to you. You open a document in the knowledge space and you:


  • Pull the table of contents to one side for easy overview.

  • Throw the glossary into another part of the room.

  • Throw all the sources of the document against a wall.

  • You manipulate the document with interactions even Tom Cruise would have been jealous of in Minority Report.

  • You read this new document with the same interactions and decide to see the two documents side by side with similarities highlighted with translucent bands, Ted Nelson style.

  • Then you have a meeting and you have to leave this knowledge room. Your next meeting is in a different type of room developed by a different company, and the work you have just done is so relevant to your next meeting that you wish you could take it across, but you cannot. The data for how the information is displayed and what interactions you can do are determined by the room you are in, since that is the software which makes the interactions possible. What we need is to develop open standards for how data, in the form of documents but also all other forms of data, can be taken into these environments, and for how the resulting views, which is to say arrangements, of this information are stored and handled. How will they be stored, how will they be accessible and who will own them? This will be for us to decide, together. Or we can let commerce fence us in.
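As a purely hypothetical illustration of what such an open interchange format might record about a knowledge room (none of these field names are an existing standard, and the DOI is a placeholder):

~~~~~ code sample ~~~~~

{
  "room": "project-review",
  "documents": [
    { "uri": "https://doi.org/10.0000/placeholder", "placement": { "wall": "north", "scale": 2.0 } }
  ],
  "cards": [
    { "text": "Open standards first", "position": [0.4, 1.6, -1.2] },
    { "text": "Who owns the layout?", "position": [0.9, 1.4, -1.2] }
  ],
  "links": [
    { "from": "cards/0", "to": "documents/0", "kind": "supports" }
  ]
}

~~~~~ end code sample ~~~~~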

    Jack Kausch

    Why We Need a Semantic Writing System

    Can there be non-sequential text?

    The Greeks thought Egyptian hieroglyphs were allegorical icons which conveyed pure ideas. This interpretation was passed down to the Renaissance, and combined with misconceptions about Chinese language. In the early modern period, Europeans dreamed of creating a universal pictographic language which, combined with an encyclopedia, would translate all knowledge into every language in the world.

    We now know that Egyptian hieroglyphs are not just pictures. They also convey sound.

    The boundary between pictographic proto-writing and what we consider writing with a grammar is the Rebus Principle, where a picture begins to stand for a sound by a process of visual punning. This was practiced in an extreme form in early Egyptian history, and gave rise to the multi-layered nature of the writing system. The best term to describe writing systems like this is not “logographic” or “ideographic” but the Mandarin 形声 “xíng shēng”, which roughly translates to “phonosemantic.”

    Both Cuneiform and Egyptian have the quality of conveying spoken speech alongside semantic classifier symbols, which disambiguate transcriptions. The convention for how to read Hieroglyphs is not justified against any one direction on the scroll or stela, but follows the rule to read “into the faces of animals” or in the opposite direction that all the characters are looking. Thus hieroglyphs can be read from right to left, left to right, top to bottom, and vice versa, depending on how they are written.

    However: every inscription is still sequential. Even boustrophedon texts from the early Greek period, which reverse direction every line, continue to convey language linearly. The reason for this is that speech, while continuous, is sequential, and text encodes speech. Text takes continuous phonological features and represents them as discrete symbols, yet the content of the representation remains sound-based. There is not, and has never been, a “non- discursive” writing system, like the Greeks once thought about Egyptian hieroglyphs.

    This is not to say that there is not great value in pictographic systems of representation which have no relation to language, such as Emoji. It is just that they are not considered writing because they have no phonological content, and as such they do not represent the grammar of natural language. Birchbark scrolls such as the Ojibwe wiigwasabak or the Mi'kmaq hieroglyphs can convey complex layers of narrative meaning, but their interpretation is limited to those already initiated into an oral tradition. What we consider text remains a function of what is speakable.

    We are entering an era that wishes to challenge the linearity of text. The distributed nature of the Web, and the “horizontal” potential of hypertext to link documents together, seems to invite a world in which the sequential nature of the printed book is altered. What this change amounts to is another transformation in documentation. The codex made very different social modes of organization possible from the scroll (indeed, it may have been partly responsible for the rise of Christianity) and printing transformed the relations between individuals and the book. The nature of documents, including how they are stored and disseminated, will now inevitably change.

    There is a limit, however, to how non-sequential we can make text in its own right, for the reasons discussed above. Emoji appear to offer an interesting alternative, yet for all their expressive power, like most pictographic symbol sets, they remain ambiguous. Icons provide an ability to convey certain kinds of information, and even establish natural classes. We encode them with the same standards as text, and they are treated as text-like entities. Yet metaphorical combinations of icons can have many interpretations, and there are too many things in the world to create an icon for every one. There is thus no small inventory of icons which will satisfy the constraint of being able to combine them into every possible concept.

    Our new tools have nearly endless potential for the representation of mathematical, particularly geometric, entities. Text on the other hand is dependent on standards which encode individual characters, and in turn influence how the text is formatted, and what interfaces can be made for users to work with it, i.e., to read and write. This is foreign to our visual interfaces, whether phones or monitors, which, composed of pixels, are ideally suited to displaying graphics and shapes.

    To return now to the European dream of a universal character language from the Enlightenment: where such a writing system is similar to emojis and geometry, it loses many of the characteristics we ascribe to text, because it transcends the limits of language. It is non- sequential, but it is too vague to consistently convey the writer’s intent. Where such a writing system conveys linguistic and grammatical information, it is constrained by the phonological traits of each language, and cannot be said to be “universal.” This is the conventional text we already have.

    The answer is probably somewhere in between, similar to what the Egyptians discovered all those years back during the period between the reign of Mena-Narmer and Djoser. Some combination of sounds and meanings could serve as a mnemonic device to clarify both categories, and potentially integrate well into current speech synthesis technology. If there can be non-sequential text it will be found at the intersection of the visual image, geometry, well-formed semantic logic, and phonological natural language.

    Jad Esber

    Monthly Guest Presentation : 21 February 2022

    Video: https://youtu.be/i_dZmp59wGk?t=513

    Jad Esber: Today I’ll be talking a little bit about both, sort of, algorithmic, and human curation. I’ll be using a lot of metaphors, as a poet that’s how I tend to explain things. The presentation won’t take very long, and I hope to have a longer discussion.

    On today’s internet, algorithms have taken on the role of taste-making, but also the authoritative role of gatekeeping through the anonymous spotlighting of specific content. If you take the example of music, streaming services have given us access to infinite amounts of music. There are around 40,000 songs uploaded on Spotify every single day. And given the amount of music circulating on the internet, and how it’s increasing all the time, the need for compression of cultural data and the ability to find the essence of things becomes more focal than ever. And because automated systems have taken on that role of taste-making, they have a profound effect on the social and cultural value of music, if we take the example of music. And so, it ends up influencing people’s impressions and opinions towards what kind of music is considered valuable or desirable or not.

    If you think of it from an artist’s perspective, despite platforms subverting the power of labels, who were our previous gatekeepers and taste-makers, and claiming to level the playing field, they’re creating new power structures. Algorithms and editorial teams control what playlists we listen to, to the point where artists are so obsessed with playlist placement that it’s dictating what music they create. So if you listen to the next few new songs that you hear on a streaming service, you might observe that they’ll start with a chorus, they’ll be really loud, they’ll be dynamic, and that’s because they’re optimising for the input signals of algorithms and for playlist placement. And this is even more pronounced on platforms like TikTok, which essentially strip away all forms of human curation. And I would hypothesise that, if Amy Winehouse released Back to Black today, it wouldn’t perform very well because of its pacing, the undynamic melody. It wouldn’t have pleased the algorithms. It wouldn’t have sold the over 40 million copies that it did.

    And another issue with algorithms is that they churn out standardised recommendations that flatten individual tastes, encourage conformity, and strip listeners of social interaction. We’re all essentially listening to the same songs.

    There are actually millions of songs on Spotify that have been played only partially, or never at all. And there’s a service, which is kind of tongue-in-cheek, called ‘Forgotify’, that exists to give the neglected songs another way to reach you. So if you are looking for a song that’s never been played, or hardly been played, you can go to ‘Forgotify’ to listen to it. So, the answer isn’t that we should eliminate algorithms or machine curation. We actually really need machine and programmatic algorithms to scale, but we also need humans to make it real. So, it’s not one or the other. If we solely rely on algorithms to understand the contextual knowledge around, let’s say, music, that’ll be impossible. Because, at present, without human effort, popularity bias, which means only recommending popular content, and the cold start problem are unavoidable in music recommendation, even with the very advanced hybrid collaborative filtering models that Spotify employs. So pairing algorithmic discovery with human curation will remain the only option, with human curation allowing for the recalibration of recommendations through contextual reasoning and sensitivity, qualities that only humans really have. Today this has caused the formation of new power structures that place the careers of emerging artists, let’s say on Spotify, in the hands of a very small set of curators who sit at the major streaming platforms.
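    As a rough illustration of the hybrid approach Jad describes, the short sketch below blends a personalised collaborative-filtering score with raw popularity. It is a minimal, hypothetical example (the function name, the weighting, and the data are assumptions for illustration, not Spotify’s actual system): when a new listener has no personalised signal, the ranking collapses to pure popularity, which is exactly the popularity-bias and cold-start behaviour mentioned above.

    from collections import defaultdict

    def hybrid_scores(cf_scores, play_counts, alpha=0.7):
        # cf_scores: song -> personalised collaborative-filtering score (0..1);
        #            empty for a brand-new user (the cold-start case).
        # play_counts: song -> total plays across all users.
        # alpha: weight given to the personalised signal.
        max_plays = max(play_counts.values())
        popularity = {song: plays / max_plays for song, plays in play_counts.items()}
        scores = defaultdict(float)
        for song in play_counts:
            scores[song] = alpha * cf_scores.get(song, 0.0) + (1 - alpha) * popularity[song]
        return sorted(scores, key=scores.get, reverse=True)

    # A brand-new listener has no collaborative-filtering signal, so the
    # ranking is driven entirely by popularity.
    print(hybrid_scores({}, {"hit single": 1_000_000, "neglected track": 3}))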

    Spotify actually has an editorial team of humans that adds context around algorithms and curates playlists. So they’re very powerful. But as a society, we continuously look to others, both to validate specific tastes and to inspire us with new ones. If I were to ask you how you discovered a new article or a new song, it’s likely that you heard of it from someone you trust.

    People have continuously looked to tastemakers to provide recommendations. But part of the problem is that curation still remains invisible labour. There aren’t really incentive structures that allow curators to truly thrive. And a lot of blockchain advocates, people who believe in Web3, think there is an opportunity for that to change with this new tech. But beyond this, there is also a really big need for a design system that allows for human-centred discovery. A lot of people have tried, but nothing has really emerged.

    I wanted to use a metaphor and sort of explore what bookshelves represent, as a potential example of an alternative design system for discovery, human-curated discovery. So, let’s imagine the last time you visited a bookstore. The last time I visited a bookstore, I might have gone in to search for a specific book. Perhaps it was to seek inspiration for another read. I didn’t know what book I wanted to buy. Or maybe, like me, you went into the bookstore for the vibes, because the aesthetic is really cool, and being in that space signals something to people. This bookstore over here is one I used to frequent in London. I loved just going to hang out there because it was awesome, and I wanted to be seen there.

    But similarly, when I go and visit someone’s house, I’m always on the lookout for what’s on their bookshelf, to see what they’re reading. That’s especially the case for someone I really admire or want to get to know better. And by looking at their bookshelf, I get a sense of what they’re interested in, who they are. But it also allows for a certain level of connection with the individual that’s curating the books. They provide a level of context and trust that the things on their bookshelves are things that I might be interested in. And I’d love to, for example, know what’s on Frode’s bookshelf right now. But there’s also something really intimate about browsing someone’s bookshelf, which is essentially a public display of what they’re consuming or looking to consume. So, if there’s a book you’ve read, or want to read, it immediately triggers common ground. It triggers a sense of connection with that individual. Perhaps it’s a conversation: I was browsing Frode’s bookshelf and I came across a book that I was interested in, and perhaps I start a conversation around it. So, along with discovery, the act of going through someone’s bookshelf allows for that context, for connection, and then the borrowing of the book creates a new level of context. I might borrow the book and have the opportunity to read through it, live through it, and then go back and have another conversation with the person that I borrowed it from. And so recommending a book to a friend is one thing, but sharing a copy of that book, in which maybe you’ve annotated the text that stands out to you, or highlighted key parts of paragraphs, that’s an entirely new dimension of connection. What stood out to you versus what stood out to them. And it’s really important to remember that people connect with people at the end of the day, and not just with content. Beyond the books on display, the range of authors matters. And even the effort to source the books matters. Perhaps it’s an early edition of a book. Or you had to wait in line for hours to get an autographed copy from that author.

    That level of effort, or the proof of work to kind of source that book, also signals how intense my fanship is, or how important this book is to me.

    And all that context is really important. And what’s really interesting is that the bookshelf is a record of who I was, and also who I want to be. And I really love this quote from Inga Chen; she says, “What books people buy are stronger signals of what topics are important to people, or perhaps what topics are aspirationally important, important enough to buy a book that will take hours to read or that will sit on their shelf and signal something about them.” Compare that to a platform like Pinterest, for example: Pinterest exists to curate not just what you’re interested in right now, but what’s aspirationally interesting to you. It’s the wedding dresses that you want to buy or the furniture that you want to purchase. So there’s this level of who you want to become, as well, that’s spoken to through that curation of books that lives on your bookshelf.

    I wanted to come back and connect this with where we’re at with the internet today, and this new realm of ownership and what people are calling social objects. If we take this metaphor of a bookshelf and apply it to any other space that houses cultural artefacts, the term people have been using for these cultural artefacts is social objects. We can think of, beyond books, the shirts we wear, the posters we put on our walls, the souvenirs we pick up: they’re all, essentially, social objects. And they showcase what we care about and the communities that we belong to. And, at their core, these social objects act as a shorthand to tell people about who we are. They are like beacons that send out a signal for like-minded people to find us. If I’m wearing a band shirt, then other fans of that artist, that band, will perhaps want to connect with me. On the internet, these social objects take the form of URLs, of JPEGs, articles, songs, videos, and there are platforms like Pinterest, or Goodreads, or Spotify, and countless others, that centre around some level of human-curated discovery, and community, around these social objects. But what’s really missing from our digital experience today is the aspect of ownership that’s rooted in the physicality of the books on your bookshelves. We might turn to digital platforms as sources of discovery and inspiration, but until now we haven’t really been able to attach our identities to the content we consume in a similar way to how we do with physically owned goods. And part of that is the public histories that exist around the objects we own, and the context that isn’t really provided in the limited UIs that a lot of our devices allow us to convey. So, a lot of what’s happening today around blockchains is focused on how we can track provenance, or try to verify that someone was the first to something, and how we can, in a way, track a meme through its evolution. And there are elements of context that are provided through that sort of tech, although limited.

    There is discussion around ownership as well: who owns what, but also portability. The fact that I am able to take the things that I own with me from one space to another means that I’m no longer leaving fragments of my identity siloed in these different spaces, but that there’s a sense of personhood. And so these questions of physical ownership are starting to enter the digital realm. And we’re at an interesting time right now, where I think a lot of design systems will start to pop up that emulate a lot of what it feels like to walk into a bookstore, or to browse someone’s bookshelf. And so, I wanted to leave us with that open question, and that provocation, and transition to more of a discussion. That was everything that I had to present.

    So, I will pause there and pass it back to Frode, and perhaps we can just have a discussion from now on. Thank you for listening.

    Dialogue

    https://youtu.be/i_dZmp59wGk?t=1329

    Frode Hegland: Thank you very much. That was interesting and provocative. Very good for this group. I can see lots of heads are wobbling, and it means there’s a lot of thinking. But since I have the mic I will do the first question, and that is:

    Coming from academia, one thing that I’m wondering what you think about, and I’m also wondering what the academics in the room might think: references as bookshelf, or references as showing who you are, basically trying to cram things in there not necessarily to support your argument, but to support your identity. Do you have any comments on that?

    Jad Esber: So, I think that’s a really interesting thought. When I was thinking of bookshelves, they do serve almost like references, because of the thoughts and the insights that you share. If you’re sitting in the bedroom, or the living room, and you’re sharing some thoughts, perhaps you’re having a political conversation, and you point at a book on your shelf that perhaps you’ve read, that’s like, “Hey, this thought that I’m sharing, the reference is right there.” It does sort of add, or provide, a baseline level of trust, in that this insight or thought has been memorialised in this book that someone chose to publish, and it lives on my bookshelf. There is some level of credibility that’s built by attaching your insights or thoughts to that credible source. So, yeah, there’s definitely a tie between references, I guess, and citations, and the physical setting of having a conversation and a book living on your bookshelf that you point to. I think that’s an interesting connection, beyond books just existing as social objects that speak to your identity. That’s another extension as well. I think that’s really interesting.

    Frode Hegland: Thanks for that. Bob. But afterward, Fabien, if you could elaborate on your comment in the chat, that would be really great. Bob, please.

    Video: https://youtu.be/i_dZmp59wGk?t=1460

    Bob Horn: Well, the first thing that comes to mind is: have you looked at three-dimensional spaces on the internet? For example, Second Life, and what do you think about that?

    Jad Esber: Yeah. I mean, part of what people are proposing for the future of the internet, as I’m sure you guys have discussed in past sessions, is the metaverse, right? Which is essentially this idea of co-presence, and some level of physicality, bridging the gap between being co-present in a physical space and in a digital space. Second Life was a very early example of some version of this. I haven’t spent too many iterations thinking about virtual spaces and whether they are apt at emulating the feeling of walking into a bookstore, or leafing through a bookshelf. But if you think about the sensory experience of being able to browse someone’s bookshelf, there are, obviously, parallels to the visual sensory experience. You can browse someone’s digital library. Perhaps there’s some level of the tactile, you can pick up books, but it’s not really the same. It’s missing a lot of the other sensory experiences, which provide a level of context. But it certainly allows for that serendipitous discovery that a feed doesn’t. The feed dynamic isn’t necessarily the most serendipitous. It is to a degree, but it’s also very crafted. And there isn’t really a level of play when you’re going around and looking at things, the way there is with a bookshelf, or in a bookstore. And so, Second Life does allow for that: moving around, picking things up and exploring, as you do in the physical world. So, I think it’s definitely bridging the gap to an extent, but missing a lot of the sensory experiences that we have in the physical world. I think we haven’t quite thought about how to bridge that gap. I know there are projects that are trying to make our experience of digital worlds more sensory, but I’m not quite sure how close we’ll get. So, that’s my initial thought, but feel free to jump in, by the way, I’d welcome other opinions and perspectives as well.

    Bob Horn: We’ve been discussing this a little bit, partially at my initiative, and mostly at Frode’s urging us on. And I haven’t been in Second Life for, I don’t know, six, or seven, or eight years. But I have a friend who has, who’s there all the time, and says that there are people who have their personal libraries there. That there are university libraries. There are whole geographies, I’m told, of libraries. So, it may be an interesting angle, at some point. And if you do look into it, I’d be interested, of course, in what you come up with.

    Jad Esber: Totally. Thank you for that pointer, yeah. There’s a multitude of projects right now that focus on extending the Second Life idea, bringing in concepts around ownership, and physicality, and interoperability, so that the things you own in Second Life you can take with you, from that world, into others. Which sort of does bridge the gap between the physical world and the digital, because what you own doesn’t live within that siloed space, but actually is associated with you, and can be taken from one space to another. It’s very early in building that out, but that’s a big promise of Web3. There are a lot of hands, so I’ll pause there.

    Frode Hegland: Yeah, Fabien, if you could elaborate on what you were talking about, the virtual bookshelf.

    Fabien Benetou: Yep. Well, actually it will be easier if I share my screen. I don’t know if you can see. I have a wiki that I’ve been maintaining for 10-plus years. And on top, you can see the visualisation of the edits since I started, for this specific page. And these pages, as I was saying in the chat, are sadly out of date; it’s been 10 years, actually, just for this page. But I was listing the different books I’ve read, with the date, and what page I was on. And if I take a random book, I have my notes, the (indistinct), and then the list of books that are related, let’s say, to that book. I don’t have it in VR or in 3D yet, but from that point it definitely wouldn’t be too hard, so... And I was thinking, I personally have a, kind of, (indistinct) where they’re hidden, but I have some books there and I have a white wall there, and I love both, because of what they bring back, whether I’m in someone else’s room or my own room. Usually, if I’m in my own room, I’m excited by the books I’ve read or the ones that I haven’t read yet. So it brings a lot of excitement. But also, if I have a goal in mind, a task at hand, let’s say, a presentation on Thursday, a thing that I haven’t finished yet, then it pulls me to something else. Whereas if I have the white wall it’s like a blank slate. And again, if I need to, I pull some references on books and whatnot. So, I always have that tension. And what usually happens is, when I go to a physical bookstore, or library, or a friend’s place, there is indeed serendipity: it’s not the book I came for, it’s the one next to it. Because I’m not able to make the link myself, and usually, if the curation has been done right, and arguably the algorithm, if it’s not actually computational, let’s say, if you use the Dewey annotation or basically any other annotation system in order to sort the books or their references, then there should be some connections that were not obvious in the first place. So, to me, that’s the most, I’d say, exciting aspect of that.

    Jad Esber: This is amazing, by the way, Fabien. This is incredible that you’ve built this over a decade, that’s so cool. I think what’s also really interesting, to extend on that thought and just to kind of ‘yes, and’ it: I think what you’ve built is very utilitarian, but the existence of the bookshelf as an expression of identity is also interesting. So, beyond just organising the books, and keeping them, storing them in a utilitarian way, them serving as signals of your identity is, I think, really interesting. A lot of platforms today cater to the utility. If you think about Pocket, or even Goodreads to an extent, there is potentially an identity angle to Goodreads, versus Tumblr back in the day, or Myspace, or (indistinct), which were much more identity-focused. So there is this distinction between the utilitarian, organising, keeping things, annotating, etc. for yourself, and this identity element of, by curating I am expressing my identity. And I think that’s also really interesting.

    Frode Hegland: Brandel, you’re next. But I just wanted to highlight, for the new people in the room, including you, Jad: this community, at the moment, is really leaning towards AR and VR, but in a couple of years’ time, who knows what can happen? And that also includes projections and all kinds of different things, so we really are thinking about being connected with the physical, but also the virtual on top. Brandel, please.

    Brandel Zachernuk: So, I was really hooked when you said that you like to be seen in that London bookstore. And it made me think about the fact that on Spotify, on YouTube, on Goodreads for the most part, we’re not seen at all, unless we’re on the specific, explicit page that is there for the purposes of representing us. So, YouTube does have a profile page. But nothing about the rest of our onward activity is actually represented within the context of that. If you compare that to being in the bookstore, you have your clothes on, you have your demeanour, and you can see the other participants. There’s a mutuality to being present in it, where you get to see that, rather than merely that a like button maybe is going up in real-time. And so, I’m wondering what kind of projective representation you feel we need within the broader Web. Because even making a new curation page still silos that representation within an explicit place, and doesn’t give you the persistent reference that is your own physicality, your body wandering around the various places that you want to be at and be seen at. Now, do you see that as something that there’s a solve to? Or how do you think about that?

    Jad Esber: Yeah, I think Bob alluded to this to a degree with the example of Second Life. I think the promise of co-presence in the digital world is really interesting, and could potentially solve part of this. I also go to cafes not just because I like the coffee, but because I like the aesthetic, and for the opportunity to rub shoulders with other clientele that might be interesting, because this cafe is frequented by this sort of folk. And that doesn’t exist online as much. I mean, perhaps, if you’re going to a forum, and you frequent a specific subreddit, there is an element of, “Oh, I’ll meet these types of folks in this chat group, and perhaps I’ll be able to converse with these types of folks and be seen here.” But how long you spend there, how you show up there, beyond just what you write, that all matters. And how you’re browsing. There are a lot of elements that are really lost in current user interfaces. So, I think, yeah, Second Life-like spaces might solve for that, and allow us to present other parts of ourselves in these spaces, and measure time spent, and how we’re presenting, and what we’re bringing. But, yeah. I’m also fascinated by this idea of just existing in a space as a signal for who you are. And yeah, I also love that metaphor. And again, this is all stuff that I’m actively thinking about and would love any additional insights; if anyone has thoughts on this, please do share, as well. This is, by no means, just a monologue from my direction.

    Frode Hegland: Oh, I think you’re going to get a lot of perspectives. And I will move into... We’re very lucky to have Dene here, who’s been working with electronic literature. I will let her speak for herself, but what they’re doing is just phenomenally important work.

    Dene Grigar: Thank you. That’s a nice introduction. I am the managing director, one of the founders, and the curator of The NEXT. And The NEXT is a virtual museum, slash library, slash preservation space that contains, right now, 34 collections of about 3,000 works of born-digital art and expressive writing. What we generally call ‘electronic literature’. But I’ve unpacked that word a little bit for you. And I think this corresponds a little bit to what you’re talking about, in that when I collect, when I curate work, I’m not picking particular works to go in The NEXT, I’m taking full collections. So, artists turn over their entire collections to us, and then that becomes part of The NEXT collections. So it’s been interesting watching what artists collect. It’s not just their own works, it’s the works of other artists. And the interesting historical, cultural aspect of it is to see, in particular time frames, what artists collected and who they were collecting: artists before the advent of the browser, for example, Michael Joyce, Stuart Moulthrop, Voyager, stuff like that. Then the Web, the browser, and the net art period, and the rise of Flash, looking to see that I have five copies of Firefly by Deena Larsen because people were collecting that work. Jason Nelson’s work. A lot of his games are very popular. So it’s been interesting to watch this kind of triangulation of what becomes popular, and then the search engine that we built pulls that up. It lets you see that, “Oh, there’s five copies of this. There’s three copies of that. Oh, there’s seven versions of Michael Joyce’s afternoon, a story.” To see what’s been so important that there have even been updates, so that it stays alive over the course of 30 years. One other thing I’ll mention, back to your earlier comment: I have a whole print book library in my house. Despite the fact I was in a flood in 1975 and lost everything I owned, I rebuilt my library, and I have something like 5,000 volumes of books; I collect books. But it’s always interesting for me to have guests at my house and they never look at my bookshelf. And the first thing I do when I go to someone’s house and see books is ask, “Oh, what are you reading? What do you collect?” And so, looking at The NEXT and all those 3,000 works of art, and then my bookshelf, and realising that people really aren’t looking and thinking about what this means: the identity for the field, my own personal taste, I call it my own personal taste, which is very diverse. So, I think there’s a lot to be said about people’s interest in this. And I think it’s that kind of intellectual laziness that drives people to just allow themselves to be swept away by algorithms, and not intervene on their own and take ownership over what they’re consuming. And I’ll leave it at that. Thank you.

    Jad Esber: Yeah, I love that. Thank you for sharing. And that’s a fascinating project, as well. I’d love to dig in further. I think you bring up a really good point around shared interests being really key in connecting the right type of folks, who are interested in exploring each other’s libraries. Because not everyone that comes into my house is interested in the books that I’m reading; perhaps they’re from a different field, they’re just not as curious about the same fields. But there is a huge number of people that potentially are. I mean, within this group, we’re all interested in similar things. And we found each other through the internet. And so, there is this element of: what if the people walking into your library, Dene, are also folks that share the same interests as you? Who would actively look and browse through your library and are deeply interested in the topics that you’re interested in. So there is something to be said around how we can make sure that people who are interested in the same things are walking into each other’s spaces. And interest-based graphs exist on the Web. Thinking about who is interested in what, and how we can go into each other’s spaces and browse, or collect, or curate, or create, is part of what many algorithms try to do, for better or for worse. But they sometimes leave us in echo chambers, right? We’re in one neighbourhood and can’t leave, and that’s part of the problem. But yeah, there is something to be said about that. And just to go back to the earlier comment that Dene made around the inspirations behind artists’ work: I would love to be able to explore what inspired my favourite artist’s music, and what went into it, and go back and listen to that. And I think part of, again, Web3’s promise is this idea of provenance, seeing how things have evolved and how they’ve come to be. And crediting everyone in that lineage. So, if I borrowed from Dene’s work, and I built on it, and that was part of what inspired me, then she should get some credit. And that idea of provenance, and lineage, and giving credit back, and building incentive systems that allow people to build works that will inspire others to continue to build on top of my work, is a really interesting proposal for the future of the internet. And so, I just wanted to share that as well.

    Frode Hegland: That’s great. Anything back from you, Dene, on that? Before we move to Mark?

    Dene Grigar: Well, I think provenance is really important. And what I do in my own lab is to establish provenance. Even if you go to The NEXT and you look at the works, it’ll say where we got the work from, who gave it to us, the date they gave it to us, and if there’s some other story that goes with it. For example, I just received a donation from a woman whose daughter went to Brown University and studied under Coover, Robert Coover. And she gave me a copy of some of the early hypertext works, and one was Michael Joyce’s afternoon, a story, and it was signed. The little floppy disk was signed, on the label, by Michael, and she said, “I didn’t notice there was a signature. I don’t know why there’d be a signature on it.” And, of course, the answer is, if you know anything about the history, that Joyce and Coover were friends, there’s this whole line of this relationship, and Coover was the first to review Michael Joyce, and made him famous in The New York Times, in 1992. So, I told her that story, and she’s like, “Oh, my god. I didn’t know that.” So, just having that story for future generations to understand the relationships, and how ideas and taste evolve over time, and who were the movers and shakers behind some of that interest. Thank you. https://the-next.eliterature.org/.

    Frode Hegland: Dene, this is really grist for the mill of a lot of what we’re talking about here, with Jad’s notions of identity sharing via the media we consume, and a lot of the visualisations we’re looking at in VR. One of the things we’ve talked about over the last few weeks is guided tours of work, where you could see the hands of the author, or somebody, pointing out things, whether it’s a mural, or a book, or whatever. And then, to be able to find a way to have the meta-information you just talked about enter the room; maybe it could simply be recorded as you saying it, and that recording tagged to be attached to these works. Many wonderful layers, I could go on forever. And I expect Mark will follow up.

    Mark Anderson: Hi. These are really reflections, more than anything else. Because one of the things that really brought me up short was this idea of books being a performative thing, which I still can’t get my head around. It’s not something I’ve encountered, and I don’t see it reflected in the world in which I live. So maybe there’s a generational drift in things. For instance, from behind me you might guess, I suppose, that I’m a programmer. Actually what that shows is me trying to understand how things work, and I need those books that close to my computer. My library is scattered across the house, mainly to distribute weight through a rather old, crumbly Victorian house. So, I have to be careful where we put the bookcases. I’m just really reflecting on how totally alien I find the notion. I struggle to think of a time I ever placed a book with the intention that it would be seen in that position by somebody else. And this is not a pushback, it’s just my reflection on what I’m hearing. I find it very interesting because it had never occurred to me. I never, ever thought of it in those terms. The other sad thing about that is: are the books merely performative, or is it the content that matters? I mean, one of the interesting things I’ve been trying to do in this group is to find ways just to share the list of the books that are on my shelf. Not because they are any reflection of myself, but literally because I actually have some books that are quite hard to find, and people might want to know that it was possible to find a copy, and whether they need to come and physically see it, or we could scan something. The point is, “No, I have these. This is a place you can find this book.” And it’s interesting that that’s actually really hard to do. Most systems don’t help because, I mean, the tragedy of recommender systems is they make us so inward-looking. So, instead of actually rewarding our curiosity, or making us look across our divides, they basically say, “Right. You lot are a bunch. You go stand over there.” Job done, and the recommender system moves on to categorising the next thing. So, if I try to read outside my normal purview, I’m constantly reflecting on the fact that the recommender system is one step behind, saying, “Oh, right. You’re now interested in…” No, I’m not. I’m trying to learn a bit about it. But certainly, this is not my area of interest in the sense that I now want to be amidst lots of people who like this. I’m interested in people who are interested by it, but I think those are two very different things. So, I don’t know the answers, but I just raise those, I suppose, as provocations.

    Because that’s something that, at the moment, our systems are really bad at: allowing us to share content other than as a sort of humblebrag, or as part of your beautifully curated life on Pinterest, or whatever. Anyway, I’ll stop there.

    Jad Esber: Yeah, thank you for sharing that. I think it does exist on a spectrum, the identity-expressive versus the utilitarian need that it solves. Maybe the example of clothing helps illustrate it a little more. If we’re wearing a t-shirt, perhaps there’s a utilitarian need, but there is also a performative, or identity-expressive, need that it solves; the way we dress speaks to who we are as well. So the notion of a social object being identity-expressive is what I was trying to convey. Think about magazines on a coffee table, or the art books that live scattered around your living room, perhaps. That is trying to signal something about yourself. The magazines we read as well: if I’m reading Vogue, I’m trying to say something about who I am, and what I’m interested in reading. The Times, or The Guardian, or another newspaper is also very identity-expressive. And taking it out on the train and making sure people see what I’m reading is also identity-expressive. So, everything around what we consume, and what we wear, and what we identify with being a signal of who we are, is what I was trying to convey there. But I think you make a very good point. The books next to your computer are there because they’re within reach. You’re writing a paper about something and it’s right there. And so, there is a utilitarian need to the way you organise your bookshelf.

    The way you organise your bookshelf can be identity-expressive or utilitarian. I’ll give you another example. On my bookshelf, I have a few books that are turned face forward, and a few that I don’t really want people to see, because I’m not really that proud of them. And I have a book that’s signed by the author, and I’ll make sure it’s really easy for people to open it and see the signature. And so, there is an identity-expressive element to the way I organise my bookshelves as well, not just a utilitarian one. So, that’s another point to illustrate that angle.

    Mark Anderson: To pull us back to our sub-focus on AR and VR, it just occurred to me, with the (indistinct) reminder Dene was talking about, that people don’t look at the bookshelves. I’m thinking, yeah, and, not to say I miss it, but it happens less frequently now that the evening ends up with a dinner table just loaded with piles of books that have been retrieved from all over the house and are actually part of the conversation that’s going on. And one thing it would be nice for some of our new tools to help us recreate, especially if we’re not meeting in the same physical space, is that element of recall of these artefacts, or at least some of the pertinent parts of the content they’re in. It would be really useful to have, because the fact that you bothered to walk up two flights of stairs or something to go and get some book off the top shelf is, in a sense, part of the conversation going on. I think that’s quite interesting and something we’ve sort of lost. Anyway, I’ll let it carry on.

    Frode Hegland: It’s interesting to hear what you say there, Mark, because in the calls we have, you’re the one who most often will say, “Look, the book arrived. Look, I have this copy now.” And then we all get really annoyed at you because we have to buy the same damn book. So, I think we’re talking about different ways of sharing, and to different audiences, not necessarily to dinner guests. But for your community, for this thing, you’re very happy to share. Which is interesting. There are also two points, to use my hand in the air here. One of them is that clothing came up as well. And some kind of study I read showed that we don’t buy clothing we like, we buy the kind of clothing we expect people like us to buy. So, even somebody who is really “I don’t care about fashion” is making a very strong fashion statement. They’re saying they don’t care. Which is anti-snobbery, maybe, you could say. I’m wondering how that enters into this. But also, when we talk about curation, it’s so fascinating how, in this discussion, music and books are almost interchangeable from this particular aspect. And what I found is, I don’t subscribe to Spotify, I never have, because I didn’t like the way the songs were mixed. But what I do really like, and I find amazing, is YouTube mixes. I pay for YouTube Premium so I don’t have the ads. That means I’ll have hour, hour-and-a-half, maybe two-hour mixes by DJs who really represent my taste. Which is a fantastic new thing. We didn’t have that opportunity before. So that is a few people curating. And there, the YouTube algorithm tends to point me in the direction of something similar. But also, this is music for when I work. It’s not for finding new interesting jazz. When I play this music when I’m out driving with my family, I hear how incredibly inane and boring it is. It is designed for backgrounding. So the question then becomes, maybe: do we want to have different shelves? Different bookshelves for different aspects of our lives? And then we’re moving back into the virtuality of it all. That was my hand up. Mark, is your hand up for a new point? Okay, Fabien?

    Fabien Benetou: Yeah a couple of points. The first to me, the dearest to me, let’s say, is the provenance aspect. I’m really pissed or annoyed when people don’t cite sources. I would have a normal conversation about a recipe or anything completely casual, doesn’t have to be academic, and if that person didn’t invent it themselves, I’m annoyed if there is not some way for me to look back to where it came from. And I think, honestly, a lot of the energy we waste as a species comes from that. If you’re not aware, of course, of the source, you can’t cite it.

    But if you learned it from somewhere, not doing that work is, I think, really detrimental. Because we don’t have to have the same thought twice if we don’t want to. And if we just have it again, it’s just such a waste of resources. And, especially since I’m not a physician and I don’t specialise in memory, this is just what I understood: source memory is the type of memory where you recall not the information, but where you got it from. And apparently, it’s one of the most demanding. So for example, you learn about, let’s say, a book, and you know somebody told you about that book; that’s going to be much harder, but eventually, if you don’t remember the book itself but you remember the person who told you about it, you can find it again.

    So, basically, if as a species we have such a hard time providing sources and understanding where something comes from, I think it’s really terrible. It does piss me off, to be honest. And I don’t know if metadata, in general, is an answer, or having some properly formatted representation of it; I’m not going to remember the ISBN of a book off the top of my head in a conversation. But I’m wondering, let’s say, can blockchain solve that? Can Web3 solve it? Especially since you mentioned, let’s say, a chain of value. If you have a source or a reference to someone else whose work you’re using, it is fair to attribute it back to them; they were part of how you came to produce something new. So, I’m quite curious about where this is going to go.

    Jad Esber: Yes, thank you for that question. And, yeah. I think there are a few points. First, I’m going to just comment really quickly on this idea of provenance. And I want to jump back to answer some of Frode’s comments as well. But I think one thing that you highlighted, Fabien, is how hard it is for us to remember where we learned something or got something. And part of the problem is that so much of citing and sourcing is so proactive and requires human effort. What if things were designed so that it was just built into the process?

    One of the projects I worked on at YouTube was a way for creators to take existing videos and build on them. So, remixing, essentially. And in the process of creating content, I’d have to take a snippet and build on it. That is built into the creation process: the provenance, the citing, are a very natural part of how I’m creating content. TikTok is really good at this too. And so I wonder if there are, again, design systems that allow us to build in provenance and make it really user-friendly and intuitive, to remove the friction around having to remember the source and cite it. We’re lazy creatures. We want that to be part of our flow. TikTok’s duets feature and stitching are brilliant: they build provenance into the flow. And so, that’s just one thought. In terms of how blockchains help: what is a blockchain other than a public record of who owns what, and how things are being transacted? If we go back to TikTok stitching, or quoting a specific part of a YouTube video and building on it: if that chain of events was tracked and publicly accessible, and there was a way for me to pass value down that chain to everyone who contributed to this new creative work, that would be really cool. And that’s part of the promise. This idea of keeping track of how everything is moving, and being able to then distribute value in an automated way. So, that’s sort of addressing that point. And then really quickly on your earlier comments, Frode, and perhaps tying in with some of what we talked about with Mark around identity expression: I think this all comes back to the human need to be heard, and understood, and seen. And there are phases in our life where we’re figuring out who we are, and we don’t really have our identities figured out yet. So, if you think about a lot of teenagers, they will have posters on their walls to express what they’re consuming or who they’re interested in. They are figuring out who they are. And part of them figuring out who they are is talking about what they’re consuming, and through what they’re consuming, they’re figuring out their identities. I grew up writing poetry on the internet because I was trying to express my experiences, and figure out who I was. And so, what I’m trying to say is that there will be periods of our life where the need to be seen, heard, and understood, or where we’re figuring out and forming our identities, is a bigger need. And so, the identity-expressive element of para-socially expressing or consuming plays a bigger part. And then, perhaps, when we’re more settled with our identity, and we’re not really looking to perform it, that becomes more of a background thing. Although it doesn’t completely disappear, because we are always looking to be heard, seen, and understood. That’s very human. So, I’ll pause there. I can keep going, but I’ll pause because I see there are a few other hands.
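    The chain-of-value idea sketched above can be made concrete with a small, assumed data model: each work records what it was derived from, and a payment for a new work is split back along that provenance chain. The class, the even split, and the names below are illustrative only, not any particular blockchain’s mechanism.

    from dataclasses import dataclass

    @dataclass
    class Work:
        creator: str
        derived_from: "Work | None" = None  # the work this one builds on, if any

    def provenance(work):
        # Walk back through the chain of works this one was built on.
        chain = []
        while work is not None:
            chain.append(work.creator)
            work = work.derived_from
        return chain

    def distribute(amount, work):
        # Split a payment evenly across everyone in the provenance chain.
        creators = provenance(work)
        share = amount / len(creators)
        return {creator: share for creator in creators}

    original = Work("Dene")
    remix = Work("Jad", derived_from=original)
    print(distribute(90.0, remix))  # {'Jad': 45.0, 'Dene': 45.0}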

    Frode Hegland: Yeah, I’ll give the torch to Dave Millard. But just on that identity point: I have a four-and-a-half-year-old boy, Edgar, who is wonderful. And he currently likes sword fighting and the colour pink. He is very feminine, very masculine, very mixed up, as he should be. So, it’s interesting, from a parental perspective rather than just an old man’s, to think about the shaping of identity, and putting up posters and so on. It’s so easy to think about life from the point we are at in life, and you’re pointing to a teenage part of life, which none of us are in. So, I really appreciate that being brought into the conversation. Mr. Millard?

    David Millard: Yeah, thanks, Frode. Hi, everyone. Sorry, I joined a few minutes late, so I missed the introductions at the beginning. But, yeah. Thank you. It’s a really interesting talk. One of the things we haven’t talked about is kind of the opposite of performative expression, which is privacy. One of the things, a bit like Mark, I’ve kind of learned about myself listening to everyone talking about this, is how deeply introverted I am, and how I really don’t want to let anybody know about me, thank you very much, unless I really want them to. This might be because I teach social network and media analytics to our computer scientists. So, one of the things I teach them about is inference, for example, profiling. I’m reminded of the very early Facebook studies done in the 2000s, about the predictive power of keywords. So, you’d express your interests through a series of keywords, and those researchers were able to achieve 90% accuracy on things like sexuality. It was an American study, so Republican or Democratic preferences, African-American, Caucasian, these kinds of things. So I do wonder whether or not there’s a whole element to this which is subversive, or exists in that commercial realm, that we ought to think about. I’m also struck by that last comment you made, which was about people finding their identities. Because I’ve also been involved in some research looking at how kids use social media. And one of the interesting things about the way that children use social media, including some children that shouldn’t be using social media because they’re under 13 or whatever the cut-off age is, is that they don’t use it in a very sophisticated way. And we were trying to find out why that was, because we all have this impression of children as being naturally able; there’s the myth of the digital native and all that kind of stuff. And it’s precisely because of this identity construction. That was one of the things that came out in our research. So, kids won’t expose themselves to the network, because they’re worried about their self-presentation. They’re much more self-conscious than adults are. So they invest in dyadic relationships, close friendships, direct messaging, rather than broadcasting identity. So I think there’s an opposite side to this. And it may well be that, for some people, this performative aspect is particularly important. But for other people, this performative aspect is actually quite frightening, or off-putting, or just not very natural. And I just wanted to throw that into the mix. I thought it was an interesting counter-observation.

    Jad Esber: Absolutely. Thank you for sharing that. To reflect on my experience growing up writing online: I wrote poetry not because I wanted other people to read it; it was actually very much for myself. And I did it anonymously. I wasn’t looking for any kind of building of credibility or anything like that. It was for me a form of healing. It was for me a form of just figuring out who I was. But if someone did read my poetry, and it did resonate with them, and they did connect with me, then I welcomed that. So, it wasn’t necessarily a performative thing. But it was a way for me to do something for myself that, if it connected with someone else, that was welcomed. To go back to the physical metaphor of a bookshelf: part of my bookshelf will have books that I’ll present, and have up front and want everyone to see, but I also have a book box with trinkets that are out of sight and are just for me. And perhaps there are people who will come into my space and I’ll show them what’s in that box, selectively. I’ll pull them out and walk them through the trinkets. And then, I’ll have some that are private, and are not for anyone else. So, I totally agree. If we think about digital spaces, if we were to emulate a bookshelf online, there would be elements, perhaps, that I would want to present to the world outwardly, elements that are just for myself, and elements that I want to present in a selective manner. And that ties back to Frode’s point around bookshelves for various parts of my identity; I think that’s really important. There might be some that I will want to publicly present, and others that I won’t. If you think about a lot of social platforms, and how young people use them, think about Instagram. Actually, Tumblr is a great example: the average user had four to five accounts. And that’s because they had accounts that they used for performative reasons, and they had accounts that they used for themselves, and accounts for specific parts of their identity. And that’s because we’re solving different needs through this idea of para-socially curating and putting out there what we’re interested in. So, just riffing on your point. Not necessarily addressing it, but sort of adding colour to it.

    David Millard: No, that’s great. Thank you. You’re right about the multiple accounts thing. I had a student, a few years ago, who was looking at privacy protection strategies. Basically saying that people don’t necessarily use the privacy preferences on their social media platforms, the ‘who can see my stuff’ settings; they actually engage differently with those platforms. So they do things like what you said: they have different platforms, or they have different accounts, for different audiences. They use loads of fascinating stuff, things like social stenography, which is, if they have in-jokes or hidden messages for certain crowds, they will put them in their feeds and those crowds will never miss them. There are all of these really subtle means that people use. I’m sure that all comes into play for this kind of stuff as well.

    Jad Esber: Totally. I’ll add to that really quickly. I did a study of Twitter bios, and it’s really interesting to look at how, as you said, young folks will put very cryptic acronyms that indicate or signal their fanships. They’re looking for other folks who are interested in the same K-pop band, for example. And that acronym in the bio will be a signal to that audience. Like, come follow me, connect with me around this topic, just because the acronym is in there. A lot of queer folks will also have very subtle things in their bios, on their profile, to indicate that, which only other queer folks will be aware of. And so, again, it’s not something you necessarily want to be super public and performative about, but you want the right folks to see it and connect with you over it. So, yeah. Super interesting how folks have designed their own ways of using these things to solve very specific needs.

    Frode Hegland: Just before I let you go, Dave. Did you say steganography or did you say stenography?

    David Millard: I think it’s steganography. It’s normally referred to as hiding data inside other data, but here in a social context. It was exactly what Jad and I were just saying about using different hashtags, or references and quotes that only certain groups would recognise, that kind of stuff, even if they’re from Hamilton.

    Frode Hegland: Brendan, I see you’re ready to pounce here. But just really briefly, one of the things I did for my PhD thesis was study the history of citations and references. And they’re not that old. And they’re based around this, kind of, let’s call it, “anal notion” we have today that things should be in the correct box, in the correct order, and if they aren’t, they don’t belong in the correct academic discipline. Earlier this morning, Dave, Mark, and I were discussing how different disciplines have different ways of even deciding what kind of publication to have. It’s crazy stuff. But before we got into that: we have a profession, therefore we need a code for how to do it. The way people cited each other, of course, was exactly like this. The more obscure the better, because then you would really know that your readers understood the same space. So it’s interesting to see how that is sliding along, on a similar parallel line. Brendan, please. Unless Jad has something specific on that point.

    Jad Esber: I was just sourcing a Twitter bio to show you guys. So, maybe, if I find one, I’ll walk through it and show you how various acronyms are indicating various things. And I was just trying to pull it from a paper that I wrote. But, yeah. Sorry, go ahead, Brendan.

    Frode Hegland: Okay, yeah. When you’re ready, please put that in. Brendan?

    Brendan Langen: Cool. Jad, really neat to hear you talk through everything around identity as it’s seen online. It’s a focus of a lot of the research I’m doing as well, so there are interesting overlaps. First, I’ll make a comment, and then I have a question for you that’s a little off base from what we talked about. The bookshelf, as a representation, is extremely neat to think about when you have a human in the loop, because that’s really where contextual recommendations actually come to life. The idea of an algorithm saying that we’ve read 70% of the same books, and I have not read this one text that you hold really near and dear, might be helpful, but in all honesty, that’s going to fall short of you being able to share detail on why it might be interesting to me. So I guess, to pivot into a question: one of my favourite things that I read last year was something you did with, I forget the fella’s name, Scott, around reputation systems and a novel approach. I’m studying a little bit in this Web3 area, and the idea of splitting reputation and economic value is really neat. And I’d love to hear you talk a little bit more about ‘Koodos’ and how you’re integrating that, or what experiments you’re trying to run in order to bring curation and reputation into the fold. I guess, what kind of experiments are you working on with ‘Koodos’ around this reputational aspect?

    Jad Esber: Yeah, absolutely. I’m happy to share more. But before I do that, I actually found an example of a Twitter bio that I’ll really quickly share, and then I’m happy to answer that question, Brendan. So this is from a thing I put together a while ago. If we look at the username here: ‘katie, exclamation mark, seven, four Dune’. The seven here is actually supposed to signal to all BTS fans, BTS being a K-pop band, that she is part of that group, that fan community. It’s just that simple seven next to her name. Four Dune is basically a way for her to indicate that she is a very big fan of Dune, the movie, and Timothée Chalamet, the actor. And pinned at the top of her Twitter account is this list of the bands or the communities that she stans, ‘stan’ meaning being a big fan of something. So she is very cryptically announcing the fan communities she’s a part of just in her name, but also very actively pinning the rest of the fan communities that she’s a member of, or a part of. I just wanted to share that really quickly. So, to address your questions, Brendan: just for folks who aren’t aware of the piece, it’s basically a paper that I wrote about how to decouple reputation from financial gain in reputation systems where there might be a token. So, a lot of Web3 projects promise that community contributions will earn you money. And the response that Scott Kominers and I wrote was around, “Hey, it doesn’t actually make sense, for intrinsic motivational reasons, for contributions to earn you money. In fact, if you’re trying to build a reputation system, you should develop a system to gain reputation, that perhaps spins off some form of financial gain.” So, that’s, sort of, the paper. And I can link it in the chat for folks who are interested. A lot of what I think about with ‘Koodos’, the company that I’m working on, is this idea of: how can people build these digital spaces that represent who they are, and how can those remain safe spaces for identity expression, while perhaps even solving some of the utilitarian needs. But then, how can we also enable folks, or enable the system, to curate at large, to source from across these various spaces that people are building, to surface things that are interesting in ways that aren’t necessarily super algorithmic. And so, a lot of what we think about, and the experiments we run, are around how we can enable people to build reputation around what it is that they’re curating in their spaces. So, does Mark’s curation of books on his bookshelf give him some level of reputation in specific fields, that then allows us to point to him as a potential expert in that space? Those are a lot of the experiments that we’re interested in running, at a very high level, without getting too into the weeds. But if you’re really interested in the weeds of all of that, without boring everyone, I’m happy to take that conversation as well.
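    A minimal sketch of the separation described in the paper, as summarised above: contributions only ever earn reputation points, and any financial payout is computed from accumulated reputation afterwards, rather than being minted per contribution. The point values and the pro-rata payout rule are assumptions for illustration, not the actual Koodos design.

    from collections import Counter

    reputation = Counter()

    def record_contribution(user, points):
        # Contributions add reputation only; they never pay out tokens directly.
        reputation[user] += points

    def payout(pool):
        # Optionally spin off financial gain: split a reward pool pro rata by reputation.
        total = sum(reputation.values())
        return {user: pool * rep / total for user, rep in reputation.items()} if total else {}

    record_contribution("curator_a", 30)  # e.g. a well-regarded bookshelf curation
    record_contribution("curator_b", 10)
    print(payout(100.0))  # {'curator_a': 75.0, 'curator_b': 25.0}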

    Brendan Langen: Yeah. I’ll reach out to you because I’m following the weeds there.

    Jad Esber: Yeah, for sure.

    Brendan Langen: Thanks for the high-level answer.

    Jad Esber: No worries, of course.

    Frode Hegland: Jad, I just wanted to say, after Bob and Fabien now, I would really appreciate it if you go into sales mode, and really pitch what you’re working on. I think, if we honestly say, it’s sales mode, it becomes a lot easier. We all have passions, there’s nothing wrong with being pushy in the right environment, and this is definitely the right environment. Bob?

    Bob Horn: Well, I noticed that your slides are quite visual, and that you just mentioned the visual. I wonder if, in your poetry life, you’ve thought about broadsheets? And whether you would have broadsheets in the background when coming to a presentation like this, for example, so that you could turn around and point to one and say, “Oh, look at this.”

    Jad Esber: I’m not sure if the question is if I... I’m sorry, what was the question specifically about?

    Bob Horn: Well, I noticed you mentioned that you are a poet, and poets often, at least in times gone by, printed their poems on larger broadsheets that were visual. And I associated that with, maybe, in addition to bookshelves, you might have those on a wall, in some sort of way, and wondered if you’d thought about it, and would do it, and would show us.

    Jad Esber: Yeah. So, the poetry that I used to write growing up was very visual, and it used metaphors of nature to express feelings and emotions. So, it’s visual in that sense. But I am, by no means, a visual artist, not visual in that sense. So, I haven’t explored using or pairing my poetry with visual complements. Although, that sounds very interesting. So, I haven’t explored that. Most of my poetry is visual in the language that I use. And the visuals that come up in people’s minds. I tend to really love metaphors. Although, I realise that sometimes they can be confining, as well. Because we’re so limited to just that metaphor.

    And if I were to give you an example of one metaphor, or one word that I really dislike in the Web3 world, it’s the ‘wallet’. I’m not sure how familiar you are with the metaphor of a wallet in Web3, but it’s very focused on coins and financial things, like what lives in your physical wallet, whereas what a lot of wallets are today are containers for identity and not just the financial things you hold. You might say, ‘Well, actually, if you look into my wallet, I have pictures of my kids and my dog or whatever.’ And so, there is some level of storing some social objects that express my identity. I share that just to say that the words we use, and the metaphors that we use, do end up also constraining us, because a lot of the projects that are coming out of the space are so focused on the wallet metaphor. So, that was a very roundabout answer to say that I haven’t explored broadsheets, and I don’t have anything visual to share with my poetry right now.

    Bob Horn: What is, just maybe, in a sentence or two, what is Web3?

    Jad Esber: Okay, yeah. Sure. So, Web3, in a very short sense, is what comes after Web2, where Web2 is, sort of, the last phase of the internet, which relied on reading and writing content. So if you think about Web1 being read-only, and Web2 being read and write, where we can publish as well, Web3 is read, write and own. So, there is an element of ownership for what we produce on the internet. And so, that’s, in short, what Web3 is. A lot of people associate Web3 with blockchains, because they are the technology that allows us to track ownership. So that’s what Web3 is in a very brief explanation. Brendan, as someone who’s deep in this space, feel free to add as well to that, if I’ve missed anything.

    Bob Horn: Thank you.

    Brendan Langen: I guess the one piece that is interesting in the wallet metaphor is that, I guess, the Web2 metaphor for identity sharing was like a profile. And I guess I would love to hear your opinion on comparing those two and the limitations of what even a profile provides as a metaphor. Because there are holes in identity if you’re just a profile.

    Jad Esber: Totally, yeah. Again, what is a profile, right? It’s a very two-dimensional, like... What was a profile before we had Facebook profiles? A profile when you publish something is a little bit of text about you, perhaps it’s a profile picture, just a little bit about you. But what they’ve become is, they are containers for photos that we produce and there are spaces for us to share our interests and we’re creating a bunch of stuff that’s a part of that profile.

    And so, again, the limiting aspect of the term ‘profile’ is that a lot of what’s been developed today just hinges on the fact that it’s tied to a username and a profile picture and a little bio. It’s very limiting. I think that’s another really good example. Using the term ‘wallet’ today, again, is limiting us in a similar way to how profiles limited us in Web2, if we were to think about wallets as the new profile. So that’s a really good point. I actually hadn’t made that connection, so thank you.

    Fabien Benetou: Thank you. Honestly, I hope there’s going to be, let’s say, a bridge to the pitch. But to be a little bit provocative, honestly, when I hear Web3, I’m not very excited. Because I’ve been burnt before. I checked bitcoin in 2010 or something like this, and Ethereum, and all that. And honestly, I love the promise of the Cypherpunk movement or the ideology behind it. And to actually be decentralised, or to challenge the financial system and its abuse, speaks to me. I get behind that. But then, when I see the concentration back behind the different blockchains, most of the blockchains, rather, then I’m like, “Well, we made the dream”. Again, from my understanding of the finance behind all this. And yet, I have tension, because I want to get excited; like I said, the dream should still live. As I briefly mentioned in the chat earlier, surveillance capitalism, and the difference between doing something in public, and doing something on Facebook, it’s not the same. First, because it’s not in public, it’s not a proper platform. But then, even if you do it publicly on Facebook, there is the system that assigns value and transforms that into money. And I’m very naive, I’m not an economist, but I think people should pay for stuff. It’s easy. I mean, it’s simple, at least. So, if I love your poetry, and I can find a way that can help you, then I pay for it. There is no need for an intermediary in between, especially if it’s at the cost of privacy and potentially democracy behind it. So that’s my tension, I want to find a way. That’s why I’m also about provenance, and how we have a chain of sources, and we can reattribute people back down the line. Again, I love that. But when I hear Web3 I’m like, “Do we need this?” Or can we, for example, and I don’t like Visa or Mastercard, but I’m wondering if relying on the centralised payment system is still less bad than a Cypherpunk dream that’s been hijacked.

    Brendan Langen: Yeah, I mean, I share your exact perspective. I think Web3 has been tainted by the hyper-financialisation that we’ve seen. And that’s why, when Bob asked what is Web3, it’s just what’s after Web2. I don’t necessarily tie it, from my perspective, to crypto necessarily. I think that is a means to that end but isn’t necessarily the only option. There are many other ways that people are exploring, that serve some of the similar outcomes that we want to see. And so, I agree with you. I think right now, the version of Web3 that we’re seeing is horrible, crypto art and buying and selling of NFTs as stock units is definitely not the vision of the internet that we want. And I think it’s a very skeuomorphic early version of it that will fade away and it’s starting to. But I think the vision that a lot of the more enduring projects in the space have, around provenance and ownership, does exist. There are projects that exist that are thinking about things in that way. And so, we’re in the very early stages of people looking for a quick buck, because there’s a lot of money to be made in the space, and that will all die out, and the enduring projects will last. And so, I think decoupling Web3 from blockchain, like Web3 is what is after Web2, and blockchain is one of the technologies that we can be building on top of, is how I look at it. And stripping away the hyper-financialisation, skeuomorphic approaches that we’re seeing right now from all of that. And then, recognising also, that the term Web3 has a lot of weight because it’s used in the space to describe a lot of these really silly projects and scams that we’re seeing today. So, I see why there is tension around the use of that term.

    Frode Hegland: One of the discussions I had with the upcoming Future of Text work, I’m embarrassed right now, I can’t remember exactly who it was (Dave Crocker), but the point was made that version numbers aren’t very useful. This was in reference to Visual-Meta, but I think it relates to Web2. Because if the change is small you don’t really need a new version number, and if it’s big enough it’s obvious. So, I think this Web3, I think we all kind of agree here, is basically marketing.

    Jad Esber: It’s just a term, yeah. I think it’s just a term that people are using to describe the next iteration of the Web. And again, as I said, words have a lot of weight and I’m sure everyone here agrees that words matter. So yeah, I think, when I reference it, usually I’m pointing to this idea of read-write-own. And own being a new entry in the Web. So, yeah.

    Bob Horn: I was wondering whether it was going to refer to the Semantic Web, which Tim Berners-Lee was promoting some years ago. Although, not with a number. But I thought maybe they’ve added a number three to it. But I’m waiting for the Semantic Web, as well.

    Jad Esber: Totally. I think the Semantic Web has inspired a lot of people who are interested in Web3. So, I think there is a returning back to the origins of the internet, right? Ted Nelson’s thinking is also a big inspiration behind a lot of current thinking in this space. It’s very interesting to see us loop back almost to the original vision of the Web. Yeah, totally.

    Brandel Zachernuk: You talked a little bit about algorithms, and the way that algorithms select. And painted it as ineffable or inaccessible. But the reality of algorithms is that they’re just the policy decisions of a given governing organisation. And based on the data they have, they can make different decisions. They can present and promote different algorithms. And so that ‘Forgotify’ is a take on upending the predominant deciding algorithm and giving somebody the ability through some measure of the same data, to make a different set of decisions about what to be recommended. The idea that I didn’t get fully baked, that I was thinking about is the way that a bookshelf is an algorithm itself, as well. It’s a set of decisions or policies about what to put on it. And you can have a bookshelf, which is the result of explicit, concrete decisions like that. You can have a meta bookshelf, which is the set of decisions that put things on it, that causes you to decide it. And just thinking about the way that there is this continuum between the unreachable algorithms that people, like YouTube, like Spotify, put out, and the kinds of algorithms internally that drive what it is that you will put on your bookshelf. I guess what I’m reaching for is some mechanism to bridge those and reconcile the two opposite ends of it. The thing is that YouTube isn’t going to expose that data. They’re not going to expose the hyper parameters that they make use of in order to do those things. Or do you think they could be forced to, in terms of algorithmic transparency, versus personal curation? Do you see things that can be pushed on, in order to come up with a way in which those two things can be understood, not as completely distinct artefacts, but as opposite ends of a spectrum that people can reside within at any other point?

    Jad Esber: Yeah. You touch on an interesting tension. I think there are two things. One is, things being built being composable, so people can build on top of them, and can audit them. So, I think the YouTube algorithm being one example of something that really needs to be audited, but also, if you open it, it allows other people to take parts of it and build on top of it. I think that’d be really cool and interesting. But it’s obviously completely orthogonal to YouTube’s business model and building moats. So composability is sort of one thing that would be really interesting. And auditing algorithms is something that’s very discussed in this space. But I think what you’re touching on, which is a little bit deeper, is this idea of algorithms not capturing emotions, and not capturing the softer stuff. And a lot of folks think and talk about an emotional topology for the Web. When we think about our bookshelf, there are memories, perhaps, that are associated with these books, and there are emotions and nostalgia, perhaps, that’s captured in that display of things that we are organising. And that’s not really very easy to capture using an algorithm. And it’s intrinsically human. Machines don’t have emotions, at least not yet. And so, I think that what humans present is context, and that’s emotional context, nuance, that isn’t captured by machine curation. And so, that’s why, in the presentation, I talk a little bit about the pairing of the two. It’s important to scale things using programmatic algorithms, but also humans make it real, they add that layer of emotion and context. And there is this parable that basically says that human curation will end up leading to a need for algorithmic curation. Because the more you add and organise, the more there’s a need for a machine to then go in and help make sense of all the things that we’re organising. It’s an interesting pairing, and what the right balance is, is an open question.
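
    A minimal sketch of that pairing, in TypeScript, assuming a hypothetical item type that carries both a machine-computed relevance score and optional human commentary; the field names and the 0.6/0.4 weighting are purely illustrative, not how Koodos, YouTube or Spotify actually rank things:

        interface CuratedItem {
          url: string;                // the thing being curated
          algorithmicScore: number;   // 0..1, e.g. from an engagement or similarity model
          humanNote?: string;         // commentary or emotional context added by a person
          curatorReputation: number;  // 0..1, standing of whoever curated it
        }

        // Blend machine ranking with human curation: items a trusted person has
        // annotated get a boost, so emotional context is not drowned out by
        // pure engagement metrics.
        function score(item: CuratedItem): number {
          const humanSignal = item.humanNote ? item.curatorReputation : 0;
          return 0.6 * item.algorithmicScore + 0.4 * humanSignal;
        }

        function rank(items: CuratedItem[]): CuratedItem[] {
          return [...items].sort((a, b) => score(b) - score(a));
        }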

    Frode Hegland: Yeah, Fabien, please. But after that, Brendan, if you could elaborate on what you wrote in the chat regarding this, that would be really interesting.

    Fabien Benetou: It’s to pitch something to potentially consider linking with your platform, it’s an identity management targeting mostly VR, at least at first. And there is completely federated and open source. The thing is it’s very minimalist. It just provides an identity. And you have, let’s say, a 3D model and a name and a list of friends. I think that’s it. But if you were to own things, and you were to be able to either share or display them across the different platforms, I think it could be quite interesting. Because, in the end, we discussed this quite a bit, so I’m going to go back, but there is also a social or showcasing aspect to creation we want to exchange. Honestly, when I do something that I’m proud of, first thing I want to do is to show someone. I’m going to see if my better half is around, she’s not going to get it, but still, I can’t stop myself, I want to show it. I have a friend, they’ll get it, hopefully. I want to show you also here. And so, I want to build, and I want to show it. And I imagine a lot of the creation is, as soon as you find something beautiful, it’s like, “No, I don’t want to keep it to myself. I want to share with my people.” So, I’m wondering at which point that could also help this kind of identity platform or solution, because they were quite abstract in the sense that they’re not specific, let’s say, to one platform, they are on top of that. But then people think, “What for?” Okay, I can log in with, let’s say, Facebook or Apple. I know them. I trust them. So that’s it. I’m just going to click on that button. But it’s always a way for the identity maybe, like again, the discussion we had here is, my identity, me also, what I showcase around me that define me, and I want to not just share it to establish myself as, but also help others discover. So maybe it could be interesting to check how there could be a way to be more than an identity.

    Jad Esber: Totally. If you think about DJs, their profession is essentially to curate music and stitch things together. There are professions that centre around helping other people discover, and that becomes work, right? So I think helping other people discover can be considered something that gives you back status or gives you back gratification in some form. Perhaps, it just makes you happier. But it also could give you back money, in that it’s a profession. Arts curators, DJs. So, there’s a spectrum as well. I think a lot of folks will recommend things because they like them. They will recommend things because it gives them some level of status. At the end of the spectrum, it becomes a job. Which I think is certainly an interesting proposition: what does it look like if internet curators are recognised as professionals? Could there be a world where people who are curating high value stuff could be paid? And I think, Brendan alluded to this briefly, beyond just adding links, the synthesis, the commentary is really valuable, especially with the overload that we have today. And so, I think I alluded to this idea of invisible labour, curation being invisible labour. What if it was recognised? And what if it became a form of paid work? I think that could also be very interesting as an extension to your thought around curating to help others.

    Fabien Benetou: So, sorry. I’ll just bounce back because it’s directly related, but I’m just going to throw it out there. If someone wants to tour through WebXR and have some of their favourite spaces and give me a bit of money for doing it, I’m up for attempting that. I know exactly how, but I think it could be quite interesting to have a tour together, and maybe put in our backpack whatever we like, or with whom we connect. And again, across platforms, not just one.

    Jad Esber: Totally, yeah. There is precedent to that in a way, like galleries, and museums are institutionalised, like spaces of curated works. We pay to enter them. Is there a way where we can bring that down to the individual, right? A lot of the past version of the Web is taking institutionalised things and making them user-generated. Is there a version of galleries or museums that are user-generated and owned? And that’s an exploration that we’re interested in, as well, at ‘Koodos’. So, something we’re exploring.

    Frode Hegland: Fabien, I saw you put a link here to web.immers.space. Reminds me to mention to you guys that someone from ‘Immersed’, the company that makes the virtual screens in Oculus will be doing a hosted meeting soon. On a completely different tangent from what this is about, but I just wanted to mention to you guys. Brendan, would you mind going further about what you’re talking about?

    Brendan Langen: Sure. I think it’s minimal, but the act of curation, I suppose, I should have qualified the type of research that I’m talking about. My background is in UX research. So, when you’re digging into any one of our experiences with a tool, and we run into a pain point, or we stop using it, we leave the page. The data can tell us, we were here when this happened. But it takes so much inference to figure out what it actually was that caused it. Could be that we just got a phone call, and it was not a spam call for once, and we’re thinking, “Oh, wow. I have to pick this up and talk to my mother.” Or it could be that this is so frustrating, and as I kept clicking, and clicking, I just got overwhelmed, and I didn’t want to deal with it anymore. And everything between there. And that’s really where the role of user research comes in.

    And that was the comparison to curation, is that we can only understand what feeling someone had, when they heard that song that changed their life, or read a passage that triggered a thought that they then wrote an essay about. And it’s something that I have to dive into further, and further. It’s like, the human is needed in the loop at all times. Mark and I have talked a lot about this. It does not matter how your data comes back to you, regardless, you’re gonna need to clean it. And you’re going to need to probe into it, and enrich it with a human actually asking questions.

    Jad Esber: Totally, yeah. That resonates very deeply. And I can share a little bit about ‘Koodos’, because I’ve alluded to it, but I will also share that it’s very early, and very experimental. So that’s why there isn’t really that much to share. But I think it centres around that exact idea of, how can we bottle or memorialise the feeling that we have around discovering that thing that resonated. And the experience, right now, centres on this idea of, “Hey. When I’m listening to this song, or I’m reading this article, or watching this video, and it resonates. What can I do with it to memorialise it, and to keep it, and to kind of create something based on it?” And so, right now, people create these cards that sort of link out to content that they love from across the Web. And on those cards, they can add context or commentary. And a lot of what people are adding tends to be emotional. The earliest experiment centred on people adding emojis, just emoji tags to the content to summarise the vibe of the content. And these cards are all time-stamped, so there’s also a way for you to see when someone came across something. And they’re all added to a library, or an archive, or a bedroom, or bookshelf, whatever you’re going to call it, that aggregates all the cards that you’ve created. So it becomes a way for you to explore what people are interested in. What they’re saying and feeling about the things that they come across that resonates. The last thing I’ll share, as well, is that these cards unlock experiences. So, if I created a card for Brendan’s paper, for example, I’ll get access to a collection, where other people have created cards for Brendan’s work live, and I can see all of what they commentated and created, and who they are, and maybe go into their libraries and see what it is that they are creating cards for. So, that’s the current experience. And again, in the early stages. Most of our users are quite young, that’s why I sort of speak a lot about identity formative years, when you’re constructing your identity being a really important phase in life. And so, our users are around that age. And that’s what we’re doing and we’re thinking about. And just provide some context for a lot of the perspectives that I share.
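
    As a rough sketch only, based on what is described above (a link, optional commentary, emoji tags, a timestamp, aggregation into a per-person library, and a shared collection per piece of content), such a card might look something like the following; the field names are guesses for illustration, not Koodos’s actual schema:

        interface Card {
          id: string;
          contentUrl: string;   // the song, article or video that resonated
          commentary?: string;  // optional context added by the person who made the card
          emojis: string[];     // emoji tags summarising the vibe of the content
          createdAt: Date;      // cards are time-stamped
          creatorId: string;
        }

        // A person's library (bookshelf, bedroom, archive) is just their cards, newest first.
        function library(cards: Card[], creatorId: string): Card[] {
          return cards
            .filter(c => c.creatorId === creatorId)
            .sort((a, b) => b.createdAt.getTime() - a.createdAt.getTime());
        }

        // Creating a card "unlocks" the collection of everyone who carded the same content.
        function collectionFor(cards: Card[], contentUrl: string): Card[] {
          return cards.filter(c => c.contentUrl === contentUrl);
        }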

    Brendan Langen: I have to comment. I love the idea of prompting reflection. Especially at a stage where you are identity-forming. There’s nothing like cultivating your taste by actually talking about what you liked and disliked about something. And then, being able to evoke that in the frame of, how it made me feel in a moment, can build up a huge library of personal understanding. So, that’s rather neat. I need to check this out a little further.

    Jad Esber: Totally, yeah. We can chat further. I think the one big thought that has come about, from the early experimentation is that, people use it as a form for mental health reasons. Prompting you to reflect, or capture emotion over time, and archiving what has resonated, and what you felt over time is a really healthy thing to do. So that was an interesting outcome of the early product.

    Closing Comments

    Frode Hegland: There are so many opportunities with multiple dimensions of where this knowledge can go. We also have, upcoming, Phil Gooch from Scholarcy, who will be doing a presentation. He doesn’t do anything with VR, AR or anything. But what he does do is, scholarcy.com analyses documents, academic documents. So they do all kinds of stuff that seems to be more on the logical side, where it seems, Jad, you’re more on the emotional side. And I can imagine, specifically for this community, the insane amount of opportunities for human interactions in these environments. And then how we’re going to do the plumbing to make sure it is ownable. You said earlier, when defining Web3.0, one of the terms is ownable. The work we’ve been doing with Visual-Meta is very much about, we need to be able to own our own data. So, it was nice to hear that in that context. We’re winding down.

    It’s really nice to have two hours, so it’s not so rushed. So we can actually listen to each other. Are there any closing comments, questions, suggestions, or hip-hop improvisations?

    Fabien Benetou: I’m not going to do any hip-hop improvisation, not today at least. Quick comment, though, is, I wouldn’t use such a platform, I would say, without actually owning it, meaning, for example, at least a way to export data, and have it in a meaningful way. And I don’t pour my life into things, especially here, with the emotional aspect, without some safety, literal safety, of being able to extract it, and ideally live, because I’m a programmer. So, if I can tinker with the data itself, that also makes it more exciting for me. But I do hope there is some way to easily, conveniently do that, hopefully without there being a need to consider leaving the platform. Tinkering, I think, is always worthwhile. No need to leave, but still being able to actually have it do whatever you want, I think, is pretty precious.

    Jad Esber: Yes, thank you. Thank you for sharing that, Fabien. And absolutely. That’s a very important consideration. So, the cards you create are tied to you, not to the space that you occupy or you create on ‘Koodos’. That’s a really key part of the architecture. And I hear you on the privacy and safety aspect. Again, this is a complex human system and so, when designing them, beyond the software you’re building, I think the social design is really important. And aspects of what is in the box, that’s for yourself, the trinkets that you keep to yourself, versus the cards or the books that you present to the rest of the folks that come into your space. I think that is an important design question. So, yeah. Thank you for sharing, Fabien.

    Fabien Benetou: A quick little thing, that is a lot more open, let’s say. Unfortunately, I can’t remember the name, but three or four years ago, there was a VR experience done by Lucas something, maybe somebody will remember, where you had like a dozen or two dozen clouds on top of your head, a couple of scenes, and you could pull a cloud in order to listen to someone else’s voice. And each virtual space was a prompt, like, when is the last time you cried? Yes, www.lucasrizzotto.com. And so, this experience must be there in his portfolio, it’s three or four years old. But maybe half a dozen different spaces, with different ambiance, different visuals, and sounds. And every time prompting, well, I don’t know, what’s the meaning of life, simple, easy questions. And then, if you want to talk, you can talk and share it back with the community. And if you don’t want to talk, you don’t have to. So, it’s not what you do, but I think there are some connections, some things that could be inspiring, so it’s worth checking out.

    Jad Esber: I guess, on my part, I just want to say thank you for the conversation, and for being here for the two hours. It’s a long time to talk about this stuff. But I appreciate it. And yeah, I look forward to, hopefully, joining future sessions, as well. Sounds like a really interesting string of conversations. And it’s great to connect with you all virtually and to hear your questions and perspectives. Yeah, thank you.

    Frode Hegland: Yeah. It’s very nice to have you here. And the thing about the group is, okay, today, except for Dene, we’re all male and so on. But we do represent quite a wide variety of mentalities. And this is something we need to increase as much as we can. It is crucial. And also, I really appreciate you bringing in, literally, a new dimension dealing with emotions and identities into the discussion. So, it’s going to be very interesting moving forward. I was not interested in VR, AR at all in December. And then, Brandel came into my life. And now it is all about that. I’ve actually decided I can use the word metaverse, because Meta doesn’t own it; I’ve decided to settle down on that. But the point is, I feel we’re already living in the metaverse. We’re just not seeing it through as many rich means as we can. And I don’t want to go into the metaverse with only social and gaming. And today, thank you for highlighting that we need to have our identities managed in this environment, and taken with us. So, I’m very grateful. And I look forward to seeing those of you who can on Friday. And we’re going to be doing, as I said, presentations in this format every two weeks.

    Fabien Benetou: I have a quote for this. It’s on my desktop, actually. It’s, “When technology shifts reality, will we know the world has changed?” it’s from Ken Perlin that we mentioned last time. I’ll put it in the chat.

    Gavin Menichini

    Journal Guest Product Presentation : 25 February 2022

    https://youtu.be/2Nc5COrVw24?t=1353

    Gavin Menichini: Immersed is a virtual reality product, productivity software, where we make virtual offices. And so, what that means is, Immersed is broken down into two categories, in my opinion. We have a solo use case, and we have a collaboration meeting use case. So, the main feature that we have in Immersed is the ability to bring your computer screen, whether you have a Mac, a PC, or Linux, into virtual reality. So, whatever is on your computer screen is now brought into Immersed. And we’ve created our own proprietary technology to virtualize extensions of your screen. Very similar to, if you had a laptop or computer at your desk, and you plugged in extra, physical monitors for more screen real estate. We’ve now virtualized that technology. It’s proprietary to us. And we’re the only ones in the world who can do that. And then, now in Immersed, instead of working on one screen, for example, I use the MacBook Pro for work, so instead of me working on one MacBook Pro, with an Oculus Quest 2 headset, or a compatible headset, I can connect it to my computer, have the Immersed software on my computer and in my headset, bring my screen into virtual reality, have the ability to maximize it to the size of an iMac screen. I can shrink it and then create up to five virtual monitors around me for a much more immersive work experience for your 2D screens. And you can also have your own customized avatar that looks like you, and you can beam into all these cool environments that we’ve created. Think of them as higher fidelity, higher quality video game atmospheres. But not like a game, more like a professional environment. But we also have some fun gaming environments, or space station offices, or a space orbitarium, auditorium. We have something called alpine chalet, like a really beautiful ski lodge. Really, the creativity is endless. And so, within all of our environments, you can work there, and you can also meet and collaborate with people as other avatars, instead of us meeting here on Zoom, where we’re having a 2D, very disconnected experience. I’m sure each of you has probably heard the term Zoom fatigue or video conference fatigue? That’s been very real, especially with the COVID pandemic. And so, fortunately, that’s hopefully going away, and we can have a little bit more in-office interactions. But we believe Immersed is the perfect solution for hybrid and remote working. It’s the best tech bridge for recreating that sense of connection with people. And that sense of connection has been very valuable for a lot of organizations that we’re working with, as well as enhancing the collaboration experience through our monitor tech, and our screen sharing, screen streaming technology. So, people use it for the value, and the value that people get out of it is that people find themselves more productive when working in Immersed, because now they have more screen real estate, and all the environments we’ve intentionally created help them reach cognitive focus. So, I hear a lot from customers and users who tell us that when they’re in Immersed, they feel hyper-focused. More productive. In a state of deep workflow, whatever term you want to use. And people are progressing through the work faster, and feel less distracted. And then, just also, generally more connected, because when you’re in VR, it really feels like you have a sense of presence when you’re sitting across a table from another avatar that is your friend or colleague. And that really boosts employee and person satisfaction, connection, just for an overall engaging, better collaborative experience when working remotely. Any questions around what I explained, or what Immersed is?

    Dialogue

    https://youtu.be/2Nc5COrVw24?t=1549

    Fabien Benetou: Super lovely. When you say screen sharing, for example, here I’m using Linux. Is it compatible with Linux? Or is it just Windows or macOS? Is it web-based?

    Gavin Menichini: So, it is compatible with Linux. And so, right now, you can have virtual monitors through a special extension that we’ve created. We’re still working on developing the virtual display tech to the degree we have for Mac and Windows. Statistically, Linux is only one or two percent of our user base. And so, for us, as a business, we obviously have to optimize for most of our users, since we’re a venture-backed startup. But that’s coming in the future. And then, you can also share screens with Linux. And so, with some of the extensions, you can use it for having multiple Linux displays, you can share those screens, as well, within Immersed.

    Video: https://youtu.be/2Nc5COrVw24?t=1594

    Alan Laidlaw: That’s great. Yeah, this is really impressive. This is a question that may be more of a theme to get into later. But I definitely see the philosophy of starting with, where work is happening now, and like the way that you make train tracks, bringing bits and pieces into VR so that you can get bodies in there. I’m curious as to, once that’s happened or once you feel like you’ve got that sufficiently covered, is there a next step? What would you want the collaborative space in VR to look like that is unlike anything that we have in the real world, versus... Yeah, I’d love to know where you stand philosophically on that, as well, as whatever the roadmap is?

    Gavin Menichini: Sure. If I’m understanding your question properly, it’s how do we feel about how we see the evolution of VR collaboration, versus in-person collaboration? If we see there’s going to be an inherent benefit to VR collaboration as we progress, versus in person?

    Alan Laidlaw: Yeah, there’s that part. And there’s also, the kind of, is the main focus of the company to replicate and provide the affordances that we currently have, but in VR? Or is the main focus, now that you know once we’ve ported things into a VR space, let’s explore what VR can do?

    Gavin Menichini: Okay. So, it’s a little bit of both. It’s mostly just, we want to take what’s possible for in-person collaboration and bring it into VR, because we see a future of hybrid remote working. And so, COVID, obviously, accelerated this dynamic. So, Renji, our founder, started the company in 2017, knowing, believing that hybrid remote work was gonna become more and more possible as the internet and all things Web 2.0 became more prevalent. And we have technology tools where you don’t have to drive into an office every single day to accomplish work and be productive. But we found that the major challenges were, people aren’t as connected. The collaboration experience isn’t the same as being in person. So those are huge challenges for companies, in the sense of a decrease in productivity. So, all these are major challenges to solve. And those are the challenges that Renji set out to go build and fix with Immersed. So when we think about the future, we see Immersed as the best tech bridge, or tool, for hybrid or remote working. Where you can maximize that sense of connection that you have in person, by having customizable avatars, where fidelity and quality will increase over time, giving you the tech tools through multiple monitors and solo work. Enhancing the solo work experience, so people become more productive, which is the end goal of giving them more time back in the day. And then also, corporations can continue to progress, as well, in their business goals, while balancing that with giving employees more time back of their day to find that beautiful balance. And so, we see it as a tech bridge, but we, as a VR company, are also exploring the potentials of VR. Is there something that we haven’t tapped into yet that could be extremely valuable for all of our customers and users, to add more value to their life and make their life better? So, it’s less that, and more that we want to make the hybrid remote collaboration and work experience much fuller, with more value than it has today with the Zoom, Slack, Microsoft Teams paradigm.

    Brandel Zachernuk: Yeah, I’m curious. It sounds like, primarily, or entirely, what you’ve built is the connective tissue between the traditional 2D apps that people are using within their computer space, and being able to create multi-panels that people are interacting with that content on. Is that primarily through traditional input? Mouse, keyboard, trackpad? Or is this something where they’re interacting with those 2D apps through some of the more spatial modalities that are offered, hands or controllers? Do you use hands or is it all entirely controller-based?

    Gavin Menichini: Yeah, great question. So, the answer is, our largest user base is on the Oculus Quest 2. It’s definitely the strongest headset, bang for your buck, on the market for now. There’s no question. But, right now, you can control your VR dynamics with the controllers or with hand tracking. We actually suggest people use hand tracking, because it’s easier, once you get used to it. One of the challenges we face right now is, there is an inherent learning curve for people learning how to interact with VR paradigms. And, as I’m on the revenue side, I have to demonstrate Immersed to a lot of different companies and organizations, and so it can be challenging. At some point, I imagine it would be very similar. And I was born in 95, and so I wasn’t around in these times. But I imagine it feels like demoing email to someone for the first time, on a computer, when they’ve never seen a computer, where they totally understand the concept of email. No more paper memos, no more post-it notes. Paper organization and file cabinets all exist in the computer, and they get it. But, when I put a computer in front of them for the first time, they don’t know how to use it. What’s this trackpad? The keyboard, the mouse, they don’t understand the UI, UX of the Oculus, the OS system. They don’t understand how to use that, so it’s intimidating. So, that’s the challenge we come across. Does that answer your first question, Brandel?

    Brandel Zachernuk: Yeah, I’ve got some follow-ups, but I’ll cede the floor to Frode.

    Frode Hegland: Okay. I’m kind of on that point. So, I have been using Immersed for a bit. And the negatives, to take that first, is that I think the onboarding really needs help. It’s nice when you get that person standing to your side and pointing out things, but then... So, the way it works is, the hand tracking is really good. That is what I use. I use my normal keyboard, physical keyboard on my Mac, and then I have the monitor. But it’s, to me, a little too easy to go in and out of the mode where my hands change the position and size of the monitor. You’re supposed to do a special hand thing to lock your hands to not be doing that. And so there’s pinning. So, when you’re talking about these onboarding issues, that’s still a lot of work. And that’s not a complaint about your company. That’s a complaint across the board. The surprise is also, it really is very pleasant. I mean, here, in this group, we talk about many kinds of interactions, but what I would like, in addition to making it more locked, is to make the pinning easier. I do find that, sometimes, it doesn’t want to go exactly where I want. I’m a very visual person, kind of anal in that way, to use that language. I want it straight ahead of me, but very often it’s a little off. So, if I resize it this way, then it kind of follows. So, in other words, I’m so glad that you are working on these actual realities, boots on the ground things, rather than just hypotheticals. Because it shows how difficult it is. You get this little control thing on your wrist; if there was one that says “hyper control mode”, different levels. Anyway, just an observation, and question, and point.

    Gavin Menichini: Yeah. I can assure you that we obsess over these things internally. Our developers are extremely passionate about what we’re building. We have a very strong XR team. And our founder is very proud of how hard it is to get into our company, and how many people we reject. So, we really are hiring the best talent in the world, and I’ve seen this first-hand, getting to work with them. And we also have a very strong UI, UX team. But we’re really on the frontier; this has never been done before. And we are pioneering what it means to have excellent UI, UX paradigms and user onboarding paradigms in virtual reality. And one of the challenges we face is that it’s still early. And so people are still trying to figure out even the foundations for what is good UI, UX. And we’re now introducing space, like spatial computing. And we’re going from 2D interfaces to 3D. What have we learned from good UI, UX in 2D that translates to 3D, and the paradigms of this? And people are now not just using a controller and mouse, they’re using hand tracking and spatial awareness. And not only do we have to understand what’s a good practice for having good paradigms in UI, UX, how do we code that well? And how do we build a good product around that, while also having dependencies on Oculus, HTC, and Apple? Where we’re dependent upon hardware technology to support our software. So we still live very much in the early days, where there’s a lot of tension and things are still being figured out. Which is why we’re a frontier tech. Which is why it takes time to build. But even with VR, AR, I think it’s just going to take longer because there are so many more factors to consider, which the people who pioneered 2D technology, Apple, Microsoft, etc., didn’t have to consider. And so, I think the problem we’re solving candidly is exponentially harder than the problem they had to solve. But we also get to stand on their shoulders, and take some precedents that they built for us, and apply that to VR, where it makes sense.

    Brandel Zachernuk: So, in terms of those new modalities. In terms of the interaction paradigms that seem to make the most sense, it sounds like you’re not building software that people use, as much as you’re making software that people reach through to their other software with, at this point. Is that correct? You’re not making a word processor, you’re making the app that lets people see that word processor. Which is a big problem. I’m not minimizing it. My question is:

    Do you have observations based on what people are using, the way that they’re changing, for example, the size of their windows, the kinds of ways that they’re interacting with it? Do you have either observations about what customers are doing as a result of making the transition into effective productivity there? Or do you have any specific recommendations about things that they should avoid or reconsider given the differences in, for example, pixel density, or the angular fidelity of hand tracking within 3D, in comparison to the fidelity of being able to move around a physical mouse and keyboard? Given that those things are so much more precise. But also, much more limited in terms of the real estate that they have the ability to cover. Do you have any observations about what people do? Or even better, any recommendations that you make to clients about what they should be doing as a result of moving into the new medium?

    Gavin Menichini: Yeah, really good question. There are a few things. There’s a lot of things we could suggest. So, a lot of what we’re building is still very exploratory, of what’s the best paradigm for these things? And so, we’ve learned a lot of things, but we also understand there’s a lot more for us to build internally and explore. First and foremost, we definitely do not take, hopefully, this is obvious, but to address it, we definitely do not take a dystopian view of VR, AR. We don’t want people living in the headset. We don’t want people strapped it to their face extremities, like a feeding tube and water, etc. That’s not the future we want. We actually see VR, AR as a productivity enhancer, so people can spend less time working, because they’re getting more done in our products, because we’ve created a product so good that allows them to be more productive, so they get more done at work, but also, have more time to themselves. So, we suggest people take breaks, we don’t want you in a headset for eight hours straight. The same way no person would suggest for you to sit in front of your computer, and not stand, use the restroom, eat lunch, go on a walk or take a break. We could take the same paradigms. Because you can get so focused on Immersed, we also encourage our users to like, “Yeah, get stuff done, but take a break”. But then we’re also thinking through some of the observations we found. We’ve been surprised at how focused people have been. And the onboarding challenge is a big challenge, as Frode was mentioning. It’s one that we think about often. How do we make the onboarding experience better? And we’ve made progressions based on where we came from in the past. So, Frode, you’re seeing some of the first iterations of our onboarding experience, in the past, we didn’t have one.

    That’s something we actually pushed really hard for. We saw a lot of challenges with users sticking around because we didn’t have one. And we’re now continuing to push on how we make this easier. Explain things to people without making it too long, where people get uninterested and leave. It’s a really hard problem to solve. But we found, as we’re having an easier onboarding experience, helping people get used to the paradigms of working in VR and AR, and explaining how our technology works, we’re letting them get to what we like to call this magic moment, where they can see the potential of seeing and having their screens in VR. Having it be fully manipulable, you’re like a Jedi with the Force. You can push and pull your screens with hand tracking, pinch and expand. Put them all around you. If I’m answering your question, Brandel, we’re still exploring a lot of paradigms. But we found that it’s surprising how focused people are getting, which is awesome and encouraging. We find, which isn’t as surprising anymore, companies, organizations, and teams are always very wild about how connected they feel to each other. So we always try to encourage people to work together. So, even on our elite tier, which is just our middle tier, like a pro tier, think of it as a pro solo user, you have the ability to collaborate with up to four people in a private room.

    But we also have public spaces, where people can hang out and it’s free to use. Just think of it as a virtual coffee shop. You can hang out there, and meet with people. You can’t share your screens, obviously, for security reasons. But you can meet new people and collaborate. And it’s been cool to see how we’ve formed our own community where people can be connected with each other, be able to hang out and meet new people. So, hopefully, that answers a little bit of your question. There’s still a lot more we’re learning about the paradigms of working with 2D screens, and what people prefer, what’s the best practice.

    Brandel Zachernuk: Yeah. One of the issues that I face when I think about where people can expect to be in VR productivity at this point, is the fact that Quest 1, Quest 2 and Vive, all of these things have a focal distance which is pretty distant; normally a minimum accommodation distance is about 1.4 meters, which means that anything that’s at approximately arm’s length, which is where we have done the entirety of our productivity in the past, is actually getting into eye strain territory. The only headset that is out on the market that has any capacity for addressing that kind of range is actually the Magic Leap. Which I don’t recommend anybody pursue, because it’s got a second focal plane at 35 centimetres. Do you know where people put those panels on Quest? On Vive? I don’t know if you’ve got folks in a crystal or a coral value, whether that has any distinction in terms of where they put them? Or alternatively, do you recommend, or are you aware of, anybody making any modifications for being able to deal with a closer focal distance? I’m really interested in whether people can actually work the way they want to, as a consequence of the current limitations of the hardware at the moment.

    Gavin Menichini: Yeah. There are a few things in response to that. One: We’ve actually found, internally, even with the Quest 2, although the screen distance, et cetera, focal point, is a challenge, we’ve actually found that people in our experience are reporting less eye strain working in VR than they are working from their computer. We’re candidly still trying to figure out why that’s the case. I’m not sure about the distance and the optics games that they’re playing in the Quest 2 and other headsets we use. But we’ve actually found that people are reporting less eye strain, based solely on customer reviews and feedback. So we haven’t done any studies. I personally don’t know a lot around IPDs and focal length distance of the exact hardware technology of all the headsets on the market. All I’m doing is paying attention to our customers and our users, and what they’re saying. And we’re actually, surprisingly, not getting that much eyestrain. We’ve actually heard a lot of people say they prefer working in VR to working from their computers, without even blue light glasses. And they’re still getting less eye strain. So, the science and technicalities of how it’s working, I’m not sure. It’s definitely out of my realm of expertise. But I can assure you that the hardware manufacturers, because of our close relationship with Meta, HTC, they’re constantly thinking about that problem too, because you’re strapping an HMD to your face, how do you have a good experience from a health standpoint for your eyes?

    Brandel Zachernuk: Do you know how much time people are clocking in it?

    Gavin Menichini: On average, our first user session is right around an hour and 45 minutes to two hours. And we have power users who are spending six to eight hours a day inside of Immersed, clocking that much time in and getting value out of it. And it’s consistent. And I’m not sure what our average session time is. I would say it’s probably around an hour, two hours. But we have people who use it for focus first, where they want to run focus sessions in Immersed, and people will spend four or five hours in it, and our power users will spend six, seven, eight hours.

    Frode Hegland: I can address these few points. Because, first of all, it’s kind of nice. I don’t go on Immersed every week, but when I do, I do get an email that says how many minutes I spent in Immersed, which is quite a useful statistic. So, I’m sure, obviously, you guys have more on that. When it comes to the eye strain, I tend to make the monitor quite large and put it away, to do exactly the examination you’re talking about, Brandel. And I used to not like physical monitors being at that distance. It was a bit odd. But since I use keyboard and trackpad, where I don’t have to search for a mouse, I don’t need to see my hands anyway, even though I can. I do think that works. But maybe, Gavin, would you want to, you said you had a video to share a little bit of what it looks like?
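
    A small back-of-the-envelope sketch of the trade-off being discussed here: if the headset’s comfortable focal distance is around 1.4 metres, a virtual monitor has to be scaled up so that it subtends the same visual angle as a physical monitor at arm’s length. The numbers below are illustrative, not measurements from Immersed:

        // Width a virtual monitor needs at `virtualDistance` to look as big as a
        // physical monitor of `physicalWidth` viewed at `physicalDistance`.
        // Keeping the visual angle constant means width scales linearly with distance.
        function equivalentWidth(
          physicalWidth: number,     // metres, e.g. 0.6 for a ~27-inch display
          physicalDistance: number,  // metres, e.g. 0.6 (roughly arm's length)
          virtualDistance: number    // metres, e.g. 1.4 (comfortable focal distance)
        ): number {
          return physicalWidth * (virtualDistance / physicalDistance);
        }

        // A 0.6 m wide monitor viewed at 0.6 m needs to be about 1.4 m wide once it
        // is pushed out to 1.4 m, which is why "large and far away" feels natural.
        console.log(equivalentWidth(0.6, 0.6, 1.4)); // ≈ 1.4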

    Gavin Menichini: Sure, yeah. I can pull that up real quick. So it’s a quick marketing demo video, but it does do a good job of showcasing the potential of what’s possible. And I’m not sure if you guys will be able to hear the audio. It’s just fun background music. It’s not that important. The visuals are what’s more important. Let me go ahead and pull this up for us real quick.

    Frode Hegland: I think you can just mute the audio and then talk if you want to highlight something, I guess.

    Gavin Menichini: Okay. Actually, yeah. That’s probably a good idea. So, this is also on YouTube. So just for each of your points, if you guys are curious and want to see more content, just type in Immersed VR on YouTube. Our Immersed logo is pretty clear. Our content team and marketing team put out a lot of content, so if you’re curious. We also have a video called “Work in VR, 11 tips for productivity”, where our head of content goes through some different pro tips, if you’re curious and want to dive into a more nuanced demo of how you do things, etc., and see more of the user experience. So, this is a good, helpful high level video. So you can see you can have full control of your monitor. You can make it ginormous, like a movie screen. We have video editors, day traders, finance teams, and mostly developers as our main customer base. As you can see here, the user is just sitting down at the coffee table, the keyboard is tracked. We also have a brand new keyboard feature coming out, it’s called keyboard passthrough, where we’ll leverage the cameras of your Oculus Quest to cut a hole in VR and see your real-life keyboard, which we’re very excited about. And here you can just see a brief collaboration session of two users collaborating with each other side by side. You can also incorporate your phone into VR, if you want to have your phone there. And then, here you’ll see what it looks like to have a meeting in one of our conference rooms. So, you can have multiple people in the room, we’ve had 30 plus people in an environment, so it can easily support that. It also depends on, obviously, everyone’s network strength and quality, very similar to Zoom, or a phone call. And that determines the quality of the meeting, in terms of audio and screen sharing input, but if everyone’s on a good network quality, that’s not an issue. And then, lastly here, you can see one of our users with five screens, working in a space station. And that’s about it. Any questions or things that stood out from that, specifically?

    Frode Hegland: Yeah. A question about the backgrounds. You have some nice environments that can be applied. I think we can also import any 360 images, is that right, currently? And if so, can we also load custom 3D environments in the future? Are you thinking about customization for that aspect of it?

    Gavin Menichini: Yes. So, we are thinking about it, and we do have plans for users to incorporate 3D environments. There are a few challenges with that, for a few obvious reasons, which I can touch on in a second. But we do support 360 environments, 360 photos, for users to incorporate. And we also have a very talented artist and developer team that are constantly making new environments. And we have user polls, and we figure out what our users want us to build and what they’d like to see. And as we, obviously, continue to grow our company, right now we’re in the process of fundraising for a Series A, and once we do that, we’re hoping to go from 27-28 employees right now, to at least 100 by the end of the year. The vast majority of them will be developers, to continue to enhance the quality of our product. And then, we also will support 3D imports of environments. But because the Quest 2 has some compute limitations, we have to make sure that each of our environments has specific poly counts, and specific compute measurements, so that the Quest 2 won’t explode if they try and open that environment in Immersed, as well as making sure that your Immersed experience can be optimized in high quality and is not going to lag, et cetera. So right now, we’re thinking, one: How do we enable our users to build custom environments? And then, two: How do we make sure they meet our specific requirements for the Quest 2? But naturally, over time, headsets are getting stronger, computing power is getting better. Very similar to when you go from Nintendo 64 graphics to now the Xbox Series X. A ginormous jump in quality. Headset quality will be the same. So, we’ll have more robust environments, and some more give and take in optimizations for environments our users give to us. So it is in our pipeline, but we’re pushing it further down the pipeline than we originally wanted, just due to some natural tech limitations. And also the fact that we are a venture-backed startup, and we have to be extremely careful of what we work on, and optimize for the highest impact. But we’re starting to have some more fun and having some traction in our Series A conversations. And hopefully we’ll have some more flexibility, financially, to continue pushing.
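
    A tiny illustration of the kind of gate described here for user-imported environments: check a scene against per-headset budgets before letting it into the app. The budget numbers are invented for the example; Immersed’s actual limits are not stated in this conversation:

        interface EnvironmentStats {
          triangleCount: number;    // total polygons in the scene
          textureMemoryMB: number;  // compressed texture footprint
          drawCalls: number;        // per-frame draw calls
        }

        // Hypothetical budget for a standalone headset; real values would come from profiling.
        const QUEST_2_BUDGET: EnvironmentStats = {
          triangleCount: 300_000,
          textureMemoryMB: 256,
          drawCalls: 100,
        };

        function fitsBudget(env: EnvironmentStats, budget: EnvironmentStats): boolean {
          return (
            env.triangleCount <= budget.triangleCount &&
            env.textureMemoryMB <= budget.textureMemoryMB &&
            env.drawCalls <= budget.drawCalls
          );
        }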

    Alan Laidlaw: Yes. So, this is maybe a, kind of, Twilio-esque question about the design material of network strength bandwidth and compute, like you mentioned. And I’m wondering, I saw in the demo, the virtual keyboard that, of course, the inputs would be connected to a network versus a physical keyboard that you already have in front of you, if it were possible to use the physical keyboard and have those inputs go into the VR environment, or AR environment, in this case, would that be preferred? Is that the plan? And if so, you know, that opens up, I mean, this is such a rich pioneer, as you mentioned, territory, so many ways to handle this. Would there be a future where, if my hands are doing one thing, then that’s an indication that I’m in my real world environment, but if I hand at something else and that’s suggesting, you know, take my hand into VR, so I can manipulate something? I’m curious about. Any thoughts about, essentially, that design problem, versus the hard physical constraints of bandwidth? Is it just easier? Does it make a better experience to stick with a virtual keyboard for that reason? So, you don’t, at least, have a disconnect between real world and VR? And I’m sure there are other ways to frame that question.

Gavin Menichini: No, that’s fine. And I can answer a few points and ask a few follow-up questions to make sure I understand you correctly. For the keyboard, specifically, the current keyboard tracking system we have in place is not optimal. It was just the first step of what we wanted to build to help make the typing-in-VR problem easier, which is our biggest request. So we are now leveraging, I think, a way stronger feature, which is called “keyboard pass-through”. So, for those of you who know, the Oculus Quest 2 has a pass-through feature, where you can see the real world around you through the camera system, and they’re stitching the imagery together. We now have the ability to create a pass-through portal system, where you can cut out a hole in VR over your keyboard. So, whatever keyboard you have, whether it’s Mac, Apple, whatever, the funky keyboards that a lot of our developers really like to use for a few reasons, you can now see that keyboard and your real hands through a little cut-out in VR. And then, when it comes to inputs, what you mentioned about doing something with your hands being a real-life thing versus a VR thing: are you referring to that in regards to having a mixed reality headset where it can do AR and VR, and you want to be able to switch from real world to VR with a hand motion?

    Alan Laidlaw: Yeah. A piece of my question. I can clarify. I am referring to mixed. But specifically where that applies is the cut-out window approach, is definitely a step in the right direction. But it seems that’s still based entirely on the Oculus understanding of what your fingertips are doing. Which will obviously have some misfires. And that would be an incredibly frustrating experience for someone who’s used to a keyboard always responding, hitting the keys that you’re supposed to be hitting. So, at some point, it might make more sense to say, “Okay, actually we’re going to cut out. We’re going to forget the window approach and have the real input from the real keyboard go into our system”.

    Gavin Menichini: So, that’s what it is, Alan. Just to further clarify, we always want our users to use their real hands on the real keyboard. And you’re not using your virtual hands on a virtual keyboard. You’re now seeing, with pass-through, your real hands and your real keyboard, and you’re typing on your real keyboard.

    Frode Hegland: A really important point to make in this discussion is, if for a single user, there are two elements here: There is the thing around you image of 3D, and then you have your screen. But that is the normal Mac, Linux or Windows screen. And you use your normal keyboard. So, I have, actually, used my own software. I’ve used Author to do some writing on a big nice screen, so it is exactly the keyboard I’m used to.

    Alan Laidlaw: Right. So, how that applies to the mixed reality question is, if I’m using the real keyboard, have the real screen, but one of my screens is an iPad, a touch screen, that’s in VR, where I want to move some elements around, how do I then, transition from my hands in the real world to now I want my hand to be in VR?

Gavin Menichini: So, you’re going to be in Immersed, as of now. You’re going to be in VR, and you’re going to have a small cut-out into the real world. And so, it’s just, right here is the real world, through a cut-out hole, and then, if you have your hands here, and you want to move your hands into here, the moment your hands leave the pass-through portal in VR, they turn into virtual hands. And so, to further clarify, right now, your virtual hands, if you have hand tracking on, will still be over your hands on the pass-through window. We’re experimenting with taking that out, for further clarity of seeing your camera hands on your keyboard. But, yes. When you’re in Immersed, it’ll transition from your camera hands, real-life hands, to virtual hands. If you have an iPad and you want to swipe something, whatever, it’s seamless. But then, for mixed reality dynamics, in the future, we’re not sure what that’s going to look like, because it’s not here yet. So, we need to experiment, figure out what that looks like.

Fabien Benetou: Yeah, thank you. It’s actually a continuation of your question, because you asked about the background environment using 360, and including the old model. It’s also a question that, you know, I was going to ask, and I guess Gavin saw coming, because I’m a developer, you can imagine it too. If it’s not enough, if somehow there are features that I want to develop, and they are very weird, nobody else will care about them, and, as you say, as a start-up you can’t do everything, you need to put some priorities. What can I do? Basically, is it open source? If not, is there an API? If there is an API, what has the community built so far?

Gavin Menichini: Yeah, great question. So, as of now, we currently don’t have any APIs or open SDKs, open source code for users to use. We’ve had this feature request a lot. And our CEO is pondering what his approach wants to be in the future. So, we do want to do something around that in the future. But, because we’re still so early stage, and we have so many things we have to focus on, it’s extremely important that we’re very careful with what we work on, and how focused, and how hard working we are towards those. As we continue to progress as a company, and as our revenue increases, as we raise subsequent rounds of funding, that gives us the flexibility to explore these things. And one of the biggest feature requests we’ve had is having an Immersed SDK for our streaming monitor technology so people can start to play with different variations of what we’re building. But I do know that Renji does not allow for any free, open source coding work whatsoever. Just for a few reasons legality-wise, and I think we had a few experiences in the past where we experimented with that, and it backfired, to where developers were claiming they were owed, they deserved, equity or funding. It was a hot mess. So, we don’t allow anyone to work for us for free, or to give us any form of software, to any regard, any work period, to prevent any legal issues, to prevent any claims like that, which is kind of unfortunate. But he’s a stickler and definitely will not budge on that. But in the future, hopefully, we’ll have an SDK or some APIs that are opened up, or open source code, once we’re more successfully established, for people to experiment and start making their own fun iterations on Immersed.

    Brandel Zachernuk: I have a question about the windows. You mentioned that, when somebody has a pro subscription, they can be socially connected, but not share screens. I presume, in an enterprise circumstance, people can see each other’s windows. Have you observed any ways in which people have used their windows more discursively, in terms of having them as props, essentially, for communicating with each other, rather than primarily, or solely for working on their own? The fact that they can move these monitors, these windows around, does that change anything about the function of them within a workflow or
    a discussion context?

Gavin Menichini: Yeah. So, to clarify the tiers and their functionality. We have a free tier, where you can connect your computer and traverse the gap. You get one free virtual display. You cannot, on a free tier, ever share screens. In all of our public rooms, you can’t share screens, regardless of your license. The only place you can share screens is in a private collaboration room. Which means, you have to be on our elite tier, or a teams tier. On our elite tier, which is our mid-pro-solo tier, you can have up to three other people in the room with you, four total, and you can share screens with each other. And the default is, your screens are never shared. So, if you have four people in a room, and they each have three screens up, you cannot see anyone else’s screen until you voluntarily share your screen and confirm that screen. And then, it will highlight red, for security purposes. But if you’re in an environment where, Brandel, you wanted to share your screen, when you share your screen and say, we’re all sitting at a conference room table, if I have my screens like, one, two, three, right here, and I share my middle screen, my screen is then going to pop up in your perspective to you. To where you have control of my shared screen. You can make it larger. Make it bigger. Shrink it, etc. And we’re also going to be building different environment anchors where, say, for example, in a normal conference room you have a large TV on the wall; in virtual reality, you could take your screen and snap it to that place, and once it’s snapped into that little TV slot, that screen will be automatically shared and everyone sees it at that perspective, rather than their own perspective. And then, from a communication standpoint, we have teams who will meet together in different dedicated rooms, and then they’ll share screens, and look at data together. There’s... I can’t remember quite the name, it’s a software development team thing: when something goes down, they have to very quickly come together. DevOps teams come together, they share screens looking at data to fix a down server or something, and they can all see, and analyse that data together. And we’re exploring the different features we can add to make that experience easier and more robust.

Brandel Zachernuk: And so, yeah. My question is: Are you aware of the ways in which people make use of that, in terms of being able to share and show more things? One of the things about desktop computing, even in the context where people are co-located, co-present in physical meatspace, is that you don’t actually have very good performability of computer monitors. It kind of sucks in Zoom. It kind of sucks in real life, as well. Do people show and share differently, as a consequence of being in Immersed? Can you characterize anything about that?

Gavin Menichini: Yes. So, the answer is yes. They have the ability to share more screens. And so, in meatspace, in the real world, a funny term there, meatspace, but you can only have one computer screen if you’re working on a laptop, and that’s frustrating. Unless you have a TV, you have to AirDrop, XYZ, whatever. But, in Immersed, you have up to five screens. And so, we have teams of four, and they’ll share two or three screens at once, and they can have a whole arrangement of data, 10 screens being shared, and they can rearrange those individually so it all pops up in front of them, and then, they all rearrange them in the order that they want, and they can all watch a huge shared screen of data. That is not possible in real life, because of the technology we provide to them. And then, there’s different iterations of that experience where, maybe, it’s two or three screens, it’s here, it’s there. And so, because of the core tech that we have, where you can have multiple screens and then share each of those, that opens up the possibility for more data visualization, because you have more screen real estate. This opportunity to collaborate more effectively than if you had one computer screen on Zoom, which, as you mentioned, is challenging, or even in real life, because in real life you could have a computer and two TVs, but in Immersed you could have eight screens being shared at once.

    Brandel Zachernuk: And do you share control? Is it something where it’s only the person sharing it has the control, so other people would have read-only access? Or do you have the ability for people to be able to pass that control around? Send the user events such that everybody would be able to have shared control?

    Gavin Menichini: So, not right now, but we’re building that out. For the time being, we want everyone just to use collaboration tools they are currently using. Use Google Docs. Use Miro. Use Slack. Whatever. So, the current collaboration documents you guys are using now, we just want to use those applications on Immersed, because whatever you can run on your computer, you can run on your screen in Immersed. It is just your computer in Immersed. So, we tell people to do that. But now they get the added benefit of deeper connection. Just actually to be sitting next to your employee, or your colleague and then, now you can have multiple screens being shared. So, now it’s like a supercharged productivity experience, collaboration experience. Any other questions? I have about four minutes left, so I want to make sure I can answer all the questions you guys have.

Fabien Benetou: I’ll make it a one-minute question. I’ll just say it faster. If I understood correctly, the primitive is the screen. But is there anything else beyond the screen? Can you share 3D assets? Can the content be pulled out of the screen? If not, can you take a capture of the screen, either as image or video? And is it the whole screen only, or part of the screen? And imagining you’ve done that, let’s say, part of the screen as a video of 30 seconds, can you make it permanent in the environment, so that it’s there if I come back with colleagues tomorrow? Capture it? Because that’s the challenge we have here all the time: we have great discussions and then, what happens to the content?

Gavin Menichini: So, it’s in our pipeline to incorporate other assets that will be able to be brought into Immersed, and then remain persistent in the rooms. So, we’ve created the technology for persistent rooms, meaning, whatever you leave in there, it’s going to stay. Very similar to a conference room that you’ve dedicated to a project. You put post-it notes around the wall, and, obviously, come back to it the next day. So it’s the same concept in VR. And then, we also have plans to incorporate 3D assets, 3D CAD models, et cetera, into Immersed. But because you have your screens, and teams are figuring out how to collaborate on 2D screens, for the time being we’re saying just continue to use your CAD model software on your 2D computer screen. But in the future we’ll have that capability. We also don’t want to be like a 3D modelling VR software. So, we’re trying to find that balance. Which is why it’s been de-prioritized. But it is coming, hopefully in 2022. And then, we have also explored having video files in the form of screens, or an image file, or post-it notes. We’re also going to improve our whiteboard experience, which is just one of our first iterations. And so, there’s a lot of improvements we’re going to be making in the future, in addition to different assets, photos, videos, 3D modelling software, et cetera. We’ve had that request multiple times and plan on building it in the future.

    Fabien Benetou: Oh, and super quick. It means you get in, you do the work, you get out, but you don’t have something like a trace of it as is right now?

Gavin Menichini: As in persistence? As in you get in, you leave your screens there?

Fabien Benetou: Or even something you can extract out of it. Frode was saying that, for example, he gets an email about the time he spent on a session, but is there something else? Again, because usually, you have maybe not a eureka moment, but you have some kind of realization in the space, thanks to the space and the tools. And how you can get that out is really a struggle.

    Gavin Menichini: I’m not sure, I’m sorry. I’m not sure I’m understanding your question correctly, but well, so it’s...

Brandel Zachernuk: Maybe I can take a run at it. So, when people play VR games at a VR arcade, one of the things that people will often produce is a sizzle reel of moments in that action. There’s a replay recording, an artifact of the experience. Of that process.

    Gavin Menichini: Okay, yes. So, for the time being there is no functionality in Immersed for that. But Oculus gives you the ability to record what you’re watching in VR. And you can pull that out and take that experience with you, as well as take snapshots. And then, we have no plans on incorporating that functionality into Immersed because Oculus has it, and I think HTC does, and other hardware manufacturers will provide that recording experience for you to then take away with you.

    Frode Hegland: Thank you very much, Gavin, a very interesting, real-world perspective on a
    very specific issue. So, very grateful. We’ll stay in touch. Run to your next meeting. When this journal issue is out, I’ll send you an update.

    Gavin Menichini: Thank you, Frode. It was a pleasure getting to chat with each of you. God bless. Hope you guys have a great Friday, weekend, and we’ll stay connected.

    Further Discussion

    https://youtu.be/2Nc5COrVw24?t=3987

Frode Hegland: Oh, okay. That sounds interesting. Yeah, we can look at changing times and stuff. So, briefly on this, and then on the meeting that I had with someone earlier today. This is interesting to us, because they are thinking a lot less VR than we are. But it is a real and commercial company, and obviously a lot of his words were very salesy. Which is fine. But it literally is a rectangle in the room. That’s it. So, in many ways, it’s really, phenomenally, useful. And I’m very glad they’re doing it. I’m glad we have a bit of a connection to them now. But the whole issue of taking something out of the screen and putting it somewhere else, it was partly using their system that made me realize that’s not possible. And that’s actually kind of a big deal. So that’s that. And the meeting that Elliot and I had today, he mentioned who it was with, and I didn’t want to put too much into the record on that. But it was really interesting. The meeting was because of Visual-Meta. Elliot introduced us to these people. And Vint. Vint couldn’t be there today. We started a discussion. They have all kinds of issues with Visual-Meta. They love the idea, but then their implementation issues, blah, blah, blah. But towards the end, when I started talking about the Metaverse thing, they had no idea about the problems that we have learned. And they were really invigorated and stressed by it. So, I think what we’re doing here, in this community, is right on. I’m going to try now to rewrite some of the earlier stuff, to write a little piece over the weekend on academic documents in the Metaverse to highlight the issues. And if you guys want to contribute some issues to that document, that would be great, or not, depending on how you feel. But I think they really understood, when I said to them at the end: if you have a physical piece of paper, you can do whatever you want with it. But in the Metaverse, you can only do with the document whatever the room allows you to, which is mind-blowingly crazy. And they represent a lot of really big publishers within medicine. They are under the National Institutes of Health, as I understand. I’m not sure if Elliot is still in the room. So, yeah. It is good that we are looking in the right areas.

Brandel Zachernuk: Yeah, that’s really constructive. For my part, one of the things that I’ve realized is that the hypertext people, the people who understand the value of things like structured writing, and relationship linking, and things like that, are far better positioned than many, possibly most, to understand some of the questions and issues that are intrinsic to the idea of a Metaverse. I was watching, so I linked a podcast to some folks, I think it’s called Into The Metaverse, but it was a conversation between a VP of Unreal and the principal programmer, whatever, architect of Unity, the people who created Unreal and Unity, and Vladimir Vukićević, I don’t know if I’m garbling that name, who was the inventor of WebGL. Which is the foundation for all of the stuff that we do in virtual reality on the web, as well as just being very good for being able to do fancy graphics, as I do at work and things like that. But their view of what goes into a Metaverse, what needs to be known about entities, relationships, descriptions, and things, was just incredibly naive. I’ll link the videos, but they see the idea of a browser as being intrinsic. And another person, who’s a 25-year veteran of Pixar and the inventor of the Universal Scene Description format, USD, which, as you may know, Apple is interested in promoting as the format of choice for augmented reality, quick look files, things like that. And again, just incredible naivete in terms of what are the important things to be able to describe with regard to relationships, and constraints, and linkages of the kind that hypertext is. It’s the bread and butter of understanding how to make a hypertext relevant, notionally and structurally, in a way that means that it’s (indistinct). So, yeah. It’s exciting, but it’s also distressing to see how much the thinking of people who are really titans of the interactive graphics field misses what this medium is. So, that looks fun.

Frode Hegland: Yeah, it’s scary and fun. But I think we’re very lucky to have Bob here, because I’ve been very much about the document and so on, and for Bob to say, “Well, actually, let’s use the wall as well”, it helps us think about going between spaces. And what I highlighted in the meeting earlier today was, what if I take one document from one repository, and let’s say it has all the meta, so I’ve put a little bit here, a little bit there, but then, I have another document, from a different repository over here, and I draw a connection between them. That connection now is a piece of information too. Where is it stored? Who owns it? And how do I interact with it in the future? These are things that have not even begun to be addressed, because, I think, all the companies doing the big stuff just want everything to go through their stuff.

    Bob Horn: And what kind is it? That is the connection.

    Frode Hegland: Yeah, exactly. So, we’re early naive days, so we need to produce some interesting worthwhile questions here. Fabien, I see your big yellow hand.

    Video: https://youtu.be/2Nc5COrVw24?t=4369

Fabien Benetou: I’ll put the less yellow hand on the side. Earlier when I said, I don’t know what I’m doing, it wasn’t like fake modesty, or trying to undermine my work, or this kind of thing. I actually mean it. I do a bunch of stuff, and some of the stuff I do, I hope, is interesting. I hope it is even new, and might lead to other things. But in practice, it’s not purely random, and there are some, let’s say, not heuristics, but there are some design principles, a philosophy behind it, an understanding of some, hopefully, core principles of neurology, or cognitive science, or just engineering. But in practice, I think we have to be humble enough about this being a new medium. And figuring it out is not trivial, it’s not easy. Part of it, I think, is intelligence and knowledge, but a lot of it is all that, plus luck, plus attempting.

Frode Hegland: Oh, I agree with you. And I see that in this group. The reason I said it was I just wanted him to have a clue of the level of who we are in the room. That’s all. I think our ignorance in this room is great. I saw this graphic when I started studying, I haven’t been able to find the source, but it showed that if you know this much about a subject, the circumference, which is the ignorance, is small. The more you know, the bigger the circumference is. And I found that to be such a graphic illustration of how, the more you know, the more you see what you don’t know. We need to go all over the place. But at least we’re beginning to see some of the questions. And I think that’s a real contribution of what we’re doing here. So, we just got to keep on going. Also, as you know, we now have two presenters a month, which means, for the next two or three months, I’ve only signed up one. Brandel is going to be doing something, hopefully, in two to three weeks, right?

    Brandel Zachernuk: Yeah. I’m still chipping away. Then I realized that there’s some reading I need to do, in order to make sure that I’m not mischaracterizing Descartes.

    Frode Hegland: Okay, that sounds like fun. Fabien, would you honour us, as well, with doing a hosted presentation over the next month or two or something?

    Fabien Benetou: Yeah, with pleasure.

    Frode Hegland: Fantastic! Our pathetic little journal is growing slightly less pathetic by the month.

    Fabien Benetou: I can give a teaser on... I don’t have a title yet, but let’s say, how a librarian, what a librarian would do if they were able to move walls around.

    Frode Hegland: That’s very interesting. It was good the one we had on Monday, with Jad. It was completely different from what we’re looking at. Looking at identity. And for you to now talk about that aspect, is kind of a spatial aspect, that’s very interesting.

    Bob Horn: I’m looking forward to whatever you write about this weekend, Frode. Because for me, the summaries of our discussions, with some organization, not anywhere near perfect organization, not asking for that, but some organization, some patterns are what are important to me. And when I find really good bunches of those, then I can visualize them. So, I’m still looking for some sort of expression of levels of where the problems are as we see it now. In other words, there were the, what I heard today, with Immersed, was a set of problems at a certain level, to some degree. And then, a little bit in the organization of knowledge, but not a lot, but that’s what came up in our discussion afterwards and so forth. So, whenever there’s that kind of summary, I really appreciate whatever you do in that regard, because I know it’s the hardest work at this stage. So I’m trying to say something encouraging, I guess.

    Frode Hegland: Yeah, thank you, Bob. That’s very nice. I just put a link on this document
    that I wrote today. The next thing will be, as we discussed. But information has to be somewhere. It’s such an obvious thing, but it doesn’t seem to be acknowledged. Because in a virtual environment, we all know that you watch a Pixar animation, they’ve made every single pixel on the screen. There is no sky even. We know that. But when it becomes interactive, and we move things in and out. Oh, Brandel had a thing there.

Brandel Zachernuk: One of the things that Guido Quaroni talks about, as well as people have talked a bunch about, is some of the influences and contributions of Quilez, who makes Shadertoy, I don’t know if you’ve ever seen it or heard of that. But it’s this raymarch-based fragment shader system for being able to do procedural systems. And so, none of the moss in Brave, if you’ve seen that film, exists. Nobody modeled it. Nobody decided which pieces should go where. What they did was, Quilez has this amazing mind for a completely novel form of representation of data, called the Signed Distance Fields raymarched shader. And so it’s all procedural. And all people had to do was navigate through this implicit virtual space to find the pieces that they wanted to stitch into the films. And so, it never existed. It’s something that was conjured on a procedural basis, and then people navigated through it. So yes, things have to exist. But that’s not because people make it, sometimes. And sometimes it’s because people make a latent space, and then, they navigate it. And I think that the contrast between those two things is fascinating, in terms of what that means for what creative tools oblige us to be able to do. Anyway.

    Frode Hegland: Oh, yeah. Absolutely. Like No Man’s Sky and lots of interesting software out there. But it’s still not in the world, so to speak. One thing I still really want, and I’m going to pressure you guys every time, no, it’s not to write your bio, but it is some mechanism where, as an example, our journal, I can put it in a thing so that you guys can put it in your thing. Because then we can really start having real stuff that is our stuff. So if you can keep that in the back of your mind. Even if you can just spec how it should work, I’ll try to find someone to do it, if it’s kind of rote work and not a big framework for you guys.

    Brandel Zachernuk: Yeah, I definitely intend to play more with actually representing text again. And somebody made a sort of invitation slash prompt blast challenge to get my text renderings to be better. Which means that I’ll need something to do it better on. And so, yeah. I think that would be a really interesting target goal.

    Frode Hegland: Awesome. Fabien, I see you have your hand, but on that same request to you guys, imagine we already have some web pages where you can click at the bottom, view in VR, when you’re in the environment. That’s nice. Imagine if we have documents like that, that’ll be amazing. And I don’t know what that would mean, yet. There are some thoughts, but it goes towards the earlier. Okay, yes. Fabien, please?

Fabien Benetou: Yeah, I think we need to go a bit beyond imagining. Then we can have some sandbox, some prototypes of the documents we have recorded. That’s how I started: the first time I joined, you mentioned Visual-Meta, and then I put a PDF and some of the metadata in there, no matter how the outcome was going to exist. So I definitely think that’s one of the most interesting ways to do it. A quick word on writing: my personal fear about writing is that, I don’t know if you know the concept, and I have the name of the person on the tip of my tongue, but yeah, idea debt. So the idea is that you have too many ideas, and then at some point, if you don’t realize some of them, if you don’t build, implement, make it happen, whatever the form is, it’s just crushing. And then, let’s say, if I start to write, or prepare for the presentation I mentioned just 30 minutes or 10 minutes ago, the excitement and the problem is, for sure, that by summarizing it, stepping back, that’s going to bring new ideas. Like, “Oh, now I need to implement. Now I need to test it”. There is validation in it. I’m not complaining or anything. Just showing a bit of my perspective, of my fear of writing. And also because, in the past, at some point I did just write. I did not code anything. It felt good in a way. But then also, a lot of it was, I don’t want to say bullshit, but maybe not as interesting, so I’m just personally trying to find the right balance between summarizing, sharing, having a way that the content can be reused, regardless of the implementation, any implementation. Just sharing my perspective there.

Frode Hegland: That is a very important perspective. And it is very important to share. And I think we’re all very different in this. And for this particular community, my job as, quote-unquote, editor, is to try to create an environment where we’re comfortable with different levels. Like Adam, he will not write. Fine. I steal from Twitter, put it in the journal, and he approves it. Hopefully. Well, so far he has. So, if you want to write, write. But also, I really share, so strongly, the mental thing you talked about. We can’t know what it’s like to hear something until it exists. And we say, if an idea is important, write it down, because writing it down, of course, helps clarify it. But that’s only if it’s that kind of an idea. Implementing, in demos and code, is as important. I’ve been lucky enough to be involved with building our summer house, in Norway, and doing a renovation here. And because it’s a physical environment, even doing it in SketchUp is not enough. I made many mistakes. Thankfully, there were experienced people who could help me see it in the real thing. Sometimes we had to put boards up in a room to see what it would feel like. So, yeah. Our imaginations are hugely constrained. So, it’s now 19 past. And Brandel was suggesting he has to go somewhere else. I think it’s okay, with a small group, if we finish half-past, considering this will be transcribed, anyway. And so, let’s have a good weekend. Unless someone wants a further topic discussion, which I’m totally happy with also.

Brandel Zachernuk: Yeah. I’m looking forward to chatting on Monday. And I will read through what you sent to the group that you discussed things with today. Connecting to people with problems that are more than graphical, and more than attendant to the Metaverse, I think is really fascinating. Provided they have the imagination to be able to see that what they are talking about is a “Docuverse”. It’s these sorts of connected concepts that Bob has written about. I’ve got the book, but it’s on the coffee table. The pages after 244. The characterization of the actual information and decision spaces that you have. It’s got the person with the HMD, but then it’s sort of situated in an organization where there are flows of decisions. And I think that recognizing that we can do work on that is fascinating.

    Bob Horn: I can send that to everybody, if you like.

Frode Hegland: Oh, I have it. So, without naming names, or exactly who I was speaking to today, since we’re still recording. The interesting thing is, of course, starting with the Visual-Meta, this feeds into something that some part of the organization desperately wants, and they’ve been pushing for it for years. But there are resources, and organization, and communication, all those real-world issues. So then, a huge problem is, I come in as an outsider and I say, “Hey, here’s a solution. It’s really cheap and simple”. It’s kind of like I’m stealing their thunder, right? I am not doing that, I’m just trying to help them realize what they already want to do. And today, when they talked about different standards, I said, “Look. Honestly, what’s in Visual-Meta, I don’t care. If you could, please, put it in BibTeX, the basic stuff, but if you want to have some JSON in there, it’s not something I would like, but if you want to do it, there’s nothing wrong with that”. So, to try to make these people feel that they are being enabled, rather than someone kind of moving them along, is emotionally, humanly difficult. And also, for them to feel that they’re doing something with Vint Cerf. All of that, hopefully, will help them feel a bit of excitement. But I also think that the incredibly hard issues with the Metaverse that we’re bringing up also unlock something in their imagination. Because, imagine if we, at the end of this year, have a demo, where we have a printed document, and then we pretend to do OCR, we don’t need to do it live, right? And then, we have it on the computer, very nice. And now, suddenly, we put on a headset. You all know where I’m going with this, right? We have that thing. But then, as the crucial question you kept asking Gavin, and I’m glad you both asked it, Fabien and Brandel: what happens to the room when you leave it? What happens to the artifacts and the relationships, if we solve some of that? What an incredibly strong demo that would be. And also, was it a little bit of a wake-up call for you guys to see that this well-funded new company is still dealing with only rectangles?

    Brandel Zachernuk: No. I know from my own internal experience just how coarse the thinking is, even with better funding.

    Frode Hegland: Yeah. And the greatest thing about our group is, we have zero funding. And we have zero bosses. All we have is our honesty, community, and passion. Now, it’s a very
    different place to invent from. But look at all the great inventions. Vint was a graduate student, Tim Berners-Lee was trying to do something in a different lab. You know all the stories. Great innovations have to come from groups like this. I don’t know if we’re going to invent something. I don’t know. I don’t really care. But I really do care, desperately, that we contribute to the dialogue.

    Brandel Zachernuk: Yeah, I think that’s valuable. I think that the fact that we have your perspective on visual forms of important distilled information thought is going to be really valuable. And one of the things I’d like to do, given that you said that so many people make use of Vision 2050 is start with that as a sculpture, as a system to be able to jump into further detail. Do you have more on that one?

Bob Horn: Well, I can take it apart. I can do the different things we want to do with it. For example, when we were clearing it with the team that created some of the thought that went into it, the backcast thought, I would send the long trail of the four decades of transportation to Boeing, to Volkswagen, and to Toyota. I didn’t send it to the rest of the people. So, I could take that, I actually took that out and sent a PDF of that, only that, to them. And that’s one dimension. Another dimension is that, five years later, I worked on another project that was similar, called Poll Free. Which is also on my website. And it narrowed the focus to Europe, to the European Union, rather than the whole world. But the structure is similar in many ways. So each one of those is extractable. Then also, I have a few... For the two or three years after working on the Vision 2050, I would give lectures of different kinds. And people would ask me, “Well, how are we doing on this or that requirement?” And so, I would try to pull up whatever data there was, two, or three, or four years later, and put that in my slides, so that material is available. So, we can extract, you could demo, at least, “Here’s what we thought in 2010 and here’s what it looked like in 2014”, for one small chunk of the whole picture. So, yeah. And I have several, maybe, I don’t know, six or eight at least of those, where I could find data easily and fast. So, there’s a bit of demo material there with which one could portray a different kind of a landscape than the one that you pointed out just a minute ago.

    Brandel Zachernuk: Yeah. That would be really interesting to play with. I was just looking to add some of the things. I think that the one thing that I had seen of the Vision 2050 was the fairly simple one, it’s a sort of a four, this node graph here, the nine billion people live well and within the limits of the planet I hadn’t seen yet. The sustainable pathway toward a sustainable 2050 document that you linked here on your site, which has a ton more information. And, yeah. One of the things that I’m curious about, one of the things that I think I will do to play with it first is actually get it into, not into a program that I write, but into a 3D modelling APP, to tear it apart, and think about the way in which we might be able
    to create and distribute space for it. But first, do you have thoughts about what you would do if this was an entire room? It obviously needs to be a pretty big mural, but if it was an entire room, or an entire building, do you have a sense of the way in which it would differ?

Bob Horn: Until you asked the question, and put it together with the pages from the old book, I hadn’t really thought of that. But from many of the places in Vision 2050 one would have pathways like this. This was originally a PERT chart, way back when, that I was visualizing, because I happened to have, early in my career, edited a book on PERT charts for DuPont. And so, that’s a really intriguing question. To be extracting and laying it out, and then connecting those, and also flipping the big mural, the time-based mural in Vision 2050, making that flat, bringing different parts of it up, I think would be one of the first ways that one would try to explore that, because then, one could (indistinct) pathways, and alternatives, and then linkages. So, they’re different. Depending on one’s purpose, thinking purpose, one would do different things.

Fabien Benetou: Brief note here. I believe you’re using Illustrator to make the visuals, and I believe Illustrator can also save to SVG. And SVG can then be relatively easily extruded to transform a 2D shape into a 3D shape. Honestly, doing that would probably be interesting but very basic, or very naive. It’s still, I think, a good step to extrude parts of the graph with different depths based on, I don’t know, colour, or meaning, or position, or something like this. So, I think it could be done. But, if you could export one of the posters in that format, in SVG, I think it would be fun to tinker with. But I think, at some point, you personally will have to consider, indeed, the question that Brandel asked: if you have a room, rather than a wall, beyond the automatic extraction or extrusion, how would you design it?
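
A minimal sketch of the extrusion Fabien describes, assuming Three.js in the browser, an already set-up scene, and a hypothetical mural.svg exported from Illustrator; every filled path in the poster becomes a thin 3D slab:

import * as THREE from 'three';
import { SVGLoader } from 'three/addons/loaders/SVGLoader.js';

// Load the exported poster and extrude each path into shallow geometry.
const loader = new SVGLoader();
loader.load('mural.svg', (data) => {
  const group = new THREE.Group();
  for (const path of data.paths) {
    for (const shape of SVGLoader.createShapes(path)) {
      const geometry = new THREE.ExtrudeGeometry(shape, { depth: 4, bevelEnabled: false });
      const material = new THREE.MeshStandardMaterial({ color: path.color });
      group.add(new THREE.Mesh(geometry, material));
    }
  }
  group.scale.y = -1;               // SVG's y-axis points down, so flip the artwork
  group.scale.multiplyScalar(0.01); // shrink SVG units to a sensible scene scale
  scene.add(group);                 // 'scene' is assumed to exist elsewhere
});

Driving the depth per path from colour, meaning, or position, as suggested above, would simply mean replacing the constant 4 with a value computed for each path.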

    Brandel Zachernuk: Yeah. It’s something that I think would be really useful as an exercise, if you want to go through one of those murals and with a sketchbook, just pencils. And at some point, you can go through with us to characterize what I think, like you said, different shapes, different jobs call for different shapes through that space. But one can move space around, which is exciting. Librarians can move their walls around.

Bob Horn: I was going to say the other thing, to strike another chord, just as from the demonstration we saw earlier this morning. The big mural could be on one wall. There was a written report. There is a 60 or 80-page report that could be linked in various ways to it. And it exists. And then, there’s also, in that report, a simplification of the big mural. It reduces the 800 steps in the mural to about 40. And it’s a visual table look. So, already there are three views, three walls, and we’ve already imagined putting it flat on the floor and things popping up from it. All right, there we go. There’s a room for you.

    Brandel Zachernuk: Exciting, yeah. I think that’s a really good start. And from my perspective, I think that’s something that I can and will play with is, starting from that JPEG
of the PDF, I’ll peel pieces of that off and try to arrange them in space, thinking about some of the stuff that Fabien’s done with the Visual-Meta, virtual Visual-Meta. As well as what Adam succeeded in doing, in terms of pulling the dates off, because I think that there’s some really interesting duality of views, like multiplicity of representations, that we can kind of get into, as well as being able to leverage the idea of having vastly different scales. When you have a, at Apple we call it a type matrix, but just the texts and what’s a heading, what’s a subhead. But the thing is that, except in the most egregious cases, which we sometimes do at Apple, the biggest text is no more than about five times the smallest text. But in real space you can have a museum, and the letters on the museum wall or in a big room are this big.
And then you have little blocks like that thing. And there’s no expectation for them to be mutually intelligible. There’s no way you can read this, while you’re reading that. But because of the fact that we have the ability to navigate that space, we can make use of those incredibly disparate scales. And I think it’s incumbent on us to reimagine what we would do with those vastly different scales that we have available, as a result of being able to locomote through a virtual space.

Bob Horn: Well, let me know if you need any of these things. I can provide them, somehow. I guess you and I could figure out how to do a Dropbox for the Illustrator file or any other thing that can be useful for you.

Brandel Zachernuk: Yeah, thank you. I may ask for the Illustrator document. One of the things that I’ve been recently inspired by, so there’s an incredible team at Apple that I’m trying to apply for, called Prototyping. And one of the neat things that they have done over the years is describe their prototyping process. And it mostly involves cutting JPEGs apart and throwing them into the roughest thing possible in order to be able to answer the coarsest questions possible first. And so, I’m very much looking forward to doing something coarse-grained, with the expectation that we’ll have a better sense of what it is we would want to do with more high-fidelity resources. So, hopefully that will bear fruit, and nobody should be, hopefully not, too distraught by misuse of the material. But I very much enjoy the idea of taking a fairly rough hand to these broad questions at first, and then making sure that refinement is based on actual resolution, in the sense of being resolved, rather than pixel density.

    Bob Horn: Yeah, well, okay. If you want JPEGs we can make JPEGs too.

Frode Hegland: You said it almost as a throwaway thing there. Traverse. But one thing that I learned, Brandel, particularly with your first mural of Bob’s work, is that traversal, unless you’re physically walking, if you have a room-scale opportunity, is horrible. But being able to pull and push is wonderful. And I think that kind of insight, which we’re learning by doing, is something we really should try to record. So, I’m not trying to push you into an article. But if you have a few bullets that you want to put on Twitter, or send to me, or whatever, as in: this, in your experience, has caused stomach pain, this hasn’t. Because also, yesterday, I saw a... You know I come from a visual background, and have photography friends, and do videos, and all that stuff. Suddenly, a friend of mine, Keith, whom some of you have met, we were in SoHo, where he put up an 8K 360 camera, and it was really fun. So, I got all excited, went home, looked up a few things, and then I found the stereo 180 cameras. And I finally found a way to view it on the Oculus. It was a bit clunky, but I did. It was an awful experience. There’s something about where you place your eye. When we saw the movie Avatar, it was really weird that the bit that is blurry would actually be sharp as well, but somewhere else. Those kinds of effects. So, with stereoscopic, if it isn’t exactly right on both eyes and you’re looking at the exact spot, it’s horrible. So, these are the things we’re learning. And if we could put it into a more listy way, that would be great. Anyway, just since you mentioned it.

Brandel Zachernuk: Yes. It’s fascinating. And that’s something that Mark Anderson also observed when he realized that, unfortunately, the Fresnel lenses that we make use of in current generation hardware mean that it’s not particularly amenable to looking with your eyes like that. You really have to be looking through the center of your headset in order to be able to get the best view. You have this sense of the periphery. But it will tire anybody who tries to read stuff down there, because their eyes are going to start hurting.

    Frode Hegland: Yeah. I still have problems getting a real good sharp focus. Jiggle this, jiggle that. But, hey! Early days, right? So when it comes to what we’re talking about with Bob’s mural, and the levels, and the connections, and all of that good stuff, it seems to be an incredibly useful thing to experiment with exactly these issues. What does it actually mean to explode it, et cetera? So, yeah. Very good.

Fabien Benetou: Yeah. I imagine that’s been shared before. But just in case: Mike Alger, who is, or at least who was, I’m not sure right now, a typist and designer at Google, on the UXL product, wrote some design principles a couple of years ago. And not all of these were his, but he illustrated them quite nicely. So, I think it’s a good summary.

Brandel Zachernuk: Yes, I agree. He’s still at Google; he was working on Earth and YouTube. Working on how to present media, and make sure that it works seamlessly, so that you’re not lying about what the media is, but in terms of presenting a YouTube video in VR in a way that it isn’t with no applied and like I see it screen or whatever. But also, making sure that it’s something that you can interact with as seamlessly as possible. So, it’s nice work, and hopefully, if Google ramps up its work back into AR and VR, then they can leverage his abilities. Because they’ve lost a lot of people who were doing really interesting things. I don’t know if you saw, Don McCarthy has now moved to The New York Times to work on 3D stuff there. And that’s very exciting for them. But a huge blow for Google not to have them back.

    Frode Hegland: Just adding this to our little news thing. Right. Excellent. Yeah. Let’s reconvene on Monday. This is good. And, yeah. That’s all just wonderful. Have a good weekend.

    Chat Log

16:46:14 From Fabien Benetou : my DIY keyboard passthrough in Hubs ;) https://twitter.com/utopiah/status/1250121506782355456 using my webcam desktop

    16:48:25 From Frode Hegland : Cool Fabien

    16:50:49 From alanlaidlaw : that’s the right call. APIs are very dangerous in highly dynamic domains

    16:51:47 From Fabien Benetou : also recent demo on managing screens in Hubs https://twitter.com/utopiah/status/1493315471252283398 including capturing images to move them around while streaming content

17:03:43 From Fabien Benetou : good point, the limits of the natural metaphor, unable to get the same affordances one does have with “just” paper

17:04:07 From Frode Hegland : Carmack?

17:04:16 From Frode Hegland : Oh that was Quake

17:04:48 From Frode Hegland : Can you put the names here in chat as well please?

17:05:16 From Fabien Benetou : Vladimir Vukićević iirc

    17:05:53 From Frode Hegland : Thanks

    17:06:40 From Brandel Zachernuk : This is Vukićević: https://cesium.com/open-metaverse-podcast/3d-on-the-web/

17:07:17 From Brandel Zachernuk : And Pixar/Adobe, Guido Quaroni: https://cesium.com/open-metaverse-podcast/the-genesis-of-usd/

17:11:09 From Frode Hegland : From today to the NIH: https://www.dropbox.com/s/9xyl6xgmaltojqn/metadata%20in%20crisis.pdf?dl=0

17:11:25 From Frode Hegland : Next will be on academic documents in VR

17:12:07 From Fabien Benetou : very basic but the documents used in https://twitter.com/utopiah/status/1243495288289050624 are academic papers

17:13:19 From Frode Hegland : Fabien, make an article on that tweet?…

17:13:30 From Fabien Benetou : length? deadline?

    17:13:34 From Frode Hegland : any

    17:13:44 From Frode Hegland : However, do not over work!

17:13:54 From Frode Hegland : Simple but don’t waste time editing down

17:14:07 From Fabien Benetou : sure, will do

    17:14:11 From Frode Hegland : Wonderful

17:14:52 From Fabien Benetou : (off topic but I can recommend https://podcasts.apple.com/be/podcast/burnout-and-how-to-avoid-it/id1474245040?i=1000551538495 on burn out)

17:28:05 From Brandel Zachernuk : https://www.bobhorn.us/assets/sus-5uc-vision-2050-wbcsd-2010-(1).pdf

17:28:17 From Brandel Zachernuk : https://www.bobhorn.us/assets/sus-6uc-pathwayswbcsd-final-2010.jpg

17:39:10 From Fabien Benetou : https://www.mikealger.com/

    17:39:27 From Fabien Benetou : design principles for UX in XR, pretty popular

    Harold Thimbleby

    Getting mixed text right is the future of text

    When we read text, at least text that we are enjoying as we read it, we get immersed in it, and it becomes like a stream of consciousness we willingly join in with. We lose awareness of the magic reading skills that took us years to learn — these marks on screen or paper somehow create mental images or sounds, feelings like laughter, disagreement, anger, plans for action, anything, in our heads. If we pause from the flow, we may reflect about the text’s metadata
— who wrote this; when did they write it; how much do we have to pay for it; when was it written? — we want to know lots of details about the text.

    If we are feeling critical, we may notice the typography: some text is italic, the page numbers are in a different font, there are rivers in the paragraphs, and the kerning perhaps leaves a lot to be desired. Then we notice how the author italicises Latin phrases, like ad nauseam, but does not italicise Latin abbreviations like e.g. for example.

    If we are programmers, we might wonder how the text works, how it was actually implemented. What is the data format? How did the writer and the developers store this information, and yet convey a coherent stream of consciousness to the readers? Some texts mix in computed texts, like indices and tables of contents; then there are footnotes, side notes, cross references, running headings, page numbers — all conventional ways of mixing in different types of text to help the reader.

    If the text is on a web page or represented in VR, even more will be happening. VR text is typically interactive. Perhaps it scrolls and pans in interesting ways, is reactive to different sorts of reading devices, fitting into different screen sizes and colour gamuts, and it probably interactively needs information from the reader. Increasingly, the reader will need to subscribe to the text, and the details of that are held in very complex metadata stored in the cloud, far away from the text itself yet linked back to it so the reader can have access to it.

    The author’s experience of text

    For the sake of concreteness, familiarity, and simplicity, we will use HTML as an initial case study.

    HTML is a familiar, well-defined notation, and it is powerful enough to represent almost any form of text. For example, Microsoft Word — which provides a WYSIWYG experience
    for the author — could easily represent all of its text using HTML; in fact, Word now uses a version of XML (which is basically a fussy version of HTML) to do so. Furthermore, in this chapter it’s helpful that we can talk about HTML on the two-dimensional printed (or PDF or screen) page, unlike examples from VR. (If we had used Microsoft Word as the running example, it has plenty of mixed texts, like tables of contents, references, forms. Even basic features like tables and lists are very different sorts of text than the main document text.)

    Despite the widespread use of HTML across the web, and its widespread use in highly critical applications, such as managing bank accounts and healthcare services and writing pilot operating manuals for aircraft, HTML is a surprisingly quirky and unreliable language for text. The main reason for its quirkiness is that HTML was originally designed to implement some innovative ideas about distributed hypertext, and nobody then thought it would develop to need designing to be safe to use in critical applications, let alone that it would need designing to integrate reliably with many other notations.

    We’ll give some examples. If you get bored with the details, do skip forward to the end of this chapter to see what needs to be learned to improve future mixed text.

    Remember these examples illustrate problems that can occur when any text mixes any notations, but using HTML makes it easy to describe. (Also, you can easily play with my examples in any web browser.) We’ll take very simple examples of mixed text, not least to wonder why even simple mixes don’t work perfectly. For brevity, we’ll ignore the complexities and flaws of mixed texts like tables of contents, indices, and so on (there aren’t many word processors that ensure even just the table of contents has the right page numbers all the time).

    In addition to the text, styles and layout HTML can define, HTML allows developers to mix comments in the text. Comments are texts that are intended to be read by developers but not seen by readers. Perhaps a developer is in a hurry for people to read a text but they haven’t yet completely finished it. How will the developer keep track of what they want to write but haven’t yet done? One easy solution is to use a comment: the developer writes a comment like “XX I need to finish writing this section by December” or “I need to check this! What’s the citation?” or “I must add the URL later”, but the readers of the text won’t see these private comments. The developer, as here, might use a code like XX so that they can easily use search facilities to find their important comments where they need to do more work.

    The actual notation for a comment in HTML is <!-- comment -->. Here, I’ve used another mixture of texts: the italic typewriter font word comment (in the previous sentence) is being used to mean any text that is used as a comment and hence will not be visible to the text’s reader.
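    For example, a developer’s private reminder might appear in the source like this (the wording is invented for illustration):

    <!-- XX Finish this section and add the citation before December -->

    The reader of the rendered page sees nothing at all here; only someone reading the source sees the reminder.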

    One problem with this HTML notation is that it is not possible to comment out arbitrary HTML: if the HTML already contains comments, the commented-out region will end at the first -->, not at the last.
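    To see the problem concretely, here is a minimal sketch (the wording is invented for illustration) of an author trying to comment out a paragraph that already contains a comment:

    <!-- <p>Old draft <!-- XX check citation --> of this paragraph.</p> -->

    The comment ends at the first -->, so the words of this paragraph. and the stray final --> escape the comment and appear on the page, which is almost certainly not what the author intended.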

    Why would you want to comment out entire blocks of HTML, which might contain further comments? A very common reason to do this is that the HTML text is not working properly: there is some sort of bug in the text. One of the fastest ways of finding the cause of the problem is to systematically comment out chunks of the text. If commenting out this bit doesn’t affect the bug, the bug must be somewhere else. Try again, and continue doing this until the bug is precisely located. (There are systematic ways to do this that speed up the debugging, like binary search.)

    HTML is structured using tags. A simple tag is <p>, which generally starts a paragraph. Tags can also have parameters (HTML calls them attributes) to provide more specific control over their meaning or features. For example, <p title = "This paragraph is about HTML"> typically makes the specified title text appear when the user mouses over the paragraph. The spaces in this title mean that it has to be written between two quote symbols (the two " characters); otherwise the four words after the first, namely paragraph, is, about and HTML, would be taken as further attributes: the title would just be set to This, and all the other words would be silent errors. We obviously want the entire text to be a single value made up of all the words and the spaces between them, but what is obvious to us is not obvious to HTML. HTML has to cope with many authors’ intentions that are not obvious, so it needs an explicit rule rather than somehow intuiting what we think we mean. So, sometimes, but not always, we have to use " around attribute values.
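    As a concrete sketch (the title wording is invented for illustration), compare

    <p title = This paragraph is about HTML>

    with

    <p title = "This paragraph is about HTML">

    The first silently sets the title to just This and treats the remaining four words as further, meaningless attributes; no error is reported, so the mistake is easy to miss. The second behaves as intended.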

    Unfortunately, using " around attribute values means that yet another random convention is needed if we need " itself to be part of a value.

    For example,

    <h1 title = "This is the beginning of the book "The Hobbit"">

    does not work. The HTML author is required to use single quotes instead. Here, this would do:

    <h1 title = 'This is the beginning of the book "The Hobbit"'>

    — which solves that problem, but now we are in a mess if for any reason we need both sorts of quote. So, what about the title of a book about a book?

    <h1 title = "J. R. R. Tolkien's "The Hobbit"">

    which needs to use both " and ' in the attribute value! HTML cannot do that, at least without relying on even more conventions: for instance, knowing that any character in HTML can be written as &code; we could correctly but tediously write

    <h1 title="J. R. R. Tolkien&#39;s &#34;The Hobbit&#34;">

    This is just bonkers, isn&#39;t it? It relies on the author knowing what numeric codes (or names) need to be used for the problematic characters, and it also relies on the author testing that it works.

    Other languages use a different, much better, system to allow authors to mix types of text. For instance, in the widely-used programming language C, within a value like "stuff", characters can be represented by themselves or, more generally, by codes after a backslash. Thus \' means ', \" means ", and more generally \nnn means the character with code nnn, like HTML’s own &#nn; but using octal rather than decimal. This approach means in C one could write a value for a book title like

    title = "J. R. R. Tolkien\'s \"The Hobbit\"";

    and it would work as intended — and it is much easier for the author to read and write. Note that the \' is being used correctly even though in this case a bare ' alone, without a backslash, would have been equally acceptable. So one must ask: given this nicer design of C, and the nicer design of lots of similar, popular, textual languages which pre-dated HTML, why did HTML use a scheme that is so awkward?

    Note that a scheme like HTML’s, which is awkward only sometimes rather than always, means that authors rarely become familiar with the rare problems. The problems come as surprises.

    HTML gets worse.

    HTML has ways to introduce further types of text, such as CSS, SVG, MathML, and JavaScript. For example, <script> document.write(27*39); </script> is JavaScript mixed inside HTML text. Here the JavaScript is being used to work out a sum (namely, 27 times 39) that the author found easier to write down in JavaScript than to work out in their head.

    Moreover, JavaScript is often used inside HTML to generate CSS and SVG and other languages (such as SQL, which we will return to below).
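    For instance, here is a minimal sketch (the styling rule is invented for illustration) of JavaScript generating CSS inside HTML, so three notations are already interleaved in a single line:

    <script> document.write("<style> p { color: red; } </style>"); </script>

    The CSS lives inside a JavaScript string, which lives inside an HTML script element; a mistake in any one of the three notations can silently break the result.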

    What an author can write in JavaScript has many very unusual constraints. Consider this simple example:

    <script> var endScript = "</script>"; </script>

    This will not work, because HTML finishes the JavaScript prematurely at the first </script> rather than the second one. HTML does not recognise JavaScript’s syntax, so it has no idea that the first </script> is inside a JavaScript string and was not intended to be HTML at that point, unlike the second one.

    The workaround for this is a bit bizarre: HTML’s & entities can be used to disguise the <> characters from HTML! Here’s how it can be done:

    <script> var endScript = "&lt;/script&gt;"; </script>

    I think we get so used to this sort of workaround that we lose sight of how odd it is to have to understand how two languages, here HTML and JavaScript, mess each other up before we can safely use either of them.

    Here, next, is some routine JavaScript that displays an alert for the developer if (in this case) x>y, which might mean something has gone wrong:

    <script> if( x > y ) alert("--> x > y"); </script>

    Now assume the author, or another author working on the same text, decides to comment out a stretch of HTML containing this script, for some reason. Weirdly, the page will now display the text “x > y"); -->”, because the ‘harmless’ arrow inside the JavaScript string has turned into HTML’s --> end-of-comment symbol, even though it is still inside JavaScript. Confusingly, the JavaScript worked perfectly well before it was commented out!
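    Here is the same JavaScript after being commented out, a minimal sketch of what the author might have written:

    <!--
    <script> if( x > y ) alert("--> x > y"); </script>
    -->

    The HTML comment ends at the --> inside the alert string, so the rest of the line, x > y"); followed by the now-stray -->, escapes the comment and is displayed to the reader as ordinary text.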

    Ironically, because HTML is designed to ignore errors, when it is mixed with JavaScript, as here, authors may make serious errors (much worse than this simple example) that are ignored and which nothing helps them detect. In complex projects, especially with multiple authors sharing the same texts, such errors are soon impossible to avoid, and are very hard to track down and fix because they are caused by strange interactions between incompatible text notations. They aren’t errors in HTML; they aren’t errors in JavaScript; they are errors that only arise inside JavaScript inside HTML text.

    Here’s another confusion. Like HTML, JavaScript itself has comments. Thus, in JavaScript, anything written after // to the end of the line is ignored. But // </script> is a comment that JavaScript ignores, yet it contains a valid HTML end tag that HTML does not ignore: the browser ends the script element right there.
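    A minimal sketch of the trap (the alert message is invented for illustration):

    <script>
    // This is just a comment to JavaScript: </script>
    alert("this never runs");
    </script>

    HTML ends the script element at the </script> inside the JavaScript comment, so the alert never executes; instead, the line alert("this never runs"); appears on the page as ordinary text, and the final </script> is simply a stray tag.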

    To summarise so far: HTML is a text notation that allows, indeed encourages and relies on, other languages (such as JavaScript) being mixed in, but HTML and these languages were developed independently, and they interact in weird and unexpected ways that can catch authors and readers out.

    These examples, chosen to be quick and easy to explain, may give the misleading impression that the problems are trivial. They may also, wrongly, give the impression that mixed text problems are restricted to HTML. But it gets worse.

    An HTML text may use JavaScript that needs to use the language SQL, a popular database language. The problem is that when SQL is embedded in JavaScript in HTML, it raises security risks. “SQL injection” is the most familiar problem.

    A user using an HTML text on a web page may be asked to enter some text, like some product they want to buy. The product needs to be found in the store’s database, so SQL is used to make the connection. But if, instead of a product description, they type a bit of valid SQL, this SQL will go straight to the SQL engine. This is the SQL injection, and then the user (presumably a hacker) can get the SQL backend to do bad things.

    If a web site allows (by accident and ignorance) SQL injection, a hacker can do much damage by taking over and programming the SQL database. In addition to this problem, SQL has its own different weird rules for strings and mixing texts, making examples like the simple HTML+JavaScript problems look simple. To make matters worse, an SQL database may well store HTML and JavaScript, for instance to make nice descriptions of the products the store sells. So mixed text can mix text.
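    Here is a minimal sketch of how the injection arises (the table, column, and variable names are invented for illustration; userInput is assumed to hold whatever the user typed). The JavaScript builds the SQL by pasting the user’s text straight into the query:

    <script>
    var query = "SELECT * FROM products WHERE name = '" + userInput + "'";
    </script>

    If userInput is an ordinary product name, all is well. But if the user types '; DROP TABLE products; -- the assembled text becomes SELECT * FROM products WHERE name = ''; DROP TABLE products; --' which is two statements, the second of which deletes the entire products table, while the trailing -- comments out the leftover quote so the SQL engine sees nothing wrong.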

    Hackers can have fun with the bugs. There was a UK company registered under the name DROP TABLE “COMPANIES”;-- LTD, a company name that is contrived to be valid SQL. If injected into a database with a table called companies, it would drop (that is, delete) the entire table of companies.

    Interesting aside…

    We’ve mentioned comments, and shown how they can be useful for authors of texts. HTML also allows text to be optionally hidden or made visible to readers, a sort of generalisation of comments but available to both authors and readers. This feature is the hidden attribute. Thus <span>Hello</span> says hello, but <span hidden>Hello</span> says nothing at all for the reader, a little bit like <!-- hello --> would too. Ironically, to do anything useful, like allowing text — maybe an error message — to appear only when it is needed, requires using JavaScript to dynamically edit HTML attributes (here, to interactively disable or enable hidden).
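    A minimal sketch of that irony (the id and message are invented for illustration):

    <span id="warning" hidden>Something went wrong!</span>
    <script> document.getElementById("warning").hidden = false; </script>

    The message stays invisible until the JavaScript switches hidden off; in a real page the script would only do so when the error actually occurs, so even this simple HTML feature becomes useful only once a second language is mixed in.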

    Mixed texts in single systems

    Instead of mixing two text systems, like HTML and JavaScript, it ought to be easier to use a single integrated system. I’ve already hinted that there is more to mixing even within a single system, like mixing tables of contents into documents, but let’s stick with “trivial” mixing — because even that goes awry (and its weirdness is easier to explain briefly).

    I wrote this chapter using Microsoft Word. For the examples in HTML, I copied and pasted the text in and out of this chapter into a web browser, ran the text, and double-checked it did what I said it did. As I improved my discussion of the examples, text went backwards and forwards — hopefully without introducing errors or dropping details, like the last > character in a bodged cut-and-paste. It would have been easier and more reliable had I used an integrated mixed text system like Mathematica; then the entire text could have been authored in one place and could have stayed in place without any cut-and-pastes.

    In HTML if I say “<hr> is a horizontal rule,” then I have already used up the four letters <hr> to display themselves, namely as <, h, r, and >. (The fact that I actually had to write &lt;hr&gt; is another HTML mixed text problem.) In HTML I can’t reuse the same text to show what this <hr> does. However since Mathematica is programmable, I can write <hr> once and get it displayed numerous times, and each time processed in any way I like: sometimes to see the specific characters, sometimes to see how it renders (for instance as it would in HTML, as a horizontal rule), and sometimes to do arbitrary things. How many characters is it? 4. And if I changed the <hr> to, say, <hr style = "width: 50%; height: 1cm">, that 4 would change to the correct value of 38 without me doing anything.

    While Mathematica is an example of a sophisticated system originally designed for mixing text with mathematics, it still has text-mixing design flaws. For example, a Mathematica feature for embedding text inside text — exactly what this chapter is about — is called a string template in its terminology. String templates use the notation <* … *> to indicate a place to mix arbitrary Mathematica text into strings of otherwise ordinary text, using <* … *> a bit like HTML’s own <script> … </script> notation.

    For example, here is a single line easily written in Mathematica:

    “The value of π is <* N[4ArcTan[1]] *>” turns into “The value of π is 3.14159”. Very nice, but how would you write a string template that explains how to insert Mathematica text? You’d want to do this because using string templates to explain string templates would ensure the explanations were exactly correct. Indeed, Mathematica comes with a comprehensive user manual written as a Mathematica text, which does exactly this to illustrate how all its features work. Unfortunately, you can’t document string templates so easily (without complex and arbitrary workarounds). If I had written the example above entirely in Mathematica, the first <*, which you are supposed to read as showing how to use the mixed text feature, would already have been expanded, so the example wouldn’t work at all. “The value of π is 3.14159” turns into “The value of π is 3.14159” doesn’t say anything helpful!

    Mathematica allows you to write special characters from other texts explicitly. Thus the Greek (or Unicode) symbol \[Pi] written in ordinary text can be used to mean π itself. If they had thought of having \[Less], which they don’t, then the <* problem would have been fixed. Yet they have LessEqual, for ≤, and lots more symbols. The omissions, like having no abbreviation Less, are arbitrary, even when they are needed, because Mathematica itself made < a special character! The designers of systems like HTML and Mathematica don’t seem to realise that a simple feature needs checking for compatibility right across the language — when string templates were introduced in Version 10.0 of Mathematica, evidently nobody thought to go back over the basic text notations introduced in Version 1.

    There are various workarounds of course, which perhaps experienced Mathematica users will be shouting at me. Ordinarily, though, an author of a text won’t realise workarounds are needed until after something unexpected goes wrong; then they have to waste time trying to find the problem, then find an ad hoc solution using tricks they have to work out for themselves. Remember, “experienced” authors are just those who have already come across and overcome these “trivial” problems. String templates are clever, but suddenly what was supposed to be an empowering mixed text feature has turned into a slippery, wiggling eel.

    We should not admire experienced authors who know all the problems and workarounds for mixed text. We should be despairing at the people who design mixed systems that don’t work reliably together.

    Future text mixed with AI and …

    This chapter has discussed the unavoidable need for interleaved mixed text, so that text can fulfil its many purposes — whether for authors or readers. It showed (mostly by way of HTML-based examples) that many practical problems remain. Mixing text provides enormous versatility, but at the cost of complexity. The devil is in the details.

    We hinted that embedded languages like JavaScript can be used to help the author add power and features to text to enrich the readers’ experience. The example we gave was simple, but made the point: if the author does not know what 27 times 39 is, they can get JavaScript to work it out and insert the answer. Another example would be to display the date — JavaScript knows that even if the author doesn’t. These are simple examples of mixed text that build on computational features.
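    For instance, a minimal sketch using standard JavaScript date facilities:

    <script> document.write("Today is " + new Date().toDateString()); </script>

    Each time the reader loads the page, the current date is mixed into the text, something the author could not possibly have written in advance.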

    The world of computation is rapidly expanding in scope and impact with new tools.

    Examples that can transform the author’s experience of writing include such AI tools as https://www.gomoonbeam.com, https://elicit.org, https://lex.page, and more.

    These fascinating AI tools can do research, can do writing, and can inspire people out of writer’s block. There are surprisingly many such tools, leveraging every gap imaginable in the writing and reading process. We are still learning how AI can help, and every way it helps relies on mixing more forms of text together — if they didn’t mix, then they would not be contributing directly to the text or the author’s work.

    A final example is the use of programmable systems like Mathematica and R, which can mix text and computation and AI, as well as access curated databases of all manner of sources that can help the author. Unlike normal AI systems that are generally packaged up to do one thing well, Mathematica and R can be programmed by the author to help in any way.

    Mathematica, for instance, not only includes AI and ML and lots more, but can draw a map of Africa, get the country names and boundaries right and up to date, and find out all sorts of other details, like the weather in Sudan, its GDP or its adult literacy, even for the very day the reader reads about it, and mix it all into the text the author is writing. Indeed, research papers often require detailed computations, often involving statistics, and doing this reliably, mixed in the text as Mathematica can, makes the papers much more reliable than when the computations are done conventionally — that is, done elsewhere and manually copied-and-pasted into the text, often introducing typos and other errors, as well as raising the problem of the author forgetting to update the statistics when something relevant in the paper is updated. Consistency is a problem best solved by computers doing the text mixing.

    Conclusions

    The future of text requires and cannot avoid mixing different sorts of text. We already interleave all sorts of text without thinking and often without problems. Occasionally, however, things get tricky. When we use internet technologies to leverage our mixed texts, they can be read and used by millions of people. This means that what seem to us like arcane, unimportant quirks can affect hundreds or thousands of people, and can have dire consequences for them.

    Unfortunately, mixing different types of text is a mess. Text has become very powerful thanks to computers and computation; but text has also become unreliable thanks to the poor design and inconsistencies between different types of text. We gave examples of the mess of HTML and JavaScript being mixed, and examples of mixed text problems within the single Mathematica application.

    Developers keep adding new types of text to representations (HTML being a notable historical example) that were never intended to be extended as far as they have been. And each new type of text (CSS, MathML, etc.) has to work with all the other and previous types of text that did not anticipate it — to say nothing of the complexities of backwards compatibility with earlier versions of each type of text. The Catch-22 of “improving” the design of text is that it often means compromising lots of text authored before the design was improved.

    Special cases routinely fail, and workarounds are complex and fragile. In a saner world, HTML, JavaScript, SQL, and all the other languages would have been designed to work closely and better together, with no need for author workarounds.

    It’s maybe too late to start again, but here are a few ideas that may help:

    This chapter discussed a problem that is more generally called feature interaction. That is, texts have features, but in mixed texts the otherwise desirable features of each text interact in unhelpful and unexpected ways. In general, there are no good solutions to feature interaction, other than taking care to avoid it in the first place and providing mechanisms to help detect it (even block it) before any downstream reader is confused. In healthcare, this would be seen as a failure of what is called interoperability, a potentially lethal problem that undermines the reliability of the mixed texts of patient records.

    If we are going to have feature interaction, which we are, we should take all steps to minimise it, and design mixed texts so that the amazingly powerful things they can do eclipse their problems.

    http://www.harold.thimbleby.net

    Jamie Joyce

    Guest Presentation : The Society Library

    https://youtu.be/Puc5vzwp8IQ

    In case any of you don’t know who we are, we are The Society Library.

    We’re non-profit, a collective intelligence non-profit. I’m going to start this presentation just by talking about who we are and what we do. Then I’m actually going to show you what we're up to. And I’d love to store some of your feedback and some of your ideas because some of you have been thinking about these types of projects for decades, and I’m only three years in. I have been thinking about it for about seven to ten, but I’m only three years in, in terms of implementing these things. So I’d love to get feedback and to hear how you think we could grow and expand what we do.

    We’re The Society Library, the main projects that we’re working on are essentially:

    There’s all these wicked issues that we ca