Navigating Tongues

Year

2022

TL;DR.

“How do we interact with the ‘other’ through multifarious tongues:
using our own multi-sensory / multi-modal methods?”

The project is an archive of experiments that translate memories from personal letters into AI-generated visuals, shared through a tactile graphic book and a linked website.

The prompts that are used to generate these images also function as alt text, showing that using different “tongues” together is integral to cultivating community.

Presentation

Hello! This is Angie, and I’m presenting my thesis, Navigating Tongues, for you today.

I’m not talking about this tongue… My project is a series of experiments navigating various options for multi-sensory communication.

Context

A little TMI for context: I was having a hard time these last few years. I was working hard to keep up relationships while being far away from my family, friends, and my ex-boyfriend, and I was thinking a lot about adjusting communication methods to personal abilities and preferences.

Then one day last year, I got handwritten letters from my friends and family, which evoked intense nostalgic feelings that I had forgotten while using digital communication platforms.

Initial Questions

Then I started to form my initial questions:

How do we communicate?
Why was the letter so special to me?
What elements are missing or gained
in digital communication compared to traditional forms?

From these questions I planned my research, focusing on which communication methods we are using, which platforms and languages we use, and which senses they engage.
The initial research had two parts: one was understanding my own language, and one was understanding the languages of others around me.

Letter writing practice

To understand my own ‘language’ first, I started to write and receive handwritten letters to get the feeling of the practice back.
Then I archived the data to see the differences in language and translation between myself, the original content, and digital devices.

Survey

Then, to understand others, I conducted a survey to see which communication methods other people are using. The questions focused on which methods people use the most, why, and what they like or dislike about them. I also asked about handwritten letters, to learn how, when, and why people write them.

Insights

30 lovely people responded in their own handwriting, and from the two research processes I was able to draw insights.
For handwritten letters, people say letter writing is more of an event, an experience. They like it because it engages more senses, time, and effort: an emotional and personal investment that you uniquely create for someone special to you.

For digital communication, people use social media the most, as it’s fast, direct, and makes it easy to reach people.
But a lot of people love to communicate over video calls, saying it’s more authentic, in the moment, and feels the most human.
That’s because it provides a fuller glimpse into a person’s life with more senses, including visual movement, the background, sound, voice, and nuance.

However, the most interesting thing I realized was that everyone has their own preferences and different reasons for them, which include time, effort, and their own abilities and preferences in utilizing ‘senses’.
From there I started to focus on the secondary questions, which are geared more toward translating different and unique sensory languages: the ‘tongue’.

What does it mean to ‘have’ tongues?
How do we communicate and interact with the ‘others’ through our preferred senses?

Project Goal

To navigate different tongues, I aimed to translate different sensory languages into ‘options’, so that utilizing different “tongues” together becomes an integral process for cultivating a relationship and, eventually, ‘community’.

Personal Goal

My personal goal was to remind myself that accessibility is not an afterthought. For that, I tried not to make any assumptions or hypotheses and opened myself up to learning with others, designing the whole experience not for people, but with people.

The experiments include both digital and physical experiences, and I focused on visual, auditory, tactile, and verbal languages through six ‘translations’ explored in my practice.

For imagination to visual, I used open-source p5.js sketches to enable users to make 2D graphics, like a lo-fi version of Adobe Illustrator’s drawing functions.
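As an illustration of this kind of drawing tool, a minimal p5.js sketch for freehand mark-making might look like the following. This is a sketch in the spirit of the project, not the actual thesis code, and `strokeFromSpeed` is a hypothetical helper:

```javascript
// Illustrative p5.js drawing sketch (not the actual project code).
// strokeFromSpeed is a plain helper so the mark-making logic can be
// reasoned about outside the p5 runtime.

// Map how fast the cursor moves to a stroke weight, clamped to a range.
function strokeFromSpeed(dx, dy, minW, maxW) {
  const speed = Math.sqrt(dx * dx + dy * dy);
  return Math.min(maxW, Math.max(minW, speed / 4));
}

function setup() {
  createCanvas(600, 600);
  background(255);
}

function draw() {
  if (mouseIsPressed) {
    // Faster gestures leave thicker marks.
    strokeWeight(strokeFromSpeed(mouseX - pmouseX, mouseY - pmouseY, 1, 12));
    line(pmouseX, pmouseY, mouseX, mouseY); // freehand mark-making
  }
}
```

Because the weight helper is pure, the same sketch can be tuned (or swapped for other mark styles) without touching the drawing loop.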

With those p5 sketches, I conducted a small workshop and user testing to translate digital drawings into tactile graphics. I used swell form printing to play with different styles of outlines. It turned out the lines were less accurate in terms of delivering information, but people said they loved to ‘touch’ the visuals and that having the additional sense of touch was nice.

The next translation I tried was spoken language into text and graphics. I again used p5.js for being open source and public. With the p5.speech library, users can speak their sentences to write a letter and visually interact with the words through their movements.
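To sketch how this kind of speech-to-visual letter writing can work, here is an illustrative example assuming the p5.speech library is loaded alongside p5.js; `layoutWords` is a hypothetical helper, and this is not the project’s code:

```javascript
// Illustrative p5.js + p5.speech sketch: speak sentences, render the
// words, and let them drift gently. Assumes p5.speech is loaded in
// the page; not the actual thesis code.

let words = [];
let speechRec;

// Pure helper: split a recognized sentence into word objects laid out
// left to right, so the layout logic is testable outside the browser.
function layoutWords(sentence, startX, startY, spacing) {
  return sentence.split(/\s+/).filter(w => w.length > 0)
    .map((w, i) => ({ text: w, x: startX + i * spacing, y: startY }));
}

function setup() {
  createCanvas(600, 400);
  // p5.SpeechRec is p5.speech's wrapper around the Web Speech API.
  speechRec = new p5.SpeechRec('en-US', gotSpeech);
  speechRec.continuous = true;
  speechRec.start();
}

function gotSpeech() {
  if (speechRec.resultValue) {
    words = words.concat(layoutWords(speechRec.resultString, 20, 200, 60));
  }
}

function draw() {
  background(255);
  for (const w of words) {
    w.y += random(-1, 1); // gentle drift, echoing bodily movement
    text(w.text, w.x, w.y);
  }
}
```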

Amplifying the movement aspect of the translation, I moved on to the next practice: collaborating with my sister, who is a dancer, to explore the possibility of body language as poetic communication.

I asked her to write me a letter, but not to show it to me; instead, she danced to describe its contents. After I got the video, I tried to see what was happening and describe it verbally.

After translating her movement into text, I made a haptic object that vibrates along with the verbal narration of the translation.
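The haptic object’s actual hardware isn’t detailed here, so as a loose illustration only, this is how narration loudness could be mapped to vibration pulses in a browser using the Web Vibration API; `amplitudesToPattern` is a hypothetical helper, not part of the project:

```javascript
// Loose illustration of the idea: turn the loudness of a narration
// into vibration pulses. Uses the browser's Web Vibration API
// (navigator.vibrate) as a stand-in for the project's actual hardware.

// Pure helper: map normalized amplitude samples (0..1) to a
// [vibrate, pause, vibrate, pause, ...] pattern in milliseconds.
// Louder samples vibrate for more of each frame.
function amplitudesToPattern(samples, frameMs) {
  const pattern = [];
  for (const a of samples) {
    const on = Math.round(Math.min(1, Math.max(0, a)) * frameMs);
    pattern.push(on, frameMs - on); // vibrate, then rest for the frame
  }
  return pattern;
}

// In a supporting browser, the pattern could be played back like this:
function playNarrationHaptics(samples) {
  if (typeof navigator !== "undefined" && navigator.vibrate) {
    navigator.vibrate(amplitudesToPattern(samples, 200));
  }
}
```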

The last, and most integral, part of my experiment is translating memory into senses. It started with translating texts into my own memories and then into visuals. This project, ‘Synthetic Nostalgia’, is an archive of AI-generated photos with prompts and memories from actual letters that I got from my friends.

Some of the letters that I received had words that recalled my memories.
I tried to recall those memories, and I found that in the process of recalling I always add or modify some details. I sometimes looked up photos that we had taken for reference, which also affected my visualization of the memories.

And from that, I found that memory translation is actually pretty similar to AI image generation. Receiving a letter is similar to receiving a text-based prompt, and our brain works like an AI engine that translates, recalls, and generates an output based on our own, or the mass, database.

I also found that the text prompts used in AI image generation are actually pretty similar to alt text in a web setting. Alternative text is a description of an image on a web page. With screen readers, it helps visually impaired people understand what the image shows; it also helps search engine bots understand image contents and appears on the page when the image fails to load. AI engines crawl alt text to train their models, too.

And a prompt, as well, is a descriptive blurb of an image. I thought that the prompt in AI image generation could serve as the alt text of the generated image, and I started to create some ‘plastic’ memories from the text in the letters.
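The idea can be made concrete in markup: the generation prompt becomes the image’s alt attribute. A minimal sketch of such an archive helper follows, with hypothetical names and example strings, not the actual site code:

```javascript
// Illustrative helper: reuse an AI image-generation prompt as the
// image's alt text in a web archive. Names and strings here are
// hypothetical examples, not from the actual archive.

// Escape characters that would break out of an HTML attribute.
function escapeAttr(s) {
  return s.replace(/&/g, "&amp;").replace(/"/g, "&quot;").replace(/</g, "&lt;");
}

// Build an <img> tag whose alt text is the generation prompt itself,
// so screen readers hear the same "tongue" that produced the visual.
function archiveImage(src, prompt) {
  return `<img src="${escapeAttr(src)}" alt="${escapeAttr(prompt)}">`;
}
```

For example, `archiveImage("memory.png", "a rainy bus window at dusk")` yields an image tag whose alt text is the prompt, so the visual and verbal versions of the memory travel together.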

The created images are archived and shown in two different forms: an installation and books.

They could also take a more accessible form as well!
They are shared through a tactile graphic book I made, with QR codes linked to a web archive. The prompts that were used to generate these images also function as alt text.

By juxtaposing the prompt and alt text with tactile images and the digital archive, I was able to deliver the ‘memories’ with different sensory options. You might touch and scan the book; you might just see the visual and verbal languages; but people who use screen readers might ‘hear’ the languages.

In that way I was able to present memories not just visually or textually,
but in a format that encompasses sight, sound, and touch,
embracing the wider spectrum of human senses.

And thankfully, I got a chance to share the process and the final projects with people of different abilities. I remember an amazing person with low vision commented that the touch object felt like a flower standing and moving alone with high energy and potential. I was really surprised, as that is what I was internally feeling with my sister’s movement and its translation.

It was really a great opportunity for me to realize the importance of designing and creating something ‘with’ others of different abilities.

It was a joyful year-long journey, and there’s one takeaway that I want to share. We might want to find our thousand tongues through our own feelings, senses, and experiences. With that, I hope to communicate with you more deeply, and more inclusively, as part of a community of ‘beings’.