
Breaking the Fourth Wall: From Virtual Stages to AI-Driven Narratives

Join Contend CEO Steven Amato as he unveils the future of storytelling in a 10-minute journey that transcends traditional theater. From breaking down physical barriers with SMARTStage to harnessing AI for content creation, this presentation explores how technology is reshaping the narrative landscape, making storytelling more immersive, accessible, and sustainable.

Released on 12/07/2023

Transcript

Thank you for coming,

I'm here to talk about the future.

But my favorite way to talk about the future

is to go 2,600 years into the past,

specifically around one disruptive communication platform

that was in startup mode.

It was a startup in ancient Greece,

unlike any other startup mode.

It was the theater.

And we like to look at the theater and say, Okay,

what was going right at the theater

and what were they doing that we can replicate today?

And there's so many things that they were doing.

Specifically, it was like the OpenAI of its time,

because people were learning a lot in this location

and when I think about specifically the theater,

how it broke the fourth wall,

how they built this physical structure

to maximize human communication, amplifying sound

and being able to kind of imbue thoughts,

feelings, and actions.

I get really excited,

because I see a lot of parallels between technology today

and how we're using it to transport storytelling forward.

But one of the things

that theater did really, really well,

was it persuaded humanity to commune.

And that communing, the Greeks realized really quickly,

meant, Hey, we can use this

to get people to do things,

kind of influence thoughts and actions.

And I do wanna start, before we get too deeply into this,

to talk about humans, specifically the audience

and how today we are actually doing the same thing,

using predictive audience intelligence

and I wanna talk about one story we have,

from a company that everyone knows and maybe loves, Netflix.

Yeah, so we were hired by Netflix to launch a show.

They didn't have a lot of information,

it was early in their originals storytelling.

And they came to us before they had a show produced,

they came to our company, Contend, they were like, Hey, we got a show,

we just hired our lead, a big actress.

We think this is gonna be a big show.

We had a super comprehensive brief.

It was this, the actress they hired was Winona Ryder,

this is all they gave us.

The Goonies, and 1980s.

Now, luckily for us, we had just built

an artificial-intelligence

audience intelligence tool, called Contender.

And we put those exact phrases,

1980s, Winona Ryder, and 'The Goonies,'

into our Contender tool

and out popped a massive affinity cluster

around horror movies.

I did not know that that was gonna come out of the thing,

but horror movies was this thing

that brought everything together.

And when we went one tick down,

do you guys realize why horror movies popped out of that,

in terms of audience intelligence?

There's one author in particular, anyone?

Who do you think that author was?

[Audience Member 1] Well I thought

she was in a vampire movie?

No, no, not a vampire.

Anyone, anyone?

It was Stephen King, the number one influence

of that cluster.

And from that, we took all of Stephen King's books,

we put them on our conference room table,

we looked at how Stephen King communicates

and we created the 'Stranger Things' logo.

Pretty simple, right?

When you think about using audience intelligence and data

to kind of predict?
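As an aside for readers, the affinity-cluster step can be illustrated with a toy co-occurrence count. This is a hypothetical sketch, not the actual Contender tool; the profiles, topics, and function name are all invented for illustration.

```python
from collections import Counter

def affinity_cluster(seeds, profiles):
    """Rank topics by how often they co-occur with the seed phrases.

    Each profile is a set of interests; any profile containing a seed
    contributes its other interests to the cluster tally.
    """
    counts = Counter()
    for interests in profiles:
        if any(seed in interests for seed in seeds):
            counts.update(topic for topic in interests if topic not in seeds)
    return counts

# Invented audience interest profiles.
profiles = [
    {"1980s", "The Goonies", "horror movies", "Stephen King"},
    {"Winona Ryder", "horror movies", "Stephen King"},
    {"1980s", "synthwave"},
    {"cooking shows"},
]
cluster = affinity_cluster({"1980s", "Winona Ryder", "The Goonies"}, profiles)
# "horror movies" and "Stephen King" dominate the tally.
```

On real data the same idea scales to millions of profiles, and the dominant co-occurring topic (here, horror) becomes the cluster's center of gravity.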

Now I've never seen a show title

kind of break through and have a connection like that.

The Duffer Brothers are brilliant, the show's amazing,

but the logo on its own

had a very, very specific role to play in our psyche.

People were making their own logos,

people were creating shirts with their names on them,

some really interesting stuff.

The other thing that popped out,

in terms of an affinity cluster,

was around hand-drawn movie posters.

I'm not sure how many people remember Drew Struzan?

[Audience Member 2] Yeah.

Do you guys remember any of this?

This, back in the day?

It was not popular in 2016,

but this audience intelligence said,

Hand-drawn movie posters, man!

So we launched with hand-drawn movie posters.

Do you remember?

Do you remember this?

That's why, because of audience intelligence.

Yeah, so what I'm here also to talk about,

this is one way to get to an idea using technology,

but the thing we're really here to talk about,

is once you have an idea, the disruption around production.

So like the ancient Greeks,

we are focused heavily on maximizing the value

and the opportunity around an actual physical stage.

In our case, it's a virtual stage

and we're asking, how can we use this virtual technology

to create a digital twin, not just of ourselves,

not just of the sounds, not just of...

but entire living, breathing,

immersive digital twin worlds at scale.

So we have a video to show that gives you an idea of what it is.

[wind whirring]

[upbeat music]

We have a lot of different methods

for building digital worlds.

Everything for us,

revolves around a very precise digital twin.

Our system makes it possible for our hosts to coexist

in a very intuitive way with digital objects.

When the hosts have something practical to interact with,

it results in a much more believable scene.

Our system uses a 3D scan of an empty stage

and compares that with a scan of an occupied stage

and this allows us to pull off much finer details,

like beams of light or reflective objects

or transparent objects

and it also allows us to preserve very important details

like contact shadows.

If you don't have contact shadows,

you will never believe that you are anchored

into that scene, it's just gonna look like bad green screen.
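The empty-versus-occupied scan comparison can be sketched as a per-pixel difference matte. This is a minimal, hypothetical illustration of the idea, not the actual SMARTStage pipeline; the soft ramp is what keeps subtle details like contact shadows.

```python
def occupancy_matte(empty_scan, occupied_scan, threshold=0.05):
    """Compare a scan of the empty stage with the occupied stage.

    Anything that changed (hosts, props, light beams, contact shadows)
    is kept as a soft alpha value; unchanged stage pixels are masked
    out. The soft ramp, rather than a hard cut, preserves the contact
    shadows a naive green-screen key would lose.
    """
    matte = []
    for row_e, row_o in zip(empty_scan, occupied_scan):
        row = []
        for px_e, px_o in zip(row_e, row_o):
            # Per-pixel change magnitude across color channels.
            magnitude = max(abs(o - e) for o, e in zip(px_o, px_e))
            # Soft alpha: 0 if unchanged, ramping up past the threshold.
            alpha = min(max((magnitude - threshold) / (1 - threshold), 0.0), 1.0)
            row.append(alpha)
        matte.append(row)
    return matte

# Toy 2x2 "scans": one pixel changes (a host steps in), one pixel
# carries faint noise below the threshold.
empty = [[(0.0, 0.0, 0.0)] * 2 for _ in range(2)]
occupied = [[(0.8, 0.8, 0.8), (0.0, 0.0, 0.0)],
            [(0.0, 0.0, 0.0), (0.02, 0.02, 0.02)]]
matte = occupancy_matte(empty, occupied)
```

A hard binary mask at the same threshold would clip faint shadows to zero, which is exactly the "bad green screen" look described above.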

Another very handy feature of SMARTStage

is the ability to rebrand sets with a click of a button.

We have designed these sets

to be easily manipulated, with content wells,

accent lighting, materials, all easily accessible.

We built a lighting system

that takes the lighting information from a virtual world

and uses that to power the lights on a physical set.

This is an incredible time-saver,

but it also accurately harvests lights dynamically

from the scene.

There's no more guesswork about,

Is the light coming from the right direction?

Are my shadows going the right direction?

Are they the right color?

This is all dynamically generated from the scene

and it's perfectly accurate.
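The virtual-to-physical lighting step can be sketched as harvesting each scene light's color and intensity into fixture channel values. The scene format, fixture IDs, and function name here are assumptions for illustration, not an actual SMARTStage API.

```python
def scene_to_dmx(scene_lights, max_intensity=1.0):
    """Convert virtual scene lights into 8-bit fixture channel values.

    Color and intensity are harvested straight from the scene, so the
    physical lights match the digital world's color and balance with
    no guesswork.
    """
    frames = {}
    for light in scene_lights:
        r, g, b = light["color"]  # normalized 0..1 RGB
        scale = 255 * light["intensity"] / max_intensity
        frames[light["fixture_id"]] = (round(r * scale),
                                       round(g * scale),
                                       round(b * scale))
    return frames

# Invented golden-hour scene: a warm key light and a cool fill.
golden_hour = [
    {"fixture_id": 1, "color": (1.0, 0.6, 0.3), "intensity": 1.0},
    {"fixture_id": 2, "color": (0.4, 0.5, 1.0), "intensity": 0.5},
]
dmx = scene_to_dmx(golden_hour)
```

Re-run every frame, this is how a set can hold "golden hour" indefinitely: the virtual sun never moves unless you want it to.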

Another cool aspect of this technology

is the ability to bring in virtual characters very easily.

So traditionally, in order to drive a human character,

you would need an actor to suit up in a motion capture suit,

you'd have to build a performance capture volume.

We've been working on a marker-less tracking system

that allows you to do that.

So the ability

to have a character react to your natural motion,

without any sort of suit is really, really incredible

and this is gonna allow people

to bring in computer generated humans or characters

very, very easily,

without any additional post-production necessary.
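The marker-less capture idea can be sketched as retargeting tracked joint positions onto a character's rest pose. This assumes keypoints from any off-the-shelf pose estimator; the joint names and rig are invented, and a production system would solve for joint rotations rather than raw offsets.

```python
def retarget(detected_joints, rest_pose):
    """Express each tracked joint as an offset from the character's
    rest pose, so the CG character mirrors the performer's motion
    without a mocap suit or capture volume."""
    return {
        name: tuple(d - r for d, r in zip(position, rest_pose[name]))
        for name, position in detected_joints.items()
    }

# Invented rig: rest-pose joint positions in meters.
rest = {"l_wrist": (0.5, 1.0, 0.0), "head": (0.0, 1.7, 0.0)}
# One frame of tracked positions, e.g. the performer raising a hand.
frame = {"l_wrist": (0.7, 1.4, 0.1), "head": (0.0, 1.7, 0.0)}
offsets = retarget(frame, rest)
```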

And if you want to do a little...

[Bill exclaiming]

If you wanna do some of that?

That's called acting.

That's my partner, Bill Wadsworth, he's incredible.

And yeah, so now imagine having access

to something like this,

in terms of no heavy lift for a production,

no worries about weather,

no worries about locking in that location.

We just build the location,

no worries about golden hour,

we can shoot golden hour for 72 days straight

if we wanted.

It really affords content creators, producers,

basically a new medium around freedom

and this new way to express

and we're really excited about the potential with that.

But I'm talking a lot about entertainment

and really a lot of our business

is not related to entertainment.

This SMARTStage technology has massive applications

around stuff like healthcare.

Imagine having an empathy layer between a doctor

and a patient being able to conjure your heart

and it's like, Oh, this is your aorta here.

Just deepening the connection

when receiving the news that really matters to people.

And also around education,

just imagine a student and a teacher

being able to have access to tools like this

and this stuff is not that expensive to actually make.

And the other place

that we're really focused on right now is retail

and we have a little example of that as well.

As excited as we are to put people inside of spaceships

or on the surface of planets,

we're equally as excited about using this technology

in a retail context.

So this entire Best Buy was captured using GoPros

to create 150,000 square foot digital asset

that we can now shoot inside of.

If I wanted to swap out the branding on this end cap,

with a click of a button, I can go from Samsung to Meta.

You can now do these changes,

have them propagate across the entire environment,

all with a click of a button.
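The one-click rebrand works because set pieces reference shared materials by name instead of owning private copies, so a single swap propagates everywhere. A minimal sketch with invented names:

```python
def rebrand(world, material_name, new_texture):
    """Swap a shared material's texture and return every object that
    picks up the change. Because objects reference the material by
    name, the swap propagates across the whole environment at once."""
    world["materials"][material_name]["texture"] = new_texture
    return [obj for obj in world["objects"] if obj["material"] == material_name]

# Invented slice of the Best Buy digital twin.
store = {
    "materials": {
        "endcap_brand": {"texture": "samsung_logo.png"},
        "shelf_metal": {"texture": "brushed_metal.png"},
    },
    "objects": [
        {"id": "endcap_a", "material": "endcap_brand"},
        {"id": "endcap_b", "material": "endcap_brand"},
        {"id": "shelf_1", "material": "shelf_metal"},
    ],
}
touched = rebrand(store, "endcap_brand", "meta_logo.png")
```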

Because these are digital worlds,

we have a unique opportunity

to engage with the viewer in new ways.

So we've developed some parallel technology

that allows the viewer to click on anything

they see on screen and trigger a secondary event.

If I as the viewer wanna click

on any or all of these titles,

I can bring up a trailer, tickets, branch a narrative,

mint an NFT,

then as the publisher of that content,

I get all of those behavioral metrics

that lets me better understand

what the viewer found most compelling.
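The behavioral metrics described above can be sketched as a simple tally of click events by on-screen target; the event fields and values here are hypothetical.

```python
from collections import Counter

def aggregate_clicks(events):
    """Count clicks per on-screen target so the publisher can see
    what viewers found most compelling."""
    return Counter(event["target"] for event in events)

# Invented click stream from viewers interacting with the screen.
events = [
    {"viewer": "a", "target": "poster_1", "action": "trailer"},
    {"viewer": "b", "target": "poster_1", "action": "tickets"},
    {"viewer": "a", "target": "poster_2", "action": "mint_nft"},
]
metrics = aggregate_clicks(events)
```

The same events could just as easily be grouped by action (trailer vs. tickets vs. mint) to see which secondary experiences drive engagement.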

So obviously we're excited about the application,

related to buying and selling and dropping,

but we've also really got our eye on the future of celebrity

and we are really excited to showcase something

that we have never shown anyone in public before.

And we are really focused on identity

and what that means, related to our face.

So imagine, if you will,

the ability for you to create a photorealistic version

of yourself, but this self can actually come to life

and speak in every language simultaneously.

So I'm happy to introduce FACELyft.

[Presenter] Picture this,

stunt doubles seamlessly replaced

with actors' faces in camera.

Our breakthrough AI technology

can digitally apply makeup in real time,

saving hours in the makeup chair.

By aggregating these efficiencies,

we're not just saving time,

we're revolutionizing cost effectiveness in production.

We are excited about integrating this

with a robust layer one blockchain.

This ensures that you have full ownership and control

over your digital likeness,

much like your social security number.

It's the convenience of a digital twin,

safeguarded with the utmost security

for your sensitive information.

Imagine a world where technology streamlines

the most time consuming aspects of film production.

We know this is a hot button topic

and we think it's gonna be incredibly disruptive:

a revolution around makeup,

cost-effectiveness in terms of special effects.

We definitely believe

that actors are gonna be very interested

in the ability to expand their role playing.

So if I'm a younger actor, being able to age up

and if I'm an older actor,

being able to play down, same body.

We think there's a lot of applications here

and I only have 10 minutes

and this is definitely a subject

that is ripe with information.

So I will go on to the final piece, distribution,

which is where we're seeing the next disruption:

this idea of being able to add a virtual layer

for the first time ever really,

a virtual layer in the physical world.

And this summer, we worked with Warner Bros. Studio

to launch their number one blockbuster movie.

Did anybody see, The Flash?

No... [audience laughing]

It's not my fault, I didn't make the movie.

But, to be honest, to their credit,

they had a major blockbuster movie

and they launched it traditionally and they said, Hey,

let's go and put it into cinemas and see how it does.

But they also said,

What if we launched it into a 4K NFT in a digital wallet,

full movie, ultra high definition,

playable on every device?

I can own the movie, but not only the movie,

I own the content and AR experiences all around it.

This experience, what they call the movie-verse one,

won the IBC Best in Show in September.

So the model of delivering a movie virtually into a wallet

is going to be a future thing.

It's just, we're looking forward to that.

And finally, my last thing I wanna talk about,

is something I'm really, really excited to almost announce

and it's a partnership with a major sporting league

about the future of fandom

and how we're gonna be able to bring fans in.

I'm a huge fan of sports,

and what we're working with this league on

is how do we connect fans with the teams

in ways that we haven't thought about.

So a lot of times, a lot of teams have access to their fans

only through the stadium,

but there's fans all over the world.

With our technology, our geo-technology

where we're able to kind of unlock AR portals,

we're gonna be able to create stories

where people can step inside the stadium,

from wherever they are

and that's something that's gonna come in January.

So we're very, very excited

about the potential of 4K portals

that are clickable, shoppable, explorable and happening.

This is a test, this is not 4K, but you get the idea.

Being able to put live action 4K streaming

inside of an AR portal.

[Audience Member 3] They had one of those in Europe,

where people from two different cities can do it.

Yeah, exactly.

She said that, They have one in Europe,

where people from two different cities can do it. Yes.

And we see this as a very, very big part of the future.

So I just wanna say the future is here.

Thank you WIRED.

I'm looking forward to seeing

what you guys are gonna be doing with tools like this

and happy 30th and I appreciate your time.

Thank you so much. [audience applauding]