Show Notes

What is synthetic media, you ask?

Well, it’s been in the headlines as deepfakes: videos of famous, influential figures that appear to be saying controversial statements, raising concerns over what’s real and what’s not. While these fake videos are scary, there is a company that’s putting a positive spin on this technology.

Online learning, movie production, and media creation are some of the most logistically complex tasks that businesses have to manage. From translating training videos to last-minute script changes for big-screen productions, media creation has thousands of moving parts to orchestrate.

In this episode we’re joined by Victor Riparbelli, co-founder and CEO of Synthesia, an AI-driven visual synthesis company that’s bringing amazing new capabilities to content creators. Tune in to learn how exactly this technology works and how media creation will never be the same again.

Transcript

David Beckham [00:00:00] Malaria isn't just any disease, it's the deadliest disease that's ever been... [clip continues with Beckham speaking in multiple languages]


Adrian Grobelny [00:00:29] That was David Beckham speaking nine languages artificially as part of a campaign to raise awareness for malaria. Artificial intelligence was used to make it appear as though Beckham could speak all of these languages by syncing his lips and facial movements. We live in a time where A.I. is everywhere. It's taking over our jobs, driving our cars for us, and now has the ability to create synthetic media of ourselves. What is synthetic media, you ask? Well, it's been in the headlines as deepfakes, videos of famous, influential figures that appear to be saying controversial statements, which raises concerns over what's real and what's not. While these fake videos are scary, there is a company that's putting a positive spin on this technology. Online learning, movie production and media creation are some of the most complex tasks that businesses have to maneuver the logistics for, from translating training videos to last-minute changes in scripts for big-screen productions. Media creation has thousands of moving parts that must be orchestrated beautifully. Today on Things Have Changed, we're joined by Victor Riparbelli, co-founder and CEO of Synthesia, an AI-driven visual synthesis company that brings amazing new capabilities to content creators. Tune in to learn how exactly this technology works and how media creation will never be the same again. 


Jed Tabernero [00:02:13] Welcome to THC, where we unpack the ever-changing technology economy 


Adrian Grobelny [00:02:19] hangout with Jed, Shikher, and Adrian as we tackle the industries of tomorrow. 


Shikher Bhandary [00:02:24] This is things have changed 


Victor Riparbelli [00:02:31] My name is Victor, and I'm the CEO and one of the co-founders of Synthesia. At Synthesia we are tackling, I would say, kind of a subset of the mantra in Silicon Valley that software is eating the world. What we're building is software that eats the camera. Essentially, what we're doing is turning media production, particularly video, into something that you can program with code and computers rather than something that you have to record with cameras and microphones. That's the big idea that we're building our company around. If you take traditional media and video production today, you would have someone that writes the script, you have a camera, a studio, an actor, then post-production, and then you have a piece of content that's ready. That process is extremely long. It requires lots of different people with lots of different skill sets. And that means that video doesn't scale. And the issue with that is that we're in a world now where video is already everywhere. Video has taken over the Internet. If you look at the younger generation, everything is video. I have two younger brothers and they'll FaceTime instead of calling me because there's video in it. We have TikTok, the world's fastest growing social network, which is more or less video only; there's almost no text in the interface. So we're moving into this world where the video economy is going to dominate everything. And I think it feels like video is everything right now, but I think we've only scratched the surface of just how much of our digital communication is going to be video driven over the next 10 years. And that's the problem we're coming in to solve, because if that's going to happen, we need a more scalable way of producing video content. We've been on a journey, you know, with cameras in particular, that has lasted hundreds of years. If you plot the kind of very rough timelines, you have analog cameras that were invented somewhere around the beginning of the nineteen hundreds. Then we had digital cameras, which came out around 1990 and turned creating video into a digital process, which made it much easier. We then had smartphones, which is kind of where we are today. We all have smartphones in our pockets and they all contain cameras. In general, cameras have now become incredibly cheap, incredibly small, and they're everywhere. But they're still cameras. They're still tied to a physical process. And as long as a camera is tied to a physical process, it's not going to scale like we know software can scale. And that's really the problem we are coming in to solve. So today what we're doing is we're helping content creators in online learning, for example, or marketing, to create talking-head-style videos really fast and easily. You go to the platform, you select one of our presenters, or you can upload yourself as well, and then you just type in your script and we'll output a talking-head-style video of that person delivering that message to the camera. As for the details of, you know, why is that so valuable? It's really quite simple. If you watch a video, on average you'll remember something like 80 percent of the content in that video. If you read something in text, you'll remember about nine percent of it. So everybody wants to create video. But the problem with video is just that it is so difficult and expensive to create that it's just not feasible. But with these types of technologies, we can actually start to turn text into video.
That's the way most of our clients are using our technology: less as a replacement for video production and more as an alternative to text. What we're seeing is that our creators are taking things which would have been a PDF document or an email, for example, and they're creating video content instead. And that is just proving to be a much better way of communicating, no matter if you are a Fortune 100 company that we work with or if you are an individual creator sitting somewhere in India or Brazil, for example, creating online learning content for your students.
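To make the workflow Victor describes here concrete (pick a presenter, type a script, get back a rendered talking-head video), here is a minimal sketch of what driving a text-to-video service from code might look like. The endpoint, field names, and helper functions are hypothetical illustrations, not Synthesia's actual API.

```python
import time

import requests

API_BASE = "https://api.example-video-platform.com/v1"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"  # placeholder credential


def create_video(script: str, presenter: str, language: str = "en") -> str:
    """Submit a script and presenter choice; return the ID of the render job.

    All endpoint paths and field names here are illustrative, not a real API.
    """
    resp = requests.post(
        f"{API_BASE}/videos",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"script": script, "presenter": presenter, "language": language},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["id"]


def wait_for_render(video_id: str, poll_seconds: int = 10) -> str:
    """Poll the job until it finishes and return the download URL."""
    while True:
        resp = requests.get(
            f"{API_BASE}/videos/{video_id}",
            headers={"Authorization": f"Bearer {API_KEY}"},
            timeout=30,
        )
        resp.raise_for_status()
        job = resp.json()
        if job["status"] == "complete":
            return job["download_url"]
        time.sleep(poll_seconds)


if __name__ == "__main__":
    job_id = create_video(
        script="Welcome to the team! This short video walks you through your first week.",
        presenter="anna",  # a stock presenter name, hypothetical
    )
    print("Video ready at:", wait_for_render(job_id))
```

The render itself would be asynchronous in practice, which is why the sketch polls for completion rather than expecting the video back in a single request.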


Adrian Grobelny [00:06:17] Awesome. Well, is that video really you, or is it an avatar?


Jed Tabernero [00:06:24] After watching all those videos on the Synthesia platform, I'm convinced that I don't have the skills yet. I'm not trained to understand what is an avatar and what is not. Because after seeing that, I was like, damn, we are at a different stage of technology. 


Victor Riparbelli [00:06:39] Yeah, yeah. No, this is my real image. But I get that a lot. And I think we're not far away from actually being able to have, like, a synthetic image in our video calls, so that is certainly something that's not that far away. But for now, I think these technologies, and that goes for both video and text generation for that matter, they work well. They are now ready for production, but in very constrained environments. So if you've played around with our platform and tested it on the website, you'll also see that there are limitations as to what the videos can do. If you're just looking at me right now, you know, my head is swaying, I'm at kind of weird angles, there's strange light on my face. Those things are still difficult for the computer to emulate. But that's obviously something that's going to be solved, not so far in the future. 


Shikher Bhandary [00:07:27] Yeah, yesterday we had, like, a get-together before the call just to try and see if everyone had come prepared, right? Jed was so into it that he created an account and he was giving us a walkthrough. And we were like, Jesus, when you add up the number of languages that the three of us know, it's, I think, eight or ten. So we were like, oh, you know what, we don't really need an app or software to make us talk in different languages, we can do it ourselves. And then we realized how seamless this was and we were like, OK, hang on, guys, check this out. Our podcast could just be three avatars talking with each other, with us just typing the text. 


Victor Riparbelli [00:08:11] Yeah, I mean, I don't think that's completely unrealistic. There are already companies that can clone your voice and also transform it into different languages. You can have your own voice speaking in Italian or French or Chinese. It's definitely coming 


Shikher Bhandary [00:08:26] into that sector. So maybe let's just start with, you know, the whole process of ideation. Were you super into foreign-language films, and you were like, you know what, I don't understand this, I can clearly tell this guy is speaking a different language, and I need to create something that fixes it, right? 


Jed Tabernero [00:08:45] How many dubbed movies did you watch? And just be like, damn, I hate this. 


Victor Riparbelli [00:08:52] Well, it's an interesting question. So I am from Denmark. I grew up in Copenhagen, which is, for those who don't know, a very small country up in the Nordics, about five million people. And it's interesting how the economics of that just impact the way you live your life and the way you consume media. So in Denmark, we're too small a market for anyone to even dub films. Kids' TV gets dubbed, but everything else is subtitles only. If you go to a market like Germany, for example, they dub everything, because the economics work: it's worth investing enough money into dubbing a movie to get that little bit of extra engagement. So I'd been watching a lot of dubbed kids' TV and stuff like that growing up, but I wouldn't say it was like, no, I just have to solve this problem. I think it was part of a much more high-level strategy around how do we really bring this type of technology, synthesized video, to market. This was one step of it, where there was a very immediate and real problem that we could solve within the constraints of the technology at the time. So we think of it like different chapters of building something bigger. And, you know, we are on a very long journey. The way I usually phrase this is: the first big thing we became famous for was David Beckham speaking nine different languages. You might have seen that one, which was still kind of pretty constrained in terms of what you could do with it. But, you know, in ten years' time, we want to enable a kid sitting in their bedroom somewhere to create a Hollywood film on just their laptop, without anything else. 


Adrian Grobelny [00:10:26] Can you go over the different products that you guys are focused on, and why do you feel that those products are the most important to focus on in your early stages and have the most impact in synthesizing video?


Shikher Bhandary [00:10:40] Like, you have the creation tool, then you have the API integration and all that stuff. So I would love to understand the breakdown.


Victor Riparbelli [00:10:48] Where we want to go in a 10-year window or something like that is, we want to be able to generate any video that you can see today on YouTube or in the cinema or whatever from just a laptop. We want someone to be able to code up a video, kind of like that. So the product that we have today is an online software-as-a-service product where you can sign up for 30 dollars, log in, and get a selection of actors that you can use to create video content. And what the platform gives you access to is really just creating videos from your desktop, rather than something you need a camera, a studio and post-production to do. The target that we have right now is mostly clients that just want to turn all their digital communication into video. And that is a lot of people who sit and create learning and development content, for example, or sales enablement content, anywhere there is information that you have to give to other people and text is just not a great medium for it.


Adrian Grobelny [00:11:43] Training is the most expensive onboarding cost for bringing on new employees. You know, I've gone through three or four different companies in the last two years, and so I've cost a lot of companies a lot of money, because I'm doing a lot of training and trying to learn all the new things. And, you know, they have the modules, they have people going over materials, just learning all the company-wide standards. And a lot of times that's an actual person that's there talking about a topic so that you can actually pay attention, versus just reading those five pages of PDF, which everyone's going to glaze over and not really dig into the details of. So I can see that really having a big impact and helping companies, because I can't imagine how expensive it is to hire these actors, create these scripts, and then record it. If you have a bank in a different country, changing those languages into that language, in that area, potentially having new actors, logistically it's very difficult. And the costs that come with it are enormous. 


Victor Riparbelli [00:12:47] Exactly. And I would also just add to that, I think the cost is certainly an element, but also just the complexity. It's kind of interesting because I think there is this general idea that, you know, today it's really easy to make video content. We all have smartphones and, you know, you just record yourself, you put it on LinkedIn or your internal systems, whatever you're doing. But anyone that has actually tried to create a corporate video will know how much pain there actually is in creating a corporate video. There's a reason that shooting video is a craft that people go to school for many years to do. You then want to shoot the content, assuming that whoever you are actually recording can record coherent takes, which is so much harder than it sounds. Then you have the process of bringing that into a video editing program, which, depending on your level of proficiency, will be more or less advanced. That takes time. You have to transfer the files from your camera or your iPhone, you have to sit and edit it, you have to render it out, you have to host it somewhere. The process is just so long, and once you've recorded it, you cannot change it. And that's actually what we're hearing the most in the market from our customers: the fact that once you've recorded something, it's done. You can't change it; you have to redo the whole process if something changes in your company, or all of a sudden your headquarters say, we don't like the way you're phrasing this particular thing. And that's because video today is a linear medium. So the ability to use it the way you can in Synthesia, where you just jump back and forth, edit the script a little bit, render it out, see what it looks like, this kind of workflow, which we're used to when we work with text, for example, is just so much more efficient. 


Jed Tabernero [00:14:29] Agreed, and I mean, just to tie out that use case, right? I work at a company that employs over 700,000 people, and the stats say that if you have, for example, one person who's making fifty thousand dollars a year and you lose that person in that year, it's going to cost you at least 10 percent of what you were paying that person. And the higher you go on the ladder, the more expensive that is. And guess what? The number one reason for immediate employee turnover is a bad onboarding experience. So, I mean, I'm not trying to harp on the onboarding here, but I'm just telling you, at scale, with a company that employs eight hundred thousand people, that could be a lot of turnover and churn. This is the most efficient way to do it, since we change our operations at a large scale, right, every second. 


Victor Riparbelli [00:15:20] And I think, I mean, obviously, let's be honest, right, no one thinks compliance courses are incredibly fun to do, but they are very important. And when you look at the scale you're talking about here, this type of technology we're developing gets really interesting. I mentioned the API in passing before, which is something we're launching big time in the coming months, and we've run pilot projects already. The interesting thing here is that if you go to our platform today, it's what we think of as linear video production, because a human being sits down in front of the computer and types out the video. It's kind of like creating a PowerPoint presentation in terms of level of simplicity. Once you're done with your video, you render it out and you use it for what you need to use it for. But because with our technology video is now software code, that means we get all the benefits of working with software code. So we can scale it infinitely for very low cost. So what we're getting to now is these API- and personalization-driven use cases. If we take the example of working in an eight-hundred-thousand-person company, there are a lot of people who are going to be watching compliance courses and training courses every year, but they're all different. They have different levels of proficiency in how technical they are, how long they've been with the company, you know, what language they prefer, which role, which department they work in, and all these types of things that are different for all of us. And then you have the company-wide specifics: if you recorded the compliance course a year ago, that might be outdated already. What we can do with our new technology is that every time someone watches a course, we have some data points around them, and the video will be tailored to them. We did this for a company with 50,000 employees where essentially all the training content would change depending on who's watching. So what role do you have, a technical role or a commercial role? What's your proficiency like with these specific applications, and so on. So people who already knew the details of how something worked would not be bored with content that is, for them, very, very obvious, and the other way around: people who maybe have a hard time understanding something that's a little bit more technical would be watching videos that took it down a notch in terms of how they explain it. I think that gets more interesting, right, because it also means that hopefully we can make things like compliance courses less boring and much more relevant in the future. 
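As a rough illustration of the personalization Victor describes (the same course rendered differently per viewer, driven by data points such as role, tenure, and preferred language), here is a sketch that fills a script template from an employee record and requests one render per employee. The `Employee` fields, the script fragments, and the `create_video` call are all assumptions for illustration, not a real product's API.

```python
from dataclasses import dataclass


def create_video(script: str, presenter: str, language: str) -> str:
    """Stand-in for the hypothetical render call sketched earlier."""
    return f"job-{abs(hash((script, presenter, language))) % 10_000:04d}"


@dataclass
class Employee:
    name: str
    role: str           # e.g. "technical" or "commercial"
    language: str       # preferred language code
    tenure_years: int


# Script fragments keyed by viewer attributes; in practice these would live
# in a content system, not in code.
INTRO_BY_ROLE = {
    "technical": "This module goes straight into the implementation details.",
    "commercial": "This module focuses on what the policy means for customer conversations.",
}


def build_script(emp: Employee) -> str:
    """Assemble a per-viewer script from the same underlying course content."""
    depth = "a quick refresher" if emp.tenure_years >= 2 else "a full walkthrough"
    return (
        f"Hi {emp.name}, here is {depth} of this year's compliance training. "
        f"{INTRO_BY_ROLE[emp.role]}"
    )


def render_course_for(employees: list[Employee]) -> dict[str, str]:
    """Request one tailored render per employee; return name -> render job id."""
    return {
        emp.name: create_video(build_script(emp), presenter="anna", language=emp.language)
        for emp in employees
    }


if __name__ == "__main__":
    staff = [
        Employee("Ada", "technical", "en", 4),
        Employee("Jonas", "commercial", "da", 0),
    ]
    print(render_course_for(staff))
```

Because the video is generated from code, adding one more data point (say, department) only means extending the script template, not re-shooting anything.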


Shikher Bhandary [00:17:38] What's the hardest part of that video creation process, like, you know, behind the scenes? Because one thing we were noticing yesterday was, when they're speaking, if you didn't know this was generated, you would not realize. But then when you're looking at specific points, like the eyebrows and the eyes, the movement of the head, you know, there are subtle things where you can be like, OK, I can see how this could be a synthetic creation. So what are the big complexities that you guys are solving to get to a stage where it's just seamless? 


Victor Riparbelli [00:18:13] Well, the idea of digital humans has been around for a very long time. This is not the first time we're seeing it; we've had the concept of digital humans all the way back to the 90s, and you've been watching them in films for the last 20 years. And the way we've always created digital humans, and here we're talking about what we call photo-real, which is what we would be doing with Synthesia, Hollywood is a great way to explain this. From Hollywood back in the day until today, the way you create a digital human is that you have a team of artists. They will sit down, they will look at videos of you, they'll probably capture a lot of data, you know, with cameras in sort of an array around you. And then a person, or a team, will sit and build a 3D model of your face, for example. They will then skin it and, you know, paint onto it so it looks like you. And all of this is a manual process that takes a very long time, and it's very, very expensive because it's humans doing this, and it's just impossible to make it look one hundred percent real. Even if you go to a really good Hollywood film today, you can generally tell that it's a digital human, to some degree. And that's because humans are incredibly good at spotting just the slightest oddities, right? So when you watch, like, a Star Wars film, for example, it's hard to pinpoint exactly what it is, but there's just something about the way that a character blinks and moves, or the synchronization of the lips when it's talking. There's just something that's a little bit off. And that's because it's impossible to model by hand, so to speak, every little detail that a human does, because there are so many parameters, if you will. What A.I. and deep learning have done is change that game. So rather than having a team of human artists sit down and try to model what the real world looks like, we can now have algorithms and systems that, instead of trying to emulate it directly, can learn what the real world looks like. So when you create an avatar on our platform, the way you do that is that you supply us with some minutes of footage of you speaking. And what the system actually does is that it kind of learns how you speak in this particular scene. That means that we can go from only having the brainpower of, like, 40 really, really good visual effects artists sitting and trying to model what you look like when you speak, and doing that in a very realistic way, to having a system which can have billions and billions and billions of parameters in its mind and can emulate what the real world looks like from this training data. And that is the hardest part. The hardest part is making it look real. So, as you mentioned, sometimes you might think that the lip synchronization is a little bit off, or the way that they move their head is a little bit off. And that last one percent is by far the hardest part, because we are so good as humans at spotting these small mistakes. 


Shikher Bhandary [00:21:12] So you mentioned you ask the person to say a few words. So what are those words? Are there, like, super important words, like Coca-Cola or Synthesia, you know, where you want them to move their lips around quite a bit? Victor Riparbelli, you know, that kind of stuff. What's the thought process there? Because it can't just be, like, a cat or a ball. 


Victor Riparbelli [00:21:39] No, what we actually ask you to do is to speak like you would speak in the video. So we don't want you to be overly emotive or really sad or really happy; we really just ask you to speak in a professional, video-like manner to the camera for a couple of minutes, because whatever you send us as the input is going to determine the output, essentially. And for most of the use cases that we have, we were speaking about compliance videos, it's generally this neutral, information-delivery style of speaking to the camera, so that's what you deliver to us today. But as we move forward with the technology, things like adding more emotions and expressions and hand gestures are certainly going to come into play, and that might require us to have more kinds of different takes of you laughing and smiling and saying something in a happy or sad way. It's actually quite funny: we've tried to experiment with this in the past, as you mentioned, having people say the same sentence with different emotions. And it's something that sounds easy on paper and is incredibly difficult to do in reality, because when people are asked to do it, it's not natural at all. If I ask you to say something in a sad way, it's not how you would say it if you were actually sad. So this is in itself a really interesting problem, and it actually makes it quite hard to train these systems to accurately replicate happy, sad, those types of emotions, because if every time you capture that data the person is acting and looking weird, then that's what the system is going to replicate. 


Jed Tabernero [00:23:12] Yeah, I have an idea for that. All you got to do is when you start them on the process, you got to actually scare them. You got to actually make them cry, you know, so when they start that 15 minute process, you start out with frickin zombies and shit like that, they will give you real, genuine emotion. 


Victor Riparbelli [00:23:30] I have a funny story, because this is like way, way back, where we were trying to figure out a solution to this at one point. One of the guys on the team, a great guy, had this idea of showing people pictures that would put them in the emotional state they were required to be in, and there was some shit like that to make them look depressed and sad. I think one of the pictures was like someone lying on a train track, and it was like, no way, are you kidding? And it was one of my friends who was, like, the face of it at that point, and she was like, what the fuck? I think anyone who has played around with things like GPT-3, which is text generation, or our system for that matter, knows there's a long way to go. Humans are incredibly complex creatures, and emulating what we do and how we think and how we would write an article or appear in a video is just so difficult. There are so many parameters and so much unknown information that a computer just cannot comprehend at this stage. It'll probably get a lot better, and I don't think the goal necessarily is to emulate humans one to one; I don't think that's ever going to happen. I think my definition of synthetic media, you kind of mentioned it at the beginning, has probably now changed a little bit to be media that's fully or partially generated by A.I., but that is trying to mimic humans in some way or form. Because you could also say that a computer drawing a shape on a screen is an emulation of a human drawing a shape on a screen, but that's pretty boring, right? Where it gets kind of weird is when it's a voice that sounds like a human, or it's a video that looks like a human, or it's text which sounds like it's actually written by a human being. We're still so early in synthetic media as an industry, but it's really interesting and exciting just how much wind it's taken over the last six to 12 months and how much these systems are appearing in production. 


Jed Tabernero [00:25:37] Yeah, and, you know, I think at this point we've pretty much established that it makes sense. I mean, the use cases that we've studied, at least through the research of this company, the cost savings and time savings, it's a no-brainer. So I think people just need to be more educated about what kind of use cases this can help with, because right now their idea is like, oh, deepfakes and all this crap. You know, that was the very first inkling that we had as well, because that's all the exposure that we've had in the media, if you think about it. But one of the interesting use cases that was on the website, which I didn't really see in other places, was the Snoop Dogg use case, where he had done a commercial for an Australian company, I think it's called Just Eat, and they wanted to change, I think, him saying "Just Eat" to "Menulog" or something like that. And that was a use case where, you know, it's not the main business product of Synthesia. Like, that's not what I'm seeing as the product. But that was an interesting use case for me to see, like, oh, this is a huge problem when it comes up, and this is something that can be solved by this type of technology, where the alternative would have been so expensive. Imagine getting Snoop Dogg to do that whole thing again. That would be a lot of expense. Just that one use case, I think, is super interesting. So just a quick question for you before we jump into that: is this a common occurrence in the industry? Because, like I said, I didn't see any advertisements for this other type of use case for your products. So it must not be that common, or is it just not getting engagement? 


Victor Riparbelli [00:27:29] No, it is getting more and more common. It's a product that's not as front-page on our website. Essentially, you know, we've been going for four years now, and the first iteration of our product was video editing. So that's what this, and the David Beckham one, I think, falls under: essentially taking an existing video clip and changing the speech in parts of it. So, you know, you've seen the Beckham and the Snoop Dogg ones. We work with lots of advertising agencies where they will shoot one advert in English and then they'll translate it into all the languages in continental Europe, for example. Or you need to change something, you want to change the slogan or a particular line in the advert, and you can use our technology to do that. That is something that's out there and in production; we have several agencies that have direct access to the technology and are using it for that. What we've moved into now is not editing video, but actually generating video from scratch. And that's where we can really scale, because editing video is a fantastic use case, but it's difficult to scale because there's a high degree of complexity; every single new video clip that you take in is essentially a different problem for the algorithm. So while you can ultimately scale it in certain ways, it's difficult to drive a venture-backed technology company on that technology alone. So that was always just a stepping stone that we needed, from a tech perspective and also to build a brand and get customers, to get to the point where we can generate video from scratch, which is what we're doing today. 


Jed Tabernero [00:29:07] I'm imagining my kids growing up in a world with all this generated content, right? They'll never know that Snoop Dogg was ever American if Snoop Dogg was speaking Tagalog, my home language, the entire time they're exposed to Snoop Dogg. So it's just an interesting use case for me. 


Shikher Bhandary [00:29:24] That'd be sick. Firstly, the 


Adrian Grobelny [00:29:28] languages 


Jed Tabernero [00:29:29] Like my whole life, Snoop Dogg has spoken Tagalog. It's like, oh, dude, I don't know what the hell you're talking about. He's not American. 


Victor Riparbelli [00:29:37] No, that's going to happen. And I think that's very interesting, how we could break down the barriers of culture. As I mentioned, you know, I'm from Denmark, and it's interesting how the dynamics of the media world are obviously very dependent on volume and viewership. So if you take a Danish film, the audience for that Danish film is just inherently limited, because if it's in Danish, there are five million people who can watch it natively. If you go to the US, people do not like watching subtitled films, so that market is very, very niche. And if these technologies were at the point, you know, where you could just translate a whole film, it's very interesting, because potentially a Danish film can compete with something that's produced in America, because it'll have the same appeal to an American, English-speaking audience. So I think it's so interesting with these technologies, right, because right now we're trying to imagine the first-order effects, what this will mean. But on a longer time scale, I think what's much more interesting is the second- and third-order effects of these types of technologies. Maybe synthetic media is a technology that actually makes it possible for the rest of the world to compete with Hollywood in a meaningful way. And what does that mean for the entertainment business and how we think of talent and rights and IP and all these things? I don't have the answers. I think we have a good idea of where we're going, but this is what gets me up every morning. What excites me is not so much necessarily what we can do today; it's that I think this is going to be a paradigm shift in how we produce media. And right now the conversation is very focused, in the media, especially four years ago, on the whole deepfake thing and what's going to happen, and that's fine; that's how every new technology that's powerful comes to market. But if you start to think this out, it's going to impact and disrupt the media business in a way I don't think we can imagine yet. 


Adrian Grobelny [00:31:23] So you mentioned the film industry and how this is really going to revolutionize and change the way they create these movies. I mean, we've had such CGI-heavy movies that have cost in the billions, like Avatar, I think, was one of the most expensive and most CGI-focused movies; they couldn't even develop it when they came up with the storyline, they had to wait for the technology to catch up and eventually have the technology to create this whole world. So I wanted to get your sense on how the film industry is reacting to this. Are they very open and accepting of it, or do they see this as a threat, where we can now create films from our desktops and laptops and really be able to compete with these big-budget, big-studio production companies? 


Victor Riparbelli [00:32:15] Well, I think we're so far away from that scenario still that it's kind of incomprehensible for most people. We're far away from being serious competition to Hollywood; that's probably 10 years in the future. I think we're still at the point with Hollywood and the traditional media industry where they see this as something cool and fun and something to get them some PR. So, you know, there's this movie, The Irishman, that came out, I think it was last year, with Robert De Niro, and I was like, oh, did they deepfake him? And it turns out they did not deepfake him at all; they used traditional visual effects, because that's what they've always done. But it was spotted as looking a bit fake, which gathered some, you know, some interesting reactions. And then there was a deepfake guy on YouTube who actually used deepfake technology, and it looked twice as good as what the studio had spent millions of dollars doing. So I think it's still far away for these guys; they're not thinking about this as a threat right now. I'm sure that artists and visual effects houses are interested in how this can help their pipeline and, you know, make their job easier. But I think what's going to happen here is it's not going to be Hollywood adopting this stuff. It's going to be a grassroots movement of, you know, some random guy in Denmark creating a fully CGI film from his laptop. And it doesn't look nearly as good as a Hollywood film, but there's an audience for it, and from there it'll grow and grow and grow. And I think if you look at most new types of media, it's never the linear thing that happens. You'd think Hollywood would be excited about being able to create films much more easily, but we know that there's such a big cultural barrier to accepting new workflows, new technologies. I always think of something like Twitch. Who would have thought that when everyone had access to broadcasting technology, we would be watching other people play computer games? This has nothing to do with the traditional industry. And if you look at music, you know, I'm a hobbyist music producer, and if you look at the effect of having Ableton Live and synthesizers, essentially software that can make music, the effect of that was not that all the big artists that you listen to on the Billboard top hundred made more music or better music, probably to a degree they did. What was really interesting was we got SoundCloud, we got the idea of SoundCloud rappers, people coming from nowhere and making music that sounds really good. You have, you know, Swedish artists becoming household names. You have this whole movement of people that were outside the traditional industry, and they could compete. The traditional industry didn't really want to talk to them, but eventually they had to, because there was a big audience for this type of stuff. And I think we're going to see the same thing here. I don't think we're going to see the death of Hollywood; I think we're still going to be producing massive Hollywood blockbusters for a very, very, very long time. But there's going to be this other genre of entertainment which is generated with these technologies. I think that's how we're going to see this play out. 


Shikher Bhandary [00:35:14] To touch on that a bit, just thinking about streaming: I mean, who were the last people to get on board with streaming? It was the Hollywood industry. HBO Max came out like two months back, right, after seven years of just total domination by Netflix. So I think they see it more as a threat than as something to maybe partner up with: it's going to impact our IP, it's going to impact our revenues. And you can probably bucket this technology in the same way. Kind of, maybe we don't need to stretch a shoot to five days where you're trying to get one scene right; maybe there's a way we could just use this technology to finish that one scene. But how much backlash? The actors are going to be like, hey, wait, hang on, I didn't see it that way, and I emote differently, things like that. It's a gray area, where the actors are your IP, at least for the movie industry, so they need to keep them happy. So, yeah, it's so interesting how this will play out. 


Victor Riparbelli [00:36:17] That's already happening. So if you take something like dubbing, for example, there are many, many actors today who don't want their movies dubbed, because it's not them; they don't like the way the dubbing actors sound. Or it has to be this one particular guy. Funnily enough, there's a guy in China, for example, who is, I forget which actor it is, a really big actor, and he is the guy that always does the voice of this actor, because this well-known Hollywood actor only wants this guy to do his voice. Which means that the guy doing this voice in China is now a multimillionaire, because he can demand more or less any price for his work, because they don't want anyone else. And when you start to look at all the incentives there are in the Hollywood industry today for accepting something like this technology, I just think it's going to be so far out. Hollywood is not going to be the first to adopt it. They'll probably adopt smaller things within it, like a particular tool to, you know, easily make a 3D scene or something like that. But the idea of generating film and content entirely without cameras and actors and studios, I don't see that coming from Hollywood. I think it's going to be just like a grassroots movement. And probably the first content that comes out of this, people are going to be like, that's really weird, I don't like that, I think it's uncanny valley. But there are going to be people who like it, and that amount of people is going to become bigger and bigger and bigger, just like we're seeing with something like Twitch, for example, which people sort of ridiculed when it started. Now it's quite normal; I mean, even in Denmark, we have TV stations that stream Counter-Strike, for example. That would have been unheard of 10 years ago. But to that point, I think the disruption that this stuff is going to bring is very interesting. I'm a bit of a student of technology history, and I find it deeply fascinating, especially developing a technology like the one we're doing. If you look back in time, every time a new, powerful technology comes out, we see the same thing play out. When I was a kid, for example, there was Napster and Kazaa and LimeWire and all these file-sharing services. 


Shikher Bhandary [00:38:22] And that's not a long time ago. 


Victor Riparbelli [00:38:23] Yeah, yeah. That was a fun time. I remember back then, that was when you didn't have access to everything at the click of a button. But there was something magical about that, I think, actually 


Shikher Bhandary [00:38:33] I downloaded 50 Cent's "In Da Club" online, and in my mind, the fact that I could do something like that was so revolutionary. Expanding a bit from the entertainment industry: we spoke about software as a service, more to do with businesses and training, and we spoke about the applications with regard to movies and commercials. What else are you guys looking at? We saw a really interesting use case, the video with the Weather Channel. That's super cool, because you can just take data straight from an API or the Weather Channel and integrate it into a video that's automatically created to provide you weather forecasts in your specific region, say a really small town in Denmark. It's not going to make national news, but it's something that's still very important. 


Victor Riparbelli [00:39:36] That's what it comes back to with this technology and how we see it being used. This is not a replacement for traditional video production; I don't think that's how it's going to play out, and it's not what we're seeing. What we're seeing is that with these technologies, we can now turn a much larger share of the world's information into video content. So if you take the Weather Channel example: today, every day at, whatever, five or six p.m., there's a weather forecast for whatever area you live in, and that's all great. But what can be packed into that weather forecast is limited, because you can't record a million different weather forecasts for people who live in different cities and speak different languages and stuff like that. What we're going to be seeing with these technologies is that they are going to be used to create video content for the long tail. It's not going to be the 6 p.m. news. It's going to be, you know, let's say I live in London, and there are lots of people in London who are very interested in second-tier cricket in India, for example. Let's take that as an example. There are by no means the resources or budget to create video-based content around second-tier cricket in India; that's just not going to happen today. There's probably some table that you could go to, on the BBC or wherever, of who scored what; it's purely text today. What these technologies allow is, let's take all that data and then let's create, you know, sports news about second-tier cricket in India in 50 different languages every time a new match has been played. And we can do that continually, over and over again, because it's code, right? It's software. It scales. And that's where we're going to see this stuff implemented the most. 
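To sketch the long-tail, data-driven use case Victor describes (a short recap video for every match in a niche cricket league, in many languages, generated automatically whenever new results come in), the loop below turns structured match data into scripts and queues one render per match and language. The data shape, language codes, and presenter name are illustrative assumptions, not any real feed or API.

```python
from dataclasses import dataclass

LANGUAGES = ["en", "hi", "da"]  # illustrative subset of target languages


@dataclass
class MatchResult:
    home: str
    away: str
    home_score: int
    away_score: int


def recap_script(m: MatchResult) -> str:
    """Turn one structured match result into a short spoken recap."""
    if m.home_score == m.away_score:
        outcome = f"{m.home} and {m.away} finished level on {m.home_score}."
    else:
        winner, loser = (m.home, m.away) if m.home_score > m.away_score else (m.away, m.home)
        outcome = (
            f"{winner} beat {loser}, "
            f"{max(m.home_score, m.away_score)} to {min(m.home_score, m.away_score)}."
        )
    return f"Here is your latest match update. {outcome}"


def queue_renders(results: list[MatchResult]) -> list[dict]:
    """Build one render request per match and language -- video as code, so it scales."""
    return [
        {
            "script": recap_script(match),
            "language": lang,
            "presenter": "sports_anchor",  # hypothetical presenter name
        }
        for match in results
        for lang in LANGUAGES
    ]


if __name__ == "__main__":
    feed = [MatchResult("Mumbai", "Chennai", 182, 176)]
    for request in queue_renders(feed):
        print(request["language"], "->", request["script"])
```

The point is the shape of the pipeline: data in, scripts templated, renders requested in a loop, so the marginal cost of one more language or one more town is close to zero.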


Jed Tabernero [00:41:09] You know, while we're talking about all of the applications: typically with this topic, the word that we've kind of lightly mentioned across the call is, you know, deepfakes, and the other ways that the public currently perceives this type of technology. And there are limitations to their understanding of what the positive things are that this technology can bring. When I look at Synthesia's website, I go to the About page and I try to learn about the company; that's what we typically do here at Things Have Changed. And the first thing that I noticed is ethics. Ethics comes up as number one on your About page. That gives me some comfort that, looking into this type of technology, I don't immediately think, oh my gosh, everything is going to turn into something that's going to ruin our lives or whatever. It gives me some comfort that you understand that the public's perception is kind of misplaced at this point, in your point of view. And you mentioned people first, you know, like you want to make sure that the technology is driven by humans and what humans need it to do. And Silicon Valley, and the entire tech industry in general, is getting called into question by governments all over the world, where taxes are coming up, new regulation is coming up, because of the issues that we've had so far. So just one question to start this conversation off: beyond writing these ethical principles on the website and educating people about what the ethics should be, how do you ensure that people are comfortable participating in this kind of technology? 


Victor Riparbelli [00:42:51] First of all, we have a few ground rules. We never, ever synthesize anyone without consent. That sounds kind of obvious, but it actually really isn't; we have a lot of competitors who have synthesized Obama or Trump or something like that, not to be malicious, but just to kind of show people this technology exists and what you can do with this stuff. I'm not going to be the moral arbiter here, but for us it's just been a rule from the beginning: we do not use our technology on anyone that has not given consent, period. And in terms of moving forward, what most people are sort of afraid of is, you know, these videos of politicians, whatever, saying things that they didn't say. In our product today, our SaaS platform, it's a relatively easy thing to overcome. It's a pretty constrained environment: you need to talk to a salesperson and be a real company to upload an avatar, and we require proof that it's actually you, a script that you read out, and things like that. So we have technical checks and balances in place to make sure that you can't create content with someone that has not given their consent. And as we scale up the company, we're going to be implementing essentially KYC-style checks. So, you know, when you sign up for a bank account, for example, you have to do the thing where you hold up or take a picture of yourself with your passport and things like that. That's the kind of thing we can do when you create a new avatar, to ensure that the avatar is actually you. But the more interesting ethical question for us moving forward is the same thing that lots of platforms struggle with. Video is a really effective means of communication, for good but also for bad. What type of content do we want to allow people to create? I think that's going to be the real ethical question; the identity stuff we can definitely, like, build our way out of. But if someone comes in and wants to create, say, QAnon content on our platform, should we not allow that? Should we allow that? Where do we draw the line? Obviously, hate speech or things like that you cannot create on the platform; we would not allow that. But where do we draw that line? That's an incredibly difficult question that I don't think anyone has an answer for. 


Shikher Bhandary [00:44:50] It's such a difficult discussion, especially during these times. It's like, are you censoring by not having it? But if it's hate, it's not really free speech, you know. It's just so hard; it's crazy difficult. 


Victor Riparbelli [00:45:01] And we've had political parties approach us. People always mention this: people are going to be creating videos of politicians saying things that they didn't say. That's the kind of poster-child example of why people think this can be harmful. I don't think that's going to be an issue at all, really; I really don't think that's what this technology is going to be used for. But the flipside of that, right, is that you can also create video content that explains politics to a segment of people who are not going to sit down and read your party program, which is ten pages long. And we've had approaches from a few political parties who just want to create, like, video content explaining politics. And I think, you know, in many respects that's fine. But again, the interesting ethical question in the future is going to be what kind of content you can create, i.e., what do we allow? When I started this company four years ago, as I said, a lot of people were like, you must be crazy. But I've always seen it as, this is the perfect environment to build something really big that no one expects. Like when cars came out, someone had to walk in front of the car waving a flag to let people know that there's a car. This is a continuous theme in the history of humans: new technology comes out, we're afraid of it, and we probably overcompensate a little bit in how afraid we are. I think that's what a lot of people think about things like deepfakes: they imagine that the whole world stops while deepfakes just evolve over the next five years, and then everyone can create videos of people saying things that they didn't say. In reality, what happens is that the whole of society evolves with it. It's a natural thing, and definitely it's going to be used for bad, but we have all these other factors that we don't know yet that contribute. So one thing I think is interesting already today is that if you look at the statistics, for example, I forget what the number is, but I think it's something like a bit less than one hundred thousand deepfakes out there, or something like that. But if you look at the amount of positive examples, that is probably already a thousand times that. If you just look at what we do and what a company like Reface does, for example, the positive examples of this are already outnumbering the bad ones by a factor of a thousand, a million to one, or something like that. And what that means is that, yes, we need to educate people that this is now a possibility, et cetera, et cetera. But that education is not going to come from people reading highbrow articles in The New York Times or getting taught in school; it's going to come very naturally, because people are going to get messages from their favorite celebrity wishing them a happy birthday, they're going to put their own face into their favorite music video, they're going to be watching some very creative compliance videos like what we're doing. And that is going to happen much, much, much faster than the bad guys are going to be creating millions and millions of fake videos with a negative connotation. I feel pretty certain that this is going to play out like any other technology: ninety-nine point nine percent is going to be good, we're going to have a few bad players that are going to misuse it, and we need to do everything we can to stop them.
And hopefully, with this space being under so much scrutiny for the ethical side of it all, the companies that are building this have it as something that's front of mind, which it certainly is for us.


Shikher Bhandary [00:48:10] Now, also, one thing to add: guys, for my coming birthday, you've got to superimpose my face onto Cristiano Ronaldo's face when he's scoring a goal. Don't worry, just give me that. 


Jed Tabernero [00:48:20] Don't worry, I'm on this platform for life now. At the least, your avatar is going to be on it. You know, what's interesting is I was reading this article about a team, I don't know if it's in Berkeley, but I think in California, a team that's educating a bunch of people on how to identify what deepfakes are. It's going to be so difficult to get to that level. You're right, it's part of culture: we adjust towards this kind of thing and we eventually find ways to isolate where the problems are and solve them. For example, the porn industry: if regulators want to start somewhere, that would be the place to start thinking about what we need to do in order to stop all these deepfakes from getting out there and whatnot. But, yeah, to close the loop on that conversation, I agree. I think there's a lot of possibility for the good things, and right now it's looking like the good is outnumbering the bad things that are coming out of this technology. 


Victor Riparbelli [00:49:13] So, yeah, not to repeat myself, but it's just interesting, back to the porn use case, for example: I got contacted by a guy a while back who was very interested in using these technologies to anonymize, like, amateur actors, porn actors in general, because once you do porn, your life is kind of just defined by the fact that you've done pornography. Everyone gets to know you've done it. And I think, I don't know when that's going to be, actually it's probably going to be relatively soon, that this is going to be offered as a tool to someone who wants to do pornography but doesn't want their real name and face out there. So now we can fake it, and you can live a normal life afterwards. All these things are double-sided, right? There's a documentary that came out recently where they were interviewing a group of people who were in danger of physical violence from their government if they were exposed as being in the documentary, and they switched out their faces for someone else's and anonymized them in that way. And, you know, people today also know that you can Photoshop images; we've been able to Photoshop images for the last twenty years. Does that mean that no one gets tricked by Photoshopped images? Absolutely not. But most people don't take a really crazy photograph of something at face value. And I think it's that kind of understanding that's going to come very naturally, because we're going to be exposed to this stuff all the time. I always use this analogy of, you know, something like email, and how when email came out, every single email was written by a person that sat down and typed in whatever they wanted to write. Today, ninety-nine percent of your inbox is probably automatically generated emails from various platforms or stores, whatever you interface with. They all know something about you, but you don't get excited anymore just because it says, hey Shikher. It's not like, wow, they sent me a personal email. And going back to that idea of humans being really good at spotting these small, odd differences, it's just going to be a new type of medium. And I think that's the culture part of this: it's going to be about building this medium that people are going to be accepting of, just like people talk to their phones today; fifteen years ago, you'd look really weird doing that. Or Google Glasses, which we mock today, but will probably be normal in twenty years. I think that's the kind of cultural thing that's happening here. And that's already happened 


Unidentified [00:51:29] to a big extent. 


Shikher Bhandary [00:51:33] Hey, thanks so much for listening to our show this week. You can subscribe to us, and if you're feeling generous, well, you could even leave us a review. Trust me, it goes a long, long way. You can also follow THC @THC_POD on Twitter and LinkedIn. This is Things Have Changed.