People still need to know what was said. Presumably their AI clone can send them a quick summary.
And they have to give their AI clone instructions. I guess you can just give it a few points it needs to mention and tell it whom to deliver them to.
It seems to me like you could send the instructions to the people who need to read them and skip the part where a bunch of AIs translate it into hours of video and back. Though the AI clone thing does give you a way to deal with that guy who loves the sound of his own voice.
In all seriousness, the potential use cases for this are more useful for senior management than for the employees who actually have to hold the meetings. Having your avatar sit in on a meeting and produce a condensed transcript to skim later still gives you a more accurate picture of what was actually said than a report or the meeting's minutes would. The AI doesn't have an axe to grind or a bias to push.
You know what - this might actually be useful. People were complaining about not being involved in decision making, so I have to run a monthly meeting where people will either sit contributing nothing even when asked a direct question, or insist on bikeshedding the most unimportant details. If the meeting is a bunch of AI homunculi, then at least it'll be quicker.
He wants to take on Microsoft and Google in the enterprise software market by making docs, email, chat, and other productivity tools.
You’ll hear him describe how he thinks one of the big benefits of AI at work will be letting us all create something he calls a “digital twin” — essentially a deepfake avatar of yourself that can go to Zoom meetings on your behalf and even make decisions for you while you spend your time on more important things, like your family.
You read emails, send a chat message, make a phone call, have a whiteboard session, schedule something with external third parties.
I personally have this doubt: "Okay, maybe this underlying technology, the large language model, isn't a stable enough foundation on which to build the vision you're describing." How do you overcome that?
Even as fast as model capabilities are increasing, I don't know that anyone can point to a Moore's law-style trend for reducing hallucinations, where we know they will fall to an acceptable level.
Suppose in our meeting client's workplace six-star release, we have a very cool feature I play around [with], which leverages AI to generate the virtual background.