Brain-computer/machine interfaces are genuinely promising for treating conditions like paralysis or Parkinson's disease, and to a certain extent severe psychiatric conditions, if you count deep brain stimulation for e.g. severe OCD. I don't think we'll be anywhere near sending detailed multisensory content like ads into people's brains for a long time, though. That's so far outside the scope of what brain stimulation can do right now that it's really just sci-fi.
Technology progresses pretty fast today, especially where there's money. Who knows, maybe in a decade or so it'll already be possible, especially if this goes mainstream.
It does, but it's important to note that the theoretical basis for much of the rapid progress we're seeing now (e.g. machine learning) has actually existed for quite a long time. Training very large models wasn't feasible at the time they were theorised, but the basis for them did exist.
When it comes to brains, we don't even have a good understanding of how multisensory integration works yet, let alone how we could, even in theory, implant multisensory impressions like ads. It's much easier with things like movement disorders or paralysis because our understanding of those phenomena is much more advanced. Plus, we're only really dealing with one modality there: movement.
Deep brain stimulation for psychiatric conditions does exist, but it's poorly understood, to the point where there isn't even really a consensus on where you should place the stimulating electrodes for the best effect. At least that's what a colleague who worked on DBS told me a while ago, and I doubt it would've changed dramatically in a year.
10 years, it'll be possible to have a multisensory chip, but it'll be super expensive and niche
15 years, it'll start getting pretty affordable and popular
20 years, you'll be a social outcast for not having a chip. What are you, a weirdo?
Yeah, but just a couple of years ago, you could've said the same thing about AI, and now it's everywhere. So we could be just a couple of years from brain chips that make us execute Order 66 and kill all the Jedi.
Perturbed gradient descent with backprop is what we were doing in the 90s. It feels like there are some new tricks, but mostly what I see is the result of GPUs and cheap memory.
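For anyone who hasn't seen it written out, that 90s recipe really is just the chain rule plus a step downhill. Here's a minimal numpy sketch (the toy target y = 2x, the layer sizes, and all variable names are mine, not anything from a specific paper) of a one-hidden-layer net trained with hand-derived backprop and plain gradient descent:

```python
import numpy as np

# Toy data: fit y = 2x with a tiny one-hidden-layer tanh network.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (64, 1))
y = 2 * X

# Small random init, as was standard long before GPUs.
W1 = rng.normal(0, 0.5, (1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.1

losses = []
for _ in range(200):
    # Forward pass
    h = np.tanh(X @ W1 + b1)        # hidden activations
    pred = h @ W2 + b2              # network output
    err = pred - y                  # dL/dpred for mean squared error
    losses.append(float(np.mean(err ** 2)))

    # Backward pass: chain rule, layer by layer
    dW2 = h.T @ err / len(X)
    db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)  # tanh'(z) = 1 - tanh(z)^2
    dW1 = X.T @ dh / len(X)
    db1 = dh.mean(axis=0)

    # Gradient descent step
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Nothing here would have been out of place in 1995; what changed is that the same loop now runs over billions of parameters on GPU clusters instead of eight hidden units on a workstation.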
I volunteer at a summer science camp, and 90% of the projects are "I pointed AI at this problem and..."; nobody even seems to be trying analytical approaches any more. I'm ready for a new fad.
Anybody else remember when it was all wavelets all the time? That was kinda fun.