Z4rK @lemmy.world · 28 Posts · 360 Comments · Joined 2 yr. ago
Lol thank you autocorrect. Ollama.
Ok, I just don’t see the relevance to this post then. Sure, you’re fine to rant about Apple in any thread you want to, it’s just not particularly relevant to AI, which was the technology in question here.
I hear good things about GrapheneOS but just stay away from it because of all the strangeness. I love Olan’s.
- Security / privacy on device: Don’t use devices / OSes you don’t trust. I don’t see what difference on-device AI makes at all here. If you don’t trust your device / OS, then no functionality or data is safe.
- Security / privacy in the cloud: The take here is that Apple’s proposed implementation is better than 99% of cloud services out there. AI or not isn’t really part of it. If you already don’t trust Apple, then this is moot. Don’t use cloud services from providers you don’t trust.
Security and privacy in 2024 is unfortunately about trust, not technology, unless you are able to isolate yourself or design and produce all the chips you use yourself.
They have designed a very extensive solution for Private Cloud Computing: https://security.apple.com/blog/private-cloud-compute/
All I have seen from security researchers reviewing this is that it will probably be one of the best solutions of its kind: they basically do almost everything correctly, and extensively so.
The only critique I’ve seen is that they could have provided even more source code and easier ways for third parties to verify their claims, though it’s understandable that they didn’t.
To be honest, I’m not sure what we’re arguing - we both seem to have a sound understanding of what LLM is and what it is not.
I’m not trying to defend or market LLM, I’m just describing the usability of the current capabilities of typical LLMs.
It goes a tad beyond classical conditioning... LLMs provide a much better semantic experience than any previous technology and are great for relating input to meaningful content. Think of it as an improved search engine that gives you more relevant info / actions / tool suggestions etc. based on where and how you are using it.
Here’s a great article that gives some insight into the knowledge features embedded into a larger model: https://transformer-circuits.pub/2024/scaling-monosemanticity/
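To make the “improved search engine” framing concrete, here’s a toy sketch of semantic lookup via embedding similarity. This is not a real LLM; the actions and the 3-d vectors are invented purely for illustration, standing in for what an actual embedding model would produce.

```python
import math

# Hypothetical action embeddings (in reality these would come from a
# learned encoder and have hundreds of dimensions, not three).
EMBEDDINGS = {
    "open settings":    [0.9, 0.1, 0.0],
    "compose an email": [0.1, 0.9, 0.1],
    "set a timer":      [0.0, 0.2, 0.9],
}

def cosine(a, b):
    # Cosine similarity: how closely two vectors point the same way.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def suggest(query_vec):
    # Return the action whose embedding is closest to the query.
    return max(EMBEDDINGS, key=lambda k: cosine(EMBEDDINGS[k], query_vec))

# A query vector near "set a timer" (hypothetical encoder output):
print(suggest([0.1, 0.1, 0.8]))  # → set a timer
```

The point is only that matching happens in meaning-space rather than by keywords, which is why the suggestions feel context-aware.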
That’s fair, but you are misunderstanding the technology if you’re bashing the AI from Apple for making macOS less secure. Most likely, it will be just as secure as for example their password functionality, although we don’t have details yet. You either trust the OS or not.
Microsoft Recall was designed so badly, there’s no hope for it.
macOS and Windows could already be doing this today behind your back regardless of any new AI technology. Don’t use an OS you don’t trust.
That’s why it’s on the OS-level. For example, for text, it seems to work in any text app that uses the standard text input api, which Apple controls.
User activates the “AI overlay” on the OS, not in the app, OS reads selected text from App and sends text suggestions back.
The App is (possibly) unaware that AI has been used / activated, and has not received any user information.
Of course, if you don’t trust the OS, don’t use this. And I’m 100% speculating here based on what we saw for the macOS demo.
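Since I’m speculating anyway, here’s roughly the flow I mean, sketched in Python. The class and function names are hypothetical, not real Apple APIs; the point is just that the app only implements the standard text-input interface and the OS-level overlay does the rest.

```python
class TextApp:
    """An ordinary app exposing only the standard text input API."""
    def __init__(self, text):
        self.text = text
    def selected_text(self):           # OS can read the selection
        return self.text
    def replace_selection(self, new):  # OS can write a replacement back
        self.text = new

def os_ai_overlay(app, model):
    # Invoked by the user at the OS level. The app never calls the model
    # and never learns that AI was involved.
    suggestion = model(app.selected_text())
    app.replace_selection(suggestion)

# Stand-in for the on-device model:
proofread = lambda s: s.replace("teh", "the")

app = TextApp("teh quick brown fox")
os_ai_overlay(app, proofread)
print(app.text)  # → the quick brown fox
```

Again, pure speculation based on the macOS demo; the real mechanism could differ.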
Yes definitely, Apple claimed that their privacy could be independently audited and verified; we will have to wait and see what’s actually behind that info.
To be fair, I think he is mostly endorsing the concept of the implementation, laid out in his seven points, not the actual implementation, since it isn’t available yet.
He sort of invented it, so you have to think he’s commenting on the concept here, not the implementation.
I have tried a lot of medium and small models, and there is just no good replacement for the larger ones for natural text output. And they won’t run on device.
Still, fine-tuning smaller models can do wonders, so my guess would be that Apple Intelligence is really 20+ small and fine tuned models that kick in based on which action you take.
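If my guess is right, the dispatch could look something like this sketch. Everything here is invented for illustration; the stand-in functions are placeholders for small fine-tuned models, not anything Apple has described.

```python
# Stand-ins for small, per-task fine-tuned models:
def summarize(text):
    return text.split(".")[0] + "."          # keep the first sentence
def proofread(text):
    return text.replace("teh", "the")        # toy typo fixer
def rewrite(text):
    return text.capitalize()                 # toy tone adjuster

# One small model per user action, instead of one large general model.
MODELS = {
    "summarize": summarize,
    "proofread": proofread,
    "rewrite":   rewrite,
}

def handle_action(action, text):
    # Route the request to the model fine-tuned for this action.
    model = MODELS.get(action)
    if model is None:
        raise ValueError(f"no on-device model for action: {action}")
    return model(text)

print(handle_action("proofread", "fix teh typo"))  # → fix the typo
```

The appeal of this design is that each small model only has to be good at one narrow task, which is where fine-tuning shines.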
Well they just name-grabbed all of AI with their stupid Apple Intelligence branding.
I do agree, but privacy in 2024 is sadly about trust, not technology, unless you yourself can design and create every chip used in your devices and in the network cells you connect to. No “do not allow…” setting on your device has any meaning without trust in its creator.
Unless you are designing and creating your own chips for processing, networking etc., privacy today is about trust, not technology. There’s no escaping it. I know my iPhone and Apple are collecting data about me. I currently trust them the most on how they use it.
Hehe no it was actually surprisingly light when I looked it up, sorry for being lazy and just referencing it still.
He’s just one of the top 3-10 AI scientists in the world. If you wanted to start up a groundbreaking new AI research company, he’d probably be top 3 on your headhunting list. Any of Google, Microsoft, Meta, Apple, Tesla etc. would hire him asap if he were available, probably as their new chief AI scientist.
But yeah, my whole title was fairly narrow in only making sense for people who already knew who he is. Maybe it would have made more sense to most if I just said “Top AI scientist endorses AI” or something. Uh, without abbreviating Apple Intelligence to AI I guess. I hate their naming on this.
I mean, that’s fair. I personally use Apple devices specifically because I trust them the most on privacy, but if you don’t trust Apple with privacy, which is a 100% valid take to have, then of course this major selling point of their marketing becomes moot.
How so? Many people want to use AI privately, but it’s currently too hard for most people to set that up for themselves.
Having AI tools at the OS level, usable in almost any app and guaranteed to be processed on device in private, will be very useful if done right.
F me if I do / F me if I don’t?
You don’t have to like him. He’s still regarded as one of the most capable and knowledgeable people in the AI space and a thought leader, so most will listen to his words in that area. There is almost no one else whose endorsement Apple could have hoped for more.
Care to elaborate?
The suspicious parts to me were that they didn’t show much of the private cloud stuff or how much it would cost, and that they still feel the need to promote ChatGPT.
Christian Selig launched Pixel Pals 2 today