Makes me wonder how much of Tesla’s “Full Self Driving” is just some dude playing GTA VR with you in the passenger seat.
Is the future just having a human slave in a third world country strap into VR and carry your groceries for you?
Whoops I missed your reply. Cheers!
I’ve been thinking about getting one. Which model do you have, and how long have you had it?
Call Me Maybe, such a banger
I assumed it was trying to feast on the goo inside.
Do you know anyone else that has gone through all you have and ended up where you are?
And when you go through the door, you must know the language to speak (the protocol) or you may be told to leave or ignored.
Thanks very much for that, I really appreciate it! How have you found your DF64?
The vendor (df64coffee.com) says they align the burrs. Would they need further alignment?
Thanks, I’m looking forward to it! But also a little nervous that I won’t be able to tell the difference. 😅
I’m using a Breville Smart Grinder Pro that I modified to be single dose. I just ordered a DF64 Gen 2, but it hasn’t arrived yet.
I love the way the Tree is automatically created. Having the ability to rename the root on the occasions where it’s not intuitive would be perfect.
Disagree. Even if it’s Sunday, the rule still applies.
Just switch out to the sheep. Can always switch back later. You deserve it.
Taking a cat bowl full of water back to its spot on the floor. I remember I am the water, and it gets delivered without spilling.
cross-posted from: https://lemm.ee/post/2563362
> !raccoons@lemmy.world
cross-posted from: https://lemmit.online/post/225981
> ##### This is an automated archive made by the Lemmit Bot.
> The original was posted on /r/homeassistant by /u/janostrowka on 2023-07-19 12:49:02.
Hopefully this will come in handy for our Year of the Voice.
TL;DR: Justin Alvey replaces the Google Nest Mini PCB with a custom ESP32 PCB, which he’s open-sourcing. He shows a demo of running an LLM voice assistant paired with Beeper to send and receive messages.
Tweet text thread (I would also highly recommend checking out the video demos on Twitter):

> I “jailbroke” a Google Nest Mini so that you can run your own LLMs, agents and voice models. Here’s a demo using it to manage all my messages (with help from @onbeeper) 📷 on, and wait for surprise guest! I thought hard about how best to tackle this and why.
>
> After looking into jailbreaking options, I opted to completely replace the PCB. This lets you use a cheap ($2) but powerful & developer-friendly WiFi chip with a highly capable audio framework. This allows a paradigm of multiple cheap edge devices for audio & voice detection…
>
> …& offloading large models to a more powerful local device (whether your M2 Mac, PC server w/ GPU or even “tinybox”!). In most cases this device is already trusted with your credentials and data, so you don’t have to hand these off to some cloud, & data need never leave your home.
>
> The custom PCB uses @EspressifSystem’s ESP32-S3. I went through 2 revisions, from a module to a SoC package with extra flash, simplifying to single-sided SMT (< $10 BOM). All features such as LEDs, capacitive touch and the mute switch are working, & it’s even programmable from Arduino (/IDF).
>
> For this demo I used a custom “Maubot” with my @onbeeper credentials (a messaging app which securely bridges your messaging clients using the Matrix protocol & e2e encryption), which runs locally serving an API.
>
> I’m then using GPT-3.5 (for speed) with function calling to query this.
>
> For the prompt I added details such as family & friends, current date, notification preferences & a list of additional character voices that GPT can respond in. The response is then parsed and sent to @elevenlabsio.
>
> I’ve been experimenting with multiple of these, announcing important messages as they come in, morning briefings, noting down ideas and memos, and browsing agents. I couldn’t resist: here’s a playful (unscripted!) video of two talking to each other, prompted to be AIs from “Her”.
>
> I’m working on open sourcing the PCB design, build instructions, firmware, bot & server code; expect something in the next week or so. If you don’t want to source Nest Minis (or shells from AliExpress) it’s still a great dev platform for developing an assistant! Stay tuned!
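The “GPT-3.5 with function calling to query this” step above could look roughly like the sketch below. This is a minimal illustration, not Alvey’s actual code: the `get_messages` function name, its schema, and the stub handler are all hypothetical stand-ins for whatever his Maubot API exposes.

```python
import json

# Hypothetical tool schema handed to the LLM (stand-in for the real
# Maubot/Beeper message-query function described in the thread).
TOOLS = [{
    "name": "get_messages",
    "description": "Fetch recent messages from the local Beeper/Maubot bridge",
    "parameters": {
        "type": "object",
        "properties": {
            "contact": {"type": "string"},
            "limit": {"type": "integer"},
        },
        "required": ["contact"],
    },
}]

def dispatch(function_call):
    """Route the model's function_call to a local handler."""
    args = json.loads(function_call["arguments"])
    if function_call["name"] == "get_messages":
        # In the real setup this would hit the locally served Maubot API.
        return {"contact": args["contact"], "messages": ["(stub)"]}
    raise ValueError(f"unknown function {function_call['name']}")

# Simulated model response asking to call the tool:
reply = {"name": "get_messages", "arguments": '{"contact": "Mom", "limit": 5}'}
print(dispatch(reply))
```

The point of the pattern is that the model only ever emits a structured call; the actual message data stays on the local device, matching the thread’s “data need never leave your home” goal.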
I’m not an artist and I created this with AI. I’m not submitting it, but posting it here as possible inspiration to any real artists.
Please forgive any compression artefacts, I had to shrink it due to file size limits on my Lemmy instance.
Edited for clarity.
In Home Assistant 2023.7 a feature was added to allow services to provide a response.
> This release brings in a change to Home Assistant, which we consider to be one of the biggest game changers of the past years: Services can now respond with data! 🤯
>
> It is such a fundamental change, which will allow for many new use cases and opens the gates for endless possibilities.
In this release the functionality has only been enabled for a couple of services, but I’m having trouble picturing what we can use this for now and in the future.
What are some use cases you can think of on how to use this new feature?
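For a concrete picture of how this works, here is a minimal automation-action sketch using one of the first services to gain a response, the weather integration’s `weather.get_forecast`. The entity ID and notify target are placeholder assumptions, not real names:

```yaml
# Minimal sketch (assumed entity IDs): call a responding service,
# capture its response, and use the data in a later action.
- service: weather.get_forecast
  target:
    entity_id: weather.home        # placeholder weather entity
  data:
    type: daily
  response_variable: forecast     # the service's response lands here
- service: notify.mobile_app_my_phone   # placeholder notify target
  data:
    message: "Today's high: {{ forecast.forecast[0].temperature }}°"
```

The key piece is `response_variable`: the returned data becomes a template variable for the rest of the automation, which is what opens up the new use cases the release notes hint at.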
Why YSK: It will start working faster if you give it a sec.
This is a bug in iOS Progressive Web Apps. Your scrolling gets locked to an element that is off the screen. Continuing to try to scroll while it’s in this locked state keeps it in that state longer. No interactions for one second will unlock it.
wefwef is tracking the issue here.
Thank you to Apollo, Gavin Nelson and Matthew Skiles for the beautiful avatar and banner pictures.