RemindMe! 3y “reply to this thread”.
Anyone who has played the ‘Risk’ board game has an idea what the next moves should be.
Easy fix.
Tweezers.
When you realize how many wars were averted because of them.
Good read. Sad ending that all that work ended up nowhere.
The Gradient Descent, Hallucination, and Insufficient Training Data jokes just write themselves.
NEW feature: As you drive down the road, Ford cars will automatically take over and drive you to the nearest sponsor location. Hungry? It will take over and swerve into the nearest KFC drive-thru. Next stop, CVS pharmacy, then Office Depot.
Disclaimer: Disabling AutoAd feature requires monthly subscription.
When this whole ‘training’ trend started a few years ago, there were companies offering image and video labelling services.
It turned out they were mostly sweatshops in low-income countries, where people sat in front of monitors and just dragged bounding boxes around sections of images and picked from an icon menu. Here’s a car, here’s a person, here’s an apple. That sort of thing. You didn’t even need to know how to read or write.
Of course, the quality was questionable, so they needed a second layer of supervisors verifying the choices. But even with that, the cost was way lower than having an engineer or QA person do it. IIRC, there was a bit of hue and cry when stories came out about big tech companies supporting sweatshop conditions.
Sounds like it’s still ongoing.
VW announced a $5B joint venture with Rivian a couple months ago. Wonder how all this will affect that deal?
https://www.theverge.com/2024/6/25/24185946/vw-rivian-joint-venture-investment-software-r2
Where you rotate so far right you end up at the left.
https://www.espressif.com/en/news/ESP32-S3-BOX-3
There’s a model with a more expensive dock, or one without. The one without worked fine. But it had to be the Box 3 not Box 2. It worked pretty well and you could create custom images to indicate whether it was listening, thinking, etc.
Instructions here: https://www.home-assistant.io/voice_control/s3_box_voice_assistant/
The box isn’t powerful enough to run an LLM itself. It’s just good enough as an audio conduit. You can either use their cloud integration with ChatGPT, or now, Anthropic Claude. But if you had a powerful Home Assistant server, say an Nvidia Jetson or a PC with a beefy Nvidia GPU, you could run local models like Llama and have better privacy.
This is from earlier this year. I imagine they’ve advanced more since then.
Their LLM integration is super cool. I messed with it for a previous job. Way better than Alexa or Google Home.
One Docker env variable and one line of code. Not a heavy lift, really. And next time I shell into the container I don’t need to remind everyone to activate the venv.
Creating a venv in Docker just for the hell of it is like creating a symlink to something that never changes or moves.
NEW, automated children’s bicycle. Guaranteed to teach the little tyke how to ride! *
I can think of only two reasons to have a venv inside a container:
If you’re running third-party services inside a container, pinned to different Python versions.
If you do local development without Docker and have scripts that activate the venv from inside the script. Move those scripts inside the container and there’s no venv to activate anymore. But then it’s easy to just check an environment variable and skip the activation when inside Docker.
For most applications, it seems like an unnecessary extra step.
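The env-variable check from the second case above can be sketched like this. Minimal sketch: `IN_DOCKER` is just a convention you’d set yourself with `ENV IN_DOCKER=1` in the Dockerfile, not anything Docker provides, and the venv path is assumed to be `.venv`:

```python
import os

def venv_prefix() -> str:
    """Shell prefix a dev script should use before running Python commands."""
    if os.environ.get("IN_DOCKER") == "1":
        return ""  # inside the container: deps are installed system-wide
    return ". .venv/bin/activate && "  # local dev: activate the venv first

print(venv_prefix() + "python main.py")
```

Same idea works as a plain `[ -z "$IN_DOCKER" ]` guard in a shell script; either way, one env variable and one line of code is all it takes.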
They missed speculation, hearsay, and guesstimation.
“Team-based shooter eight years in the making had just 25,000 estimated sales.”
https://youtu.be/tANavEbnKsU
Finland edition, where it apparently started. Action starts at 4:20 in.
Good, wholesome fun.