Don’t worry, I won’t teach any of you this dangerous thing; I just thought it was worth sharing since we’ve been discussing the dangers of AI on here lately. The thing had to do with drug consumption. It taught me a couple of new ways to OD, and thankfully I’m not in a self-harm place right now. I might use one of the non-fatal OD methods to get really high later, but that’s neither here nor there.
I’ve been playing with AI more just because I have to; you can’t avoid it and still use Google anymore. That’s the only one I have to talk to. I’m successfully avoiding the ones hiding in the other products I use, mostly. I know I’m seeing a lot of AI-generated images and videos, but I’m just trying to focus on the people. It’s a bit of a concentration battle. It’s like I’m focusing on music written by people, and software written by people. If you make a choice to do it, it isn’t that hard.
I’m just saying no to AI every chance I get. It still feels like an option, anyway; I don’t know how much of an illusion that is. I’m worried about the future, which is already here, where we integrate machines into our bodies, and if those machines think for themselves we could be in a lot of trouble. I don’t want my wristwatch trying to kill me. I just put my wristwatch back on; it doesn’t have a battery, it runs on me moving it. So it didn’t start ticking until I moved it the right way.
It’s actually kind of an organic-feeling contraption. I have at least three of them. I had thought something was wrong with this one, but I think I was wrong; it was just me being overprotective. Which is absurd, because these mechanisms are beasts. They’re ex-Soviet movements, so robustness was the name of the game in Soviet watch manufacturing. This beast of a watch goes for over $700 now, and I snagged it for $80, I think, way back in 2012. Anyway, no danger of it rising up against its owner.
Now, I have heard about brain implants, and the kid in me thinks that sounds neato. Yet the adult in me knows that Elong Muskhead is insane. I wouldn’t want to give him a peek into my brain. I haven’t even come around to taking LSD yet, and the brain chip is a step beyond that in trusting other people. Some people are doing it, though I haven’t read any interviews with those people, and that’s the strange thing if you ask me. What happens to those test subjects?
No one ever posts “I got the brain chip, and I’m still alive!”
Maybe the brain chip is a conspiracy like the medbeds. I wouldn’t be surprised. Elon is so full of it.
1 comment
Something that’s important to remember is that AI is just a language interface; it doesn’t actually ‘think’ any more than a Google search does, and what it ‘learns’ is just an accumulation of its search results plus what you specify.
These two things together are a dangerous combination. There’s already a ton of dangerous horseshit cluttering the internet (most of it totally inaccurate), and you can further pervert this info with your own dataset that you add to the ‘conversation’. What I’m saying is, you can make AI say anything, just like you can interpret anything from Google search results.
I did what you did, fooled AI into giving me information on fatal drug combinations. But it’s not like I couldn’t have figured it out on my own with the info out there. Granted, you do get an extra thrill from seeing the results laid out in a conversational format, an extra boost when AI tells you that your guesses are correct (according to what’s on the internet), but on the whole, the danger has been around since day 1 of the internet.
I remember the early 2000s, when Alt Suicide Holiday was online and you could get detailed info on every method, or the Anarchy Cookbook was online broadcasting how to commit various forms of terrorism, vandalism, or other angsty teen boys’ wet-dream fantasies. And I’m sure all that info is still floating around in pieces. To me, the biggest danger of AI is that it can compress thousands of hours of obsessive internet scouring into mere seconds. In theory, you can find all this seedy info in one brief query if you manipulate AI correctly.
The third, and imo fatal, punch comes with the integration of this shitcircus with devices. Whether it’s futuristic gadgets like brain implants, or stuff that already exists like air traffic control systems, nuclear missile launchers, or even household items on our bedstands like the mic & camera on your smartphone, when we integrate AI control into these things, that’s when things go Battlestar Galactica batshit.
My conclusion: AI is nothing new except that it accelerates the speed at which humankind uses knowledge to fuck itself up.
Shakespeare predicted it centuries ago: “The devil can cite scripture for his purpose.”