That's just googling with several wildly pointless extra steps of also googling.
It's not always easy to distinguish between existentialism and a bad mood.
That's a good way to put it. Another thing that was really en vogue at one point and might have been considered hard-ish scifi when it made it into Rifters was all the deep water telepathy via quantum brain tubules stuff, which now would only be taken seriously by wellness influencers.
not a fan of trump for example
In one of the Eriophora stories (I think it's officially the Sunflower Cycle) I think there's a throwaway mention of the Kochs having been lynched along with other billionaires in the early days of a mass mobilization to save what's savable in the face of environmental disaster (and also rapidly push to the stars, because a Kardashev-2 civilization may have emerged in the vicinity, so an escape route could become necessary in the next few millennia and this scifi story needs a premise).
Explaining in detail is kind of a huge end-of-book spoiler, but "All communication is manipulative" leaves out a lot of context, and personally I wouldn't consider how it's handled a mark against Blindsight.
Sentience is overrated
Not sentience, self awareness, and not in a particularly prescriptive way.
Blindsight is pretty rough and probably Watts's worst book that I've read, but it's original, ambitious and mostly worth it as an introduction to thinking about selfhood in a certain way, even if this type of scifi isn't one's cup of tea.
It's a book that makes more sense after the fact, i.e. after reading the appendix on the phenomenal self-model hypothesis. Which is no excuse: cardboard characters that are that way because the author is struggling to make a point (that intelligence being at odds with self-awareness would lead to individuals with nonexistent self-reflection who more or less coast as an extension of their ultrafuturistic functionality) are still cardboard characters that you have to spend a whole book with.
I remember he handwaves a lot of stuff regarding intelligence, like at some point straight up writing that what you are reading isn't really what's being said, it's just the jargonaut pov character dumbing it way down for you, which is to say he doesn't try that hard at hyperintelligence show-don't-tell. Echopraxia is better in that regard.
It just feeds right into all of the TESCREAL nonsense, particularly those parts that devalue the human part of humanity.
Not really, there are some common ideas, mostly because TESCREALism already is scifi tropes awkwardly cobbled together, but usually what tescreals think is awesome is presented in a cautionary light or as straight up dystopian.
Like, there's some really bleak transhumanism in this book, and the view that human cognition is already starting to become alien in the one-hour-into-the-future setting is kind of anti-longtermist, at least in the sense that the utilitarian calculus turns way messed up.
And also I bet there's nothing in The Sequences about Captain Space Dracula.
No, just replace all your sense of morality with utilitarian shrimp algebra. If you end up vegetarian, so be it.
"Hopefully the established capitalists will protect us from the fascists' worst excesses" hasn't been much of a winning bet historically.
It's not just systemic media head-up-the-assery; there's also the whole thing about oil companies and petrostates bankrolling climate denialism since the 70s.
The way many of the popular rat blogs started to endorse Harris in the last second before the US election felt a lot like an attempt at plausible deniability.
This reference stirred up some neurons that really hadn't moved in a while, thanks.
I think the author is just honestly trying to equate freezing shrimps with torturing weirdly specifically disabled babies and senile adults medieval style. If you said you'd pledge like $17 to shrimp welfare for every terminated pregnancy I'm sure they'd be perfectly fine with it.
I happened upon a thread in the EA forums started by someone who was trying to argue EAs into taking a more forced-birth position and what it came down to was that it wouldnāt be as efficient as using the same resources to advocate for animal welfare, due to some perceived human/chicken embryo exchange rate.
If we came across very mentally disabled people or extremely early babies (perhaps in a world where we could extract fetuses from the womb after just a few weeks) that could feel pain but only had cognition as complex as shrimp, it would be bad if they were burned with a hot iron, so that they cried out. It's not just because they'd be smart later, as their hurting would still be bad if the babies were terminally ill so that they wouldn't be smart later, or, in the case of the cognitively enfeebled, who'd be permanently mentally stunted.
wat
This almost reads like an attempt at a reductio ad absurdum of worrying about animal welfare, like you are supposed to be a ridiculous hypocrite if you think factory farming is fucked yet are indifferent to the cumulative suffering caused to termites every time an exterminator sprays your house so it doesnāt crumble.
Relying on the mean estimate, giving a dollar to the shrimp welfare project prevents, on average, as much pain as preventing 285 humans from painfully dying by freezing to death and suffocating. This would make three human deaths painless per penny, when otherwise the people would have slowly frozen and suffocated to death.
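For what it's worth, the quoted post's "three per penny" line is just its own 285-per-dollar figure divided through; a quick sanity check (taking the 285 number as given, not endorsing it):

```python
# The quoted claim: $1 to the charity prevents pain equivalent to
# 285 painful human deaths (their mean estimate, taken at face value).
deaths_equivalent_per_dollar = 285

# Per penny, that's 285 / 100 = 2.85, which the post rounds up to
# "three human deaths painless per penny".
deaths_equivalent_per_penny = deaths_equivalent_per_dollar / 100
print(deaths_equivalent_per_penny)  # 2.85
```

So the arithmetic checks out; it's the exchange rate itself that's doing all the work.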
Dog, you've lost the plot.
FWIW a charity providing the means to stun shrimp before death by freezing, as is the case here, isn't indefensible, but the way it's framed as some sort of ethical slam dunk even compared to, say, donating to refugee care just makes it too obvious you'd be giving money to people who are weird in a bad way.
No shot is over two seconds, because AI video can't keep it together longer than that. Animals and snowmen visibly warp their proportions even over that short time. The trucks' wheels don't actually move. You'll see more wrong with the ad the more you look.
Not to mention the weird AI lighting that makes everything look fake and unnatural even in the ad's dreamlike context, and also that it's the most generic and uninspired shit imaginable.
His overall point appears to be that a city fully optimized for self-driving cars would be a hellscape at ground level even allowing for fewer accidents, so there's no real reason to belabor that point, which is mostly made in service of pointing out how dumb it is when your solution to reducing accident rates is "buy a new car" instead of anything systemic, like improving mass transit.
If you've convinced yourself that you'll mostly be fighting the AIs of a rival always-chaotic-evil alien species or their outgroup equivalent, you probably think they are.
Otherwise, I hope shooting first and asking questions later will continue to be frowned upon in polite society, even if it's automated agents doing the shooting.
The job site decided to recommend me an article calling for the removal of most human oversight from military AI on grounds of inefficiency, which is a pressing issue since apparently we're already living in the Culture.
The Strategic Liability of Human Oversight in AI-Driven Military Operations
Conclusion
As AI technology advances, human oversight in military operations, though rooted in ethics and legality, may emerge as a strategic liability in future AI-dominated warfare.
Oh unknowable genie of the sketchily curated datasets Claude, come up with an optimal ratio of civilian to enemy combatant deaths that will allow us to bomb that building with the giant red cross that you labeled an enemy stronghold.
Maybe Momoa's PR agency forgot to send an appropriate tribute to Alphabet this month.
I could go over Wolfram's discussion of biological pattern formation, gravity, etc., etc., and give plenty of references to people who've had these ideas earlier. They have also had them better, in that they have been serious enough to work out their consequences, grasp their strengths and weaknesses, and refine or in some cases abandon them. That is, they have done science, where Wolfram has merely thought.
Huh, it looks like Wolfram also pioneered rationalism.
Scott Aaronson also turns up later for having written a paper that refutes a specific Wolfram claim on quantum mechanics, reminding us once again that very smart dumb people are actually a thing.
As a sidenote, if anyone else is finding the plain-text-disguised-as-an-html-document format of this article a tad grating, your browser probably has a reader mode that will make it way more presentable; it's F9 in Firefox.
It might be just the all-but-placeholder characters that give it a b-movie vibe. I'd say it's a book that's both dumber and smarter than people give it credit for, but even the half-baked stuff gets you thinking. Especially the self-model stuff, and how problematic it can be to even discuss the concept in depth in languages that have the concept of a subject so deeply baked in.
I thought that at worst one could bounce off to the actual relevant literature, like Thomas Metzinger's pioneering, seminal and terribly written thesis, or Sacks's The Man Who Mistook His Wife for a Hat.
Blindsight being referenced to justify LLM hype is news to me.