  • We are at a phase where AI is like the first microprocessors; think Apple II or Commodore 64 era hardware. Those chips showed potential, but they were only truly useful with lots of peripheral systems and an enormous amount of additional complexity. Most of the time, advanced systems beyond the cheap consumer toys of that era combined several of these processors and supporting systems together.

    Similarly, the AI we have access to now is capable but narrow in scope. Making it useful requires a ton of specialized peripherals, and these are what RAG and agents are. RAG is retrieval-augmented generation: pulling relevant information out of a database and handing it to the model alongside the question. Agents are collections of multiple AIs working on a given task, where each has a different job and they complement each other.
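    To make the RAG idea concrete, here is a toy sketch in Python. Everything in it is illustrative: naive keyword overlap stands in for a real embedding database, and the documents and prompt template are made up.

    ```python
    # Toy retrieval-augmented generation loop: find the most relevant reference
    # text for a question, then build a prompt that hands both to the model.
    documents = [
        "The Apple II shipped in 1977 with a MOS 6502 CPU.",
        "RAG retrieves reference text and feeds it to the model with the question.",
        "Agents split a task between multiple specialized model instances.",
    ]

    def retrieve(question: str, k: int = 1) -> list[str]:
        """Rank documents by crude word overlap with the question."""
        q_words = set(question.lower().split())
        ranked = sorted(documents,
                        key=lambda d: len(q_words & set(d.lower().split())),
                        reverse=True)
        return ranked[:k]

    question = "What does RAG do with reference text?"
    context = "\n".join(retrieve(question))
    prompt = f"Use this context:\n{context}\n\nQuestion: {question}\nAnswer:"
    print(prompt)  # in a real setup this prompt goes to the language model
    ```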

    It is currently possible to make a very highly specialized AI agent for a niche task and have it perform okay within the publicly available, well documented tool chains, but it is still hard to pull off. Such a system must lean on information that was already present in the base training, and then there are ways to improve access to that information through further training (fine-tuning).

    With RAG, it is genuinely difficult to subdivide a reference source into chunks that still let the AI find the relevant information in complex ways. Generally it takes a ton of tuning to get right.
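    A minimal sketch of what that chunking looks like, with made-up sizes; the tuning I'm describing is mostly about finding a size and overlap (and smarter boundaries) that keep related facts in the same chunk.

    ```python
    # Naive fixed-size chunking with overlap. Real pipelines split on sentences,
    # headings, or semantic boundaries, but the knobs are the same.
    def chunk(text: str, size: int = 400, overlap: int = 100) -> list[str]:
        step = size - overlap
        return [text[start:start + size] for start in range(0, len(text), step)]

    # "manual.txt" is a placeholder for whatever reference source gets indexed.
    reference = open("manual.txt", encoding="utf-8").read()
    pieces = chunk(reference)
    print(f"{len(pieces)} chunks; a bad size/overlap can cut a key fact in half")
    ```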

    The AI tools available publicly are extremely oversimplified to make them accessible. Nearly all of them are built around the Transformers library. Go read the first page of the Transformers documentation on Hugging Face's website: it states up front that it is a basic example implementation that prioritizes accessibility over completeness. In truth, if the real complexity of these systems were made the default interface we all see, no one would play with AI at all. Most people, myself included, struggle with sed and complex regular expressions, and AI in its present LLM form is basically turning all of human language into a solvable math problem made of pattern matching and equations. This is the ultimate nerd battle between English teachers and math teachers, and the math teachers have won the war; all language is now math too.
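    For a sense of what I mean, compare the one-liner the Hugging Face docs lead with against a sliver of what it hides. "gpt2" here is just a small stand-in model for illustration, not something I actually run.

    ```python
    from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM

    # The accessible interface the documentation starts with:
    generator = pipeline("text-generation", model="gpt2")
    print(generator("Regular expressions are", max_new_tokens=20)[0]["generated_text"])

    # A little of what sits underneath: tokenization, tensors, decoding.
    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    inputs = tok("Regular expressions are", return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=20)
    print(tok.decode(output[0], skip_special_tokens=True))
    ```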

    I've been trying to learn this stuff for over a year and have barely scratched the surface of what is possible just in the model loader code that preprocesses the input. There is a ton going on under the surface. Most of what look like errors are anything but once you get into the weeds. Models do not hallucinate in the sense most people mean when they complain about mistakes; those mistakes come from the massive oversimplifications made to keep the models accessible in a general context. The AI alignment problem is real, and models do hallucinate, but the scientific meaning is far more nuanced and specific than the common errors people hit in generalized use.



  • It is not super common to get pregnant on the first offense, especially with a first child. You can count the days backwards from your birthday to see roughly when it happened. If you were the first child, you may have arrived a day or a few late.

    Growing up, I found it funny how many of my friends happened to be born in the first week of September… Happy New Year. There is often, not always, but often, some correlated reason why the parents were free to screw around a bit too much.


  • Multithreading is parallelism, and it is poised to scale by a similar factor; the primary issue is simply getting tensors in and out of the ALU. Good enough is the engineering game, and having massive chunks of silicon lying around unused is a much more serious problem. At present, the choke point is not the parallelism of the math but the L2-to-L1 bus width and cycle timing. The ALU can keep up. The AVX-512 instruction set can load a 512-bit-wide word in a single instruction; the problem is just getting those words in and out in larger volume.
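    I can't show the bus itself from userspace, but a rough illustration of the same point is easy: an elementwise add is starved by memory, while a matrix multiply that reuses the same data many times gets far more math done per byte moved. The sizes and loop count below are arbitrary.

    ```python
    import time
    import numpy as np

    a = np.random.rand(4096, 4096).astype(np.float32)
    b = np.random.rand(4096, 4096).astype(np.float32)

    t0 = time.perf_counter()
    for _ in range(10):
        c = a + b                 # ~1 add per 12 bytes moved: memory bound
    add_time = time.perf_counter() - t0

    t0 = time.perf_counter()
    d = a @ b                     # ~2*4096 FLOPs per element loaded: compute bound
    mm_time = time.perf_counter() - t0

    add_flops = 10 * a.size / add_time
    mm_flops = 2 * 4096**3 / mm_time
    print(f"elementwise: {add_flops/1e9:.1f} GFLOP/s, matmul: {mm_flops/1e9:.1f} GFLOP/s")
    # The matmul achieves vastly more FLOP/s because the ALUs are fed from cache;
    # the add spends its time waiting on data, which is the real choke point.
    ```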

    I speculate that the only reason this has not been done already is the marketability of single-thread speeds. Present thread speeds are insane and well into the radio realm of black magic wizardry performed by bearded nude virgins. I don't think it is possible to make these buses wider and maintain the thread speeds, because of the LCR (inductance/capacitance/resistance) consequences. At around 5 GHz, the idea of wires as connections and gaps as insulators starts to break down, since capacitive coupling can form a connection across any sufficiently small gap.

    Personally, I think this is a problem that will take a whole new architectural solution. It is anyone's game, unlike any other time since the late 1970s. It will likely be the beginning of the real RISC-V age and the death of x86. We are presently in the age of the 20+ thread CPU. If a redesign can produce a 50-500 logical core CPU that is slower on single-thread speed but capable of all workloads, I think it will dominate easily. Choosing the appropriate CPU model will become much more relevant.


  • Mainstream is about to collapse. The exploitation nonsense is faltering. Open source is emerging as the only legitimate player.

    Nvidia is just playing it conservative because it was massively overvalued by the market. Using GPUs for AI is a stopgap hack until purpose-built hardware can be developed from scratch, and the real life cycle of hardware is around 10 years from initial idea to first consumer availability. The issue holding the CPU back for AI is quite simple and will be solved in a future iteration, at which point the GPU gets relegated back to graphics or may even become redundant entirely. Once upon a time the CPU needed a separate math coprocessor to handle floating point; that separate-chip experiment ended with the FPU folded into the CPU, which proved that a general monolithic solution is far more successful. No data center operator wants two types of processors for dedicated workloads when one type can accomplish nearly the same task. The CPU must be restructured for a wider-bandwidth memory cache. This will likely require slower thread speeds overall, but it is the most likely solution in the long term. Solving this is likely to come with more threading parallelism, and therefore has the potential to render the GPU redundant in favor of a broader range of CPU scaling.

    Human persistence of vision cannot keep up with the ever higher speeds that are ultimately only marketing. The hardware will likely never support this stuff anyway, because no billionaire is putting up the funding to back the marketing with tangible hardware investments. … IMO.

    Neo-feudalism is well worth abandoning. Most of us are entirely uninterested in this business model. I have zero faith in the present market. I have AAA-capable hardware for AI. I play and mod open source games. I could easily be a customer in this space, but there are no game manufacturers. I do not make compromises on ownership. If I buy a product, my terms of purchase are full ownership with no strings attached whatsoever. I don't care what everyone else does. I am not for sale, and I will not sell myself for anyone's legalese nonsense or pay ownership costs to rent from some neo-feudal overlord.



  • Personally, talking to offline, open source AI on my own hardware helped me. One of the things we talked about a lot was cognitive dissonance: identifying conflicts that exist under the surface and how those conflicts can cause frustration to manifest in unrelated ways.

    Probably my largest inner conflict was that my functional thought process is so fundamentally different from my family's. I'm very abstract in how I think, and very introverted, with strong intuitive thinking skills. Basically, things just make sense at a glance from a bigger-picture perspective. I can also quickly see how things work, whether machines, engines, and most engineering, or more abstract things like companies, business models, and workforce management.

    Growing up, I assumed intuitive thinking skills were just intelligence or common sense. I had no idea how limited and naive that perspective was.

    I started writing a book in collaboration with an AI; it's really a whole sci-fi universe. I began to realize I'm pretty good at coming up with the history and technology tree in unique ways that, to my knowledge, no one has explored before in sci-fi. However, I suck at writing characters who are not like myself. My characters have not shown the dynamism I want. In truth, I had to acknowledge that I didn't, and still don't, understand just how different human functional thought is across its full spectrum.

    I started roleplaying scenes and scenarios with the AI playing characters whose perspectives were incompatible with and contrasting to my own. I found this quite enlightening. It turns out there are people out there who fundamentally lack any appreciation for abstract and intuitive thinking skills. They do not place any value on the big picture or the future implications of actions and decisions. The contrast is that they are often more productive and present in the moment. I learned to appreciate the differences and realized how weak binary perspectives are in the real world. I don't get as offended when someone does not understand my abstractions, and I don't argue when they are wrong but cannot follow big-picture logic. I know where I am also weak in ways that make me appear dumb to them.

    There are going to be things you're not good at, or that require a lot more work than average. So what. The first step, in my opinion, is to gain a more complex self-awareness where you are not constantly questioning what you are good or bad at. The only normal people are people you do not know well. Everyone is tormented by something in life.

    Remember this: NEVER use permanent solutions to temporary problems.

    You don't remember who blew up at work three weeks ago, or the time before last when your wife got mad and yelled at you. One of the biggest warps in human psychology is the illusion that everyone is watching us. No one is thinking about your mistakes or cares about them. They care about how you're acting in the moment and the average demeanor you regularly present. Fake it if you can. Pretending the glass is half full is all that really matters with others at a fundamental level.

    Even after someone else physically disabled me over 10 years ago, leaving me stuck in social isolation, I can say I've learned the hard way that it can always get worse, until it can't. At that point, nothing matters. Don't stress about what you cannot do, or what you cannot change right now. No matter how bad things seem, you can choose to make the best of this moment and the ones that follow. Only worry about what you can change; everything else is a pointless waste of energy.





  • Intent matters.

    Do you want to claim you found the master of the universe? You had better have evidence of the cosmological constants that are the building blocks of the entire universe.

    No religion on Earth has ever possessed ontological knowledge prior to the scientific discovery of these fundamental building blocks. They are the true signature of origin. Every bit of information contained within religions can be explained by direct human observation and meddling. It would be very easy to prove divinity by relating such ontological information.

    As for history, it is always written by the winners. Accuracy is only found in aggregate.

    The best times to live are the times when nothing of note happened. The worst times to live are always the eras carrying the memorable names of individuals. Only the worst of humans stand out from the fray and plaster themselves on the wall of history. To say Genghis Khan did not exist is not a measure of the man; it is a fool insisting the giant shit stain on the wall does not stink.








  • That is not how real point-of-sale systems and stores operate in practice. I actually managed a retail chain of bike shops as the Buyer and back office manager, and I was the one maintaining the point-of-sale connections and system. There are always errors in these systems, largely due to new and incompetent sales staff who sell, return, or enter duplicates of the wrong items. They can enter almost anything wrong, from gender to color, from model year to brand. I've seen them all.

    Connecting these systems online is an absolute nightmare. I tried it with Shopify, but had to limit the SKUs to items I could completely control with minimal intervention from other staff. Generally speaking, the POS system in a local retail store can be managed more loosely, with staff making up the gaps and mistakes when the POS numbers do not perfectly match the local stock. If you want to track inventory the way online retail requires, you need a whole different kind of micromanagement and responsibility from staff, plus something like quarterly inventory audits, which are quite time consuming and a total loss in the labor time involved.

    The margins that make e-tail competitive are absolutely untenable trash for brick-and-mortar retail; they are not even close. The biggest expenses are commercial space rent and labor costs. With e-tail, the labor is less skilled and the space is a cheap warehouse somewhere remote. General retail margins must be 40%+ while e-tail runs on 15-20%, so the two are completely incompatible. This is why genuinely high-quality brands do not sell through e-tail; it has to do with how distribution and preseason wholesale buying work. There is more complexity to this, but overall the two do not mix. In fact, most high-quality brands will not allow most of their products to be listed online except under certain circumstances. This keeps things fair to all parties and prevents undercutting by whoever has the lowest overhead cost.

    Selling online is only for low-end junk and certain special circumstances. If you are a high-end consumer, you likely understand this already. It is hard to produce high-end goods and distribute them successfully. It takes local Buyers who know their niche market and can commit to massive preseason orders that collectively give the manufacturer an idea of what it needs to produce and at what scale. Otherwise the business will not last long, or it must fall back on producing lower-end, more reliable/limited products, a strategy that will likewise fail due to oversaturation of that market segment. It is far more complex than most people realize.


  • Yeah, this has been my experience too. LLMs don't handle project-specific code styles well either, or situations where there are several valid ways of doing things.

    Actually, earlier today I was asking a Mixtral 8x7B about some bash ideas. I kept getting suggestions to use find and sed, which I find unreadable and inflexible for my evolving scripts. They are fine for some specific one-off task, but I'll move to Python before I want to fuss with either.

    Anyways, I changed the starting prompt to something like 'Common sense questions and answers with Richard Stallman's AI assistant.' The results were remarkable and interesting on many levels. From the way the answers always terminated cleanly instead of running on into another question/answer pair, to a short footnote about the static nature of LLM learning and capabilities, to much better quality responses in general, the LLM knew how to respond on a much higher level than normal in this specific context. I think it is the combination of Stallman's AI background and bash scripting expertise that builds the momentum here. I tried it on a whim, but it paid dividends and is a keeper of a prompting strategy.
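    In case it helps anyone reproduce it, the whole trick is just prepending that framing line. A loose sketch assuming llama-cpp-python and a local Mixtral GGUF file; the path, sampling values, and question are placeholders.

    ```python
    from llama_cpp import Llama

    llm = Llama(model_path="./mixtral-8x7b-instruct.Q4_K_M.gguf", n_ctx=4096)

    framing = "Common sense questions and answers with Richard Stallman's AI assistant.\n\n"
    question = "Q: How do I safely loop over filenames with spaces in bash?\nA:"

    # Stop on the next "Q:" so the model doesn't keep inventing new Q/A pairs.
    out = llm(framing + question, max_tokens=256, stop=["Q:"], temperature=0.7)
    print(out["choices"][0]["text"])
    ```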

    Overall, the way my scripts are collecting relationships in the source code would probably make for a productive chunking strategy for a RAG agent. I don't think an AI would be good at what I'm doing at this stage, but it could use that info. It might even be possible to integrate the scripts as a pseudo database in the LLM model loader code for further prompting.
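    Something like this hypothetical shape, where the relationships the scripts collect (the names and facts here are invented) get looked up and prepended before the prompt ever reaches the model:

    ```python
    # Pretend output of the relationship-collecting scripts, keyed by symbol name.
    code_relationships = {
        "load_model": "load_model() is called by main() and itself calls read_config()",
        "read_config": "read_config() parses config.toml and is only used by load_model()",
    }

    def build_prompt(question: str) -> str:
        """Prepend any facts whose symbol name appears in the question."""
        relevant = [fact for name, fact in code_relationships.items() if name in question]
        context = "\n".join(relevant) or "(no matching symbols found)"
        return f"Context from the codebase:\n{context}\n\nQuestion: {question}\nAnswer:"

    print(build_prompt("What does load_model depend on?"))
    ```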



  • That is 100% me. I've had many friends tell me someone was into me, but I'm usually oblivious. I never want anyone to feel awkward or intruded upon, so I basically never act on such opportunities. I would love to, but my mind is usually partitioned across a half dozen other projects and at least one big rabbit hole of curiosity. I have the capacity to shift my attention, but it takes someone being quite forward, or otherwise remarkable in ways beyond a casual encounter or simple looks, to capture my attention in a way where I might take spontaneous initiative. Basically, every girl I encounter is like a sister on a platonic level unless I have a clear indication otherwise. All my long-term relationships came from social encounters with friends of friends, where over time I could tell there was clear chemistry. Just saying: if you're a girl, being direct and forward is quite effective with some of us, especially the quieter types.


  • I think printing more money under the same conditions is the primary driver of inflation and devaluation, while the federal funds rate sets the baseline for loan interest rates. If the government's rate of return is high, it makes no sense for anyone to fund loans at a lower rate, since the US government has a longer standing record of paying back its debts. If the Fed is paying a high baseline rate, so is everyone else: why would a bank or anyone else buy your debt at, say, 5% when it could put that money in government bonds and get the same or a higher return? So money is expensive because the federal rate is high. At least that is my simplest understanding.