I was going to do an origin character as a solo play-through and a custom character for a group play-through with my mates, but now I might do it the other way around… which means hours in the character creator! Ha.
Often the question marked as a duplicate isn’t actually a duplicate; the person marking it just didn’t spend the time to properly understand the question and realise how it differs. I also see lots of answers that misunderstand the question, or try to force the person asking towards the answerer’s own particular preference, and get tons of votes whilst doing it.
Don’t get me wrong, some questions are definitely useful - and some go above-and-beyond - but on average the quality isn’t great these days and hasn’t been for a while.
Google’s first-quarter 2023 report shows they made massive profits on vast revenue, most of it from advertising.
It is about control though. The thing that caught my eye is that they’re saying that only “approved” browsers will be able to access these WEI sites. So what does that mean for crawlers/scrapers? That the big tech companies on the approval board will be able to lock potential competitors out of accessing the web - new browsers, search engines, etc. but much more importantly… Machine Learning.
Google’s biggest fear right now is that ML systems will completely eliminate most people’s reason to use Google’s search, and therefore their main source of revenue will plummet. And they’re right to be scared, it’s already starting to happen and it’s showing us very quickly just how bad Google’s search results are.
So this seems to me like an attempt to control things from that side. It’s essentially the “big boys” trying to consolidate and firm-up their hold in the industry and not let newcomers rival them, as with ML the barrier to entry has never been lower.
Red Hat making that argument in particular shows they’ve pivoted their philosophy significantly; it’s a seemingly subtle change but a huge one - presumably due to the IBM acquisition, but maybe due to the pressures in the market right now.
It’s the classic argument against FOSS - one that Red Hat themselves have argued against for decades, having proved as an organisation that you can build a viable business on the back of FOSS whilst also contributing to it, and that there is indirect value in having others use your work. Only time will tell, but the stage is set for Red Hat to cultivate a different relationship with FOSS and move more into proprietary code.
Don’t roll your own if you can help it; just use a distribution dedicated to use as a thin client. I was coincidentally looking into this just last week and came across ThinStation, which looks really good. There are other distros too - search for “linux thin client”.
How do Linux distros deal with this? However that’s done, I’d like node packages to work in a similar way - “package distros”. You could have rolling release, long-term support with security patches, an application and verification process for being included in a distro, etc.
It wouldn’t eliminate all problems, of course, but could help with several methods of attack, and also help focus communities and reduce duplication of effort.
I personally found Fedora to be rock solid, and along with Ubuntu it provided the best hardware support out of the box on all my computers - though it’s been a couple of years since I used it. I ended up on Ubuntu non-LTS in the end, as I now run Ubuntu LTS on my servers and find having the same systems beneficial (from a knowledge perspective).
If I’m okay with the software (not just trying it out) am I missing out by not using dockers?
No, I think in your use case you’re good. A lot of the key features of containers - immutability, reproducibility, scaling, portability, etc. - don’t really apply here.
If you reach a point where you want a standalone Linux server, or an auto-reconfiguring reverse proxy to map domains to your services, or something like that, then containers start to have some additional benefit and I’d recommend them.
In fact, using native builds of this software on Windows is probably much more performant.
Containers can be based on operating systems that are different from your computer’s.
Containers utilise the host’s kernel - which is why there are some hoops to jump through to run a Linux container on Windows (a VM/WSL).
That’s one of the key differences between VMs and containers. VMs virtualise all the hardware, so the guest and host operating systems can be totally different; whereas because a container uses the host’s kernel, it must be the same kind of operating system, and it accesses the host’s hardware through that kernel.
The big advantage of that approach over VMs is that containers are much more lightweight and performant, because they don’t have a virtual kernel/hardware/etc. I find it’s best to think of them as a process wrapper, kind of like chroot for a specific application - you’re just giving the application a box to run in, but the host OS is still doing the heavy lifting.
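If you want to see that kernel sharing in action, one quick check (assuming Docker is installed and can pull the alpine image) is to compare the kernel version reported on the host with the one reported inside a container:

```sh
# On the host: print the host's kernel version
uname -r

# Inside a container: a containerised process reports the same kernel,
# because it's just a wrapped process using the host's kernel
docker run --rm alpine uname -r
```

On a Linux host both commands print the same kernel version; on Windows or macOS you’ll see the kernel of the VM/WSL instance Docker is actually running on - which is exactly the hoop mentioned above.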
I’ve always loved the fighters (Starfury?) from Babylon 5:
I used to spend a lot of time modelling them in 3D software, just because! I love the aesthetic, how they have a hint of modern military (the cockpit is Apache helicopter-like), the way they’re held then launched almost like missiles, and how agile their design inherently is. Plus, the design obviously has some nods to the ship in The Last Starfighter and of course the X-Wing from Star Wars - both of which are cool ships.
In the game Elite Dangerous, there’s a ship called a Vulture that has similar elements (cockpit, size-ish, agility) and in VR is the closest I’ve come to feeling like I’m flying one!
As always, it depends! I’m a big fan of “the right tool for the job” and I work in many languages/platforms as the need arises.
But for my “default” where I’m building up the largest codebase, I’ve gone for the following:
I was using file merging, but one issue I found was that arrays don’t get merged - and since switching to use Traefik (which is great) there are a lot of arrays in the config! And I’ve since started using labels for my own tooling too.
I was recently helping someone with a mini-project that involved parsing docker compose files, when I discovered that the docker compose spec is published as a JSON Schema here.
I converted that into TypeScript types using JSON Schema to TypeScript, so I can create docker compose config in code and then just export it as YAML - I have a build/deploy script that does this at the end.
The great thing is that I can now export/import that config, share it between projects, extend configs, mix things in, and so on. I’ve just started doing it and it’s been really nice so far; when I get a chance and it’s stabilised a bit, I’m going to tidy it up and share it. There’s not much I’ve added beyond the above at the moment (just some bits to mix in arrays, which is what set me off on this whole thing!)
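To give a flavour of the approach, here’s a minimal sketch - the generated type name, module path and use of the “yaml” npm package are my assumptions for illustration, not the exact code I use:

```typescript
// compose.ts - a minimal sketch of "compose config as code"
// Assumes types generated by json-schema-to-typescript from the published
// compose JSON Schema (the type/module names below are hypothetical).
import { stringify } from "yaml";
import type { ComposeSpecification } from "./compose-spec-types";

// A shared fragment that can be imported, extended and mixed into other projects
const baseService = {
  restart: "unless-stopped",
  networks: ["internal"],
};

const config: ComposeSpecification = {
  services: {
    app: {
      ...baseService,
      image: "myapp:latest",
      // Arrays (e.g. Traefik labels) can be merged explicitly in code,
      // rather than relying on file merging to do it for you
      labels: [
        "traefik.enable=true",
        "traefik.http.routers.app.rule=Host(`app.example.com`)",
      ],
    },
  },
  networks: { internal: {} },
};

// The build/deploy script writes this out as docker-compose.yml at the end
console.log(stringify(config));
```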
I hear they have improved performance now though
It’s still not great. Better, but still slow enough to make you question whether you’ve actually launched the app or not.
I did start with it and used it on a laptop; honestly, I think that’s where it shines the most - but I guess the more windows you open, the less useful it becomes. If there were a way to do the Expose-like “view all things at once” (Super key) across all workspaces, I’d be all over them. But as there’s no easy way to get a live view of everything on all workspaces, I just don’t use them.
Yes, I love it! Really, it’s the macOS-like “Expose” feature that I find essential.
I would advise against using workspaces though; I find those actually sort of go against the core idea of it, IMO. There are a few things I’d really like added to it, but for the most part, once you get into it, it’s great.
On my main desktop I have 4 monitors (I know, but once you start a monitor habit it’s really hard not to push it to the limit - this is only the beginning!). It roughly breaks down into:
The key, literally, is that you just press the Super key and boom - you can see everything, and if you want to interact with something, it’s all just one click or a few key presses away.
On my laptop with just one screen I find it equally invaluable, and that’s actually where I started to use it the most - once again, just one press of Super and I can see all the applications I have open and quickly select one or launch something.
It’s replaced Alt + Tab for me - and I know they’ve made that better, and added Super + Tab, but none of them are as good as just pressing Super.
The things I’d really love added to it are:
I just have a static page that I randomly change - you can see mine here. In this case I was testing the idea of having text within an SVG for better scaling from mobile to desktop, and also I’m loving orange and purple at the moment for some reason! Oh, and I was testing automated deployments from CI/CD, so I always use my own base domain with those first tests!
With regard to education, one of the things I’ve come to understand runs entirely counter to the way I was taught at university - for me, programming is a creative activity. It’s an iterative process, and the fewer constraints I have on how I achieve something (not what I achieve), the more I enjoy it, the more productive I am, and the better, by many measures, the end solution will be.
I think that’s a key part of what’s missing from CS education: understanding that and leaning into it, both to increase engagement and to get people thinking outside the box for solutions to their problems. Students seem to be taught so much, but very little of “Here’s a high-level problem, provide a solution”, which is the “core loop” of software development (outside of being a code monkey implementing other people’s designs). You go over requirements and specifications, but you don’t actually DO it… you don’t speak to people, ask the questions, realise they don’t know much about software, then later go “Oh shit, I made this assumption and made the wrong thing!”
One of the things I used to like more than anything was achieving things despite constraints. For example, back in the 90s, before AJAX was even a thing, I created a site for a betting company that was a SPA pulling in data and live betting odds. I did this with a message queue in JavaScript, a hidden frame used to send messages from the queue to the server via a form, and the server then returned JavaScript code which executed, put the data where it was needed and updated the page. I absolutely loved that project, and most people on the team just couldn’t believe it was even possible.
But I didn’t solve it through engineering, I solved it through playing - trying things, seeing what worked and what didn’t, adapting the idea, and so on until I found something that worked - and it was based on things I’d been messing about with in my own time (somewhat bizarrely, creating a sort of online aquarium of Dr. Seuss fish where each fish was a person viewing the site!)
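For the curious, the shape of the trick was roughly this - a from-memory sketch in modern-ish TypeScript, where the element names, endpoint and response format are all made up for illustration:

```typescript
// A minimal sketch of the pre-AJAX "hidden frame" trick (illustrative names only).
// Messages are queued and flushed one at a time by submitting a form into a
// hidden iframe; the server responds with a <script> that calls back into the page.

const queue: string[] = [];
let busy = false;

function send(message: string) {
  queue.push(message);
  flush();
}

function flush() {
  if (busy || queue.length === 0) return;
  busy = true;

  // The form and iframe are assumed to already exist in the page, e.g.:
  // <iframe name="hidden-frame" style="display:none"></iframe>
  // <form id="msg-form" target="hidden-frame" method="post" action="/api">...</form>
  const form = document.getElementById("msg-form") as HTMLFormElement;
  (form.elements.namedItem("payload") as HTMLInputElement).value = queue.shift()!;
  form.submit(); // the response loads into the hidden iframe, not the visible page
}

// The server's response is an HTML page containing something like:
//   <script>parent.receive({ odds: { "race-1": 2.5 } });</script>
// which the iframe executes, handing the data back to the parent page.
(window as any).receive = (data: { odds: Record<string, number> }) => {
  for (const [id, price] of Object.entries(data.odds)) {
    const el = document.getElementById(id);
    if (el) el.textContent = String(price); // update the page in place
  }
  busy = false;
  flush(); // send the next queued message, if any
};
```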
I think if we can inject more of the creativity, tinkering, iterative, playful side into our education it’ll make a huge difference.
I left university in the late 90s and got my first job based on the things I’d been messing about with in my spare time, using the university’s facilities and at home (Unix, Internet protocols, client/server architecture, distributed computing, etc.), rather than anything I’d been taught. I learnt more in my first 3 months at work than in 3 years of education.
Then the dot-com boom hit, and the number of applicants for any position surged - everyone was going into software development for the money. The whole team became involved in selecting candidates and being part of the interviewing process - it was a nightmare trying to give every person a fair chance. We had some good hires and some bad hires, but the bad hires became such a problem because we had to go through the recruitment mill again.
But we realised that the number one factor in whether someone would be a good hire was not education, but their own personal projects. That’s what mattered. Doing this for fun was the key indicator of being good, and it became the ONLY thing we looked for on CVs in the first pass. It didn’t matter if you had a 1st from Cambridge - if you didn’t demonstrate a passion for the subject, you didn’t get an interview. It was a huge success, and we built an amazing team and saved ourselves a ton of time during recruitment.
Those people still exist though - I see it all the time! But the “industry” has now grown so much that in any given field there are (relatively) fewer people being attracted to it. For example, back in the 80s I was drawn to the personal computer, and in the 90s to the internet - those things are staples of everyday life now. But I can see young people today being attracted to things like AI, drones, quantum computing, 3D printing, and so on as well.
Definitely give Ruthless a go, I love it… it reminds me of early ARPG games on higher difficulties. Positioning really matters, and you have to adapt based on what you get. It seems to have been the proving ground for PoE2’s new tempo.