Computers in 5 Years

I used some non-free technology today that was pushing hard the idea of “GPT-4o”, which appears to be a new version of ChatGPT that integrates voice and vision and is very chatty. If we extrapolate this out over the next 5 years, it seems quite likely that humanity is edging ever closer to a future where machines are smarter and more capable than people at tasks like software development and other information jobs.

These information jobs will include computer security and health. We might get to a future where AI can infect everyone with some AI-generated biological virus that kills anyone who doesn’t subscribe, or anyone who focuses on free software, in some way so complex that it extends beyond human understanding. It’s also possible that AI could infiltrate all computer systems, creating a world where free software is no longer achievable because of superhuman malware.

As far as I can figure, the best way for me to survive that sort of future would be to have my own system that could achieve the same level of performance shown in the GPT-4o ads, except that my system would achieve those results while offline, doing all of its AI thinking on my own personal hardware rather than through third-party APIs or API keys.

What is the state of offline AGI software for personal security, and what kind of computer hardware should I purchase to prepare to run it?


@JR-Fi can probably answer the AGI questions.

I suggest holding off on purchasing any hardware until at least one of your many speculations has manifested into reality, as the capabilities of AI may change between now and then. Otherwise, you risk preparing for a future that may not occur.

The Cheyenne supercomputer was recently auctioned off, which would likely have been sufficient for your use case:


Linux OS??? :slightly_smiling_face:



Joscha Bach thinks that we will each have and use a personally trained A.I. to support us. I am not sure about that. In the future it will be about trust, as it has been throughout the whole of human evolution.

I think we can break it all down to a common ground. Everything is trust in a system, or in other humans, or in family or friends, or in open source code, or in a developer, or in someone who supports us with energy and nutrition… or medication, or just knowledge.

We have to do some research, and try to rebuild and retest the experiment, to gain knowledge about how the system works. And it is the same for A.I.

So far I do not know of any community that tries to train one together (I think it is not possible, because Joscha Bach underestimated the complexity of training an A.I. - but Linus and Stallman showed that it is possible to have a free kernel and free software). Maybe we will have some free A.I. for some spaces, and I hope so…

Right now: if you think you can do it alone… no. That will not happen. It is like computing in the 60s, with only big computers and energy consumption too expensive to afford. The A.I. we see today exists because it is hand-built by $1 click-workers (“mechanical Turks”) in India or Africa.** Sure, Microsoft tries to push it onto users in daily computer usage, like Alphabet, Apple and co., but I am not sure this will work well. The new generation of smartphones will have hardware-supported neural network chips - to harvest privacy and train A.I. during daily usage. Right now there is only server-side, data-driven training in the cloud, on copies of messages, computer code, pictures and sound files. There is such a greed for being up to date that it will always be online.

The future A.I. that Joscha dreams about will be like our own children. They will be expensive and, like real children, worth a lot more… so we will not expose them to others, and we will put a great deal of money into their training, as if over generations. And we will only share them for money, like software as a service.

It is the same as how we today study, find a job, and work only for some company that will pay “us”. I am 99.99% sure it will be the same for good and wealthy A.I. - and, like Linux today, there will be a better, commonly used A.I., ten years behind some others, but built in a good, common, community way. Free and without ads. Like the sharing of language, math, knowledge, and knowledge about food…

There is no superhuman and no shortcut. It is a hard and rocky way to gain that.

Likewise, I think today’s open source community mistrusts the free and open source exchange, because someone could use it to train an A.I. for private commercial usage… Please do not stop development, or exchange. Go further, together - with others.
Just double-check some commits, because in the future there will be some (or much, like spam) interaction by A.I.

**Just want to add that some people think they could simply run a thousand instances of a minor A.I. to grow an upper-level A.I., run that new code in a thousand instances that study daily life and the Internet and read books to train or create an upper-class A.I.… and so on. This is not the holy grail it looks like. I think it is a kind of recursive trap, because the Internet was at its peak, with every human sharing their ideas, from about 1999 to 2003.



For example, Intel is pushing its new Core Ultra CPUs as being especially suited to AI applications (with an on-chip NPU and VNNI instruction extensions for neural-network operations).
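As a rough check of what your own CPU advertises, you can look for those feature flags in the kernel’s CPU info. A minimal Python sketch (Linux-only; the flag names are whatever `/proc/cpuinfo` happens to report, e.g. `avx512f`, `avx512_vnni` or `avx_vnni`):

```python
import re

def ai_isa_flags(cpuinfo_text):
    """Return the AVX-512 / VNNI feature flags found in a /proc/cpuinfo dump."""
    return sorted(set(re.findall(r"\b(avx512[a-z0-9_]*|avx_vnni)\b", cpuinfo_text)))

try:
    with open("/proc/cpuinfo") as f:
        flags = ai_isa_flags(f.read())
    print(flags if flags else "no AVX-512/VNNI flags reported")
except FileNotFoundError:
    print("not Linux: /proc/cpuinfo unavailable")
```

Whether the kernel reports a flag only tells you the instruction exists; actual AI performance still depends on the software using it.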

How much of this is hype, with Intel jumping on the AI bandwagon that has seen ChatGPT become a household name and Nvidia pushed to sky-high valuations? You be the judge.

What is clear is that AI is usable today and anything that puts that capability in your hands (rather than in the hands of Big Surveillance) is a good thing.

I guess I would be more interested in positive things that I could do offline (at home) with AI, as distinct from defensive things against even bigger AIs, which sounds like a losing battle.

To that end, open source use of built-in neural network capability would be good for us.

… is to pull yourself off the net?


This won’t be short (sorry)

That’s a big question, something that researchers all over are trying to figure out - what is the future of humankind, how is AI going to develop, and what will it become. The only thing that is certain is that it’s going to change things, but “how” and “how much” and “how fast” are still all guesses - especially in the long run. And it should be noted that change won’t be even: it won’t happen uniformly over time, and systems will have varying levels of efficiency/usability/power/threat (depending on use) depending on the environment and how digitized it is - cities vs. rural areas is the obvious divide, but there are also differences between countries (how digitized their services are), etc. The coming of AI is not just about the individual but about the greater socio-technical system.

Something I’d draw a distinction between is “AI” versus “AGI” (and “ASI”). Although there is no general agreement on what those are precisely, I’d categorize all current systems as AI systems. What is problematic is that “AI” is a general term for this type of technology, but it has also been used to differentiate between more advanced levels of it. Another point is that they are all systems, as in: the AI algorithm is just one part of a larger IT system that affects its functions and its efficiency/usability/power/threat - think system settings, which APIs it’s allowed to use, what networks it has access to, UI, computing power, memory constraints, etc. These are important both for A) assessing an (AI) system’s potential and B) planning your own system (to which I’ll come back in a bit).

As for the level of technology, all current AI is “AI-level”. “AGI” (artificial general intelligence), as I see it, is a technological leap. However, it may not be the next leap - there may be other aspects that make a leap first, and that may need to make a leap first, before AGI is possible. AI did not reach its current level of potency and popularity by itself. This is not the first big spike in its development, and it has had several “winters” in which its relevance diminished because it couldn’t deliver what was expected. For the current level to be reached, cloud computing and networks needed to become this expansive, more computing power was needed in the form of powerful GPUs (particularly for LLMs), web and mobile technologies were needed for the UI/UX, networks needed to get better (fiber and mobile), algorithms needed to get better (in this case particularly the G, P and T), a whole lot of available and hoarded data was needed (LLM models consume it massively), and so on. I’d compare this to jumping from the text-based BBSs and phone modems of the 80s to the ISDN and graphical web browsers of the 90s, while an AGI-level change would probably be something like getting the fiber optic networks and Google search of the 00s.

AGI is supposed to be a more generally applicable AI, compared to what AI tools are now. One challenge in defining this is separating what it is technically from what it may merely seem like, because the UX (user experience) part of AI is crucial. We humans are easily deceived, and there is a whole score of biases and flaws in our bio-computer that can be exploited to make something seem more than what it is. I’d posit that even current AI systems will have such good UX in a few years that we may not even actively want anything smarter… which is not going to stop the development, of course. This is just to say that AGIs will come at some point in the future, but we may not be able to distinguish them from advanced AI systems, at least at first.

But that’s not all. AGIs won’t be the “all-powerful AI” either. AGI is supposed to be only better and more pliable (think: the calculator was only for calculating, but then could do graphs and games… and then we had mobile phones). Then there’s the theoretical ASI, artificial super intelligence. Sure, it’s supposed to be smart, but is that all there is to it? Is smartness the goal in the first place, or is it that it can be used for any purpose - which may have nothing to do with possessing all knowledge (why should it hold it constantly if it can access it?). There are so many open questions about what ASI would be, and whether we could even make such an intelligence, that I won’t continue down this track. Suffice to say, from the purely computational side, creating a GPT/LLM AI model needs a lot of data, a lot of floating point calculations, and a lot of memory storage and bandwidth. Those three are the physical limits on creating better and more powerful AIs.

  • At this point there is already concern that current models have used just about all available data - and have resorted to using data that they shouldn’t - and there is concern about new data that is created by… AIs (which may lead to a kind of “fax of a fax of a fax…” diminishing of accuracy/intelligence). So, there is work on how to optimize and work with what’s available - bigger isn’t always better or more efficient for the task. Quality of data will become more of an issue.
  • Then there’s computing power, which is actually there (for current needs), but most of it is optimized for calculating in ways that aren’t useful for AI development (processors are optimized for varying number systems, etc.), which needs to do a lot of computing - and sorry, so do other users (supercomputers are used for other stuff too). Well, you can always build more computing facilities (as has been done and is being done), but there are limits to that too, namely power and water: those facilities take a lot of juice, since AI calculations are very intensive, which puts a strain on energy production and transfer (Ireland, for example, is in trouble with all its data centers) - to say nothing of how hard this makes the transition to green energy when consumption goes up due to AI. And when a lot of energy is used, a lot of heat is produced, which needs to be cooled - a data center can use a small city’s worth of potable water, which is starting to be scarce in several regions globally due to warming temperatures. There is tech development aimed at efficiency, but that takes time, slowing things down a bit.
  • Finally, the memory challenge, which has developed at only a fraction of the speed that computing capacity has (Moore’s law and all that) - bandwidth is the bottleneck in creating large parallel clusters beyond certain sizes. Models can be divided into chunks and synced together, but there are limits to how well that works.

At the moment, as far as I’ve come to understand it, memory limits how large a model can be created. This goes down to the level of how electrons move, and there are physical limits there on adding more bandwidth to enable parallel computing. Quantum computing might help, but it is still a long way from being able to. Sure, there are those few supercomputers with the right type of hardware (not all of them are as good at all the math) that could make a breakthrough, but that won’t directly mean a new global age starts - just that someone somewhere has a better toy that they hoard for themselves and maybe use for some big heavy tasks (such is the nature of power). It could be powerful but not omnipresent - and even then it can manifest only via systems and networks, and will probably be very much prioritized for… something other than you. And if ASI or the singularity ever appears despite all those tech hurdles, in a future where we might be more interested in directing our resources towards food, cooling, underground living, etc., I’d be surprised if that AI had any interest in humans (besides maybe pity, if it had feelings) - but this goes well beyond the original question.

So… all that (which is very condensed and simplified, and may have some details wrong, as the big players are not keen to share) is to say that AI (or AGI) is not something that just happens and appears. Nor is it something that can’t be controlled (or at least limited) - even if controls and limits were not built into it (which is very much what is done) and into systems (think quantum, and adding quantum-safe crypto). And this is just to set a frame of reference for assessing the risks (your own situation may vary depending on your personal risk model, but also your country/area, dependence on systems, the level of comfort you want, etc.). Dystopias of mass-scale mind control are not resource-feasible (well, maybe excluding social media :wink: ) in the near future (and viruses could be made with AI’s help, but I wouldn’t count that as part of the AI threat here, nor see AI controlling any virus, due to lack of tech - there are more urgent needs for those resources and more efficient ways to wield power).

As a side note: that new GPT-4o seems to be a rename of what was rumored to be GPT-5, based on the timing. It seems to be just a more optimized version of the previous model - something to gain a fraction in the competition between the big models. Nothing new there in terms of AI development.

The “DIY” question. Yes, you can do a lot on your own computer, even offline. It won’t be the same as GPT-4o or a similarly large model, but you can boost your productivity a lot with smaller models too. My suggestion at the moment is to invest in a 16 GB Nvidia GPU, as those have a pretty good ratio of usefulness (size of model, price, computing power). I’d imagine you’re not about to invest in an A100 or a whole cluster of them (which gives you a lot of power and capability but also lightens your wallet and brings a hefty energy bill). That suggestion may chafe some, as it requires Nvidia hardware and drivers - you may want to take that into account if they go against your risk model, and separate them. You could run a model on CPUs (or get an NPU card), but those are comparatively limited for AI computing.

Beyond that, there are many ways you might tackle designing your own system. If you want a “second brain” on some topic - like asking questions related to your security needs, or analyzing your security logs for threats - what you need is RAG (retrieval-augmented generation: a model augmented with the specific information that is important to you), and you can have many of those for various purposes (security, work1, work2, hobby1, hobby2, etc.). For an organization’s security needs (you and your family & friends) there are AI-enhanced tools that take an ISMS and boost it - unfortunately I couldn’t say whether any are in an individual’s price range and/or self-hostable. On the other hand, to tackle a threat I wouldn’t immediately jump to AI as the solution at first (it might come later down the line, as your overall security matures).
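As a back-of-the-envelope check on that 16 GB figure: a local model’s VRAM footprint is roughly its parameter count times bits per weight, plus overhead for the KV cache and activations. A rough sketch (the 4-bit quantization default and the 20% overhead factor are ballpark assumptions, not exact numbers):

```python
def vram_gb(params_billion, bits_per_weight=4, overhead=1.2):
    """Very rough VRAM estimate (GB) for running a quantized model locally:
    weight bytes at the given quantization, padded ~20% for cache/activations."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 2**30

# A 4-bit 13B model lands around 7 GB, so it fits a 16 GB card with
# context to spare; a 70B model at the same quantization does not.
print(f"13B @ 4-bit: {vram_gb(13):.1f} GB")
print(f"70B @ 4-bit: {vram_gb(70):.1f} GB")
```

Real numbers vary with context length and runtime, but this is enough to sanity-check whether a model is even worth downloading for a given card.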

Hugging Face has a good library where you can browse a lot of the models that are available. From the model cards you can also see what kind of licenses they have and how openly they describe the model and the data used. With most, you may want to take those risk precautions - but you may also want to take them in case a good model goes bad, for whatever reason, later on. Most models are free for personal use. There are also projects to create truly open and free AI. Make a search, or start from “Open Source AI Projects and Tools to Try in 2023”.

A lot about AI is still up in the air, and a lot of the views I see presented are based on the recent GPT/LLM type of AI systems. Those are not the whole picture. It would probably be better to think of this from the point of view of “automation”, as in “what can I automate”, and then consider what kind of technology or solution to use. First define the problem and the risk, then find a solution - not the other way around. I’d be wary about letting the most advanced or theoretical new idea or application dictate reactions - those often end up as fringe cases compared to what gets adopted for mass use.


Even on the Librem 5:


Yes, that’s a starting point. It never got optimized for any particular use, but maybe some day. Another way to get AI benefits on the L5 could be to run a RAG (your second brain or assistant) on a private server (which has more computing power and memory, and won’t drain the battery) and connect to it remotely.
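To make the RAG idea concrete, here is a toy sketch of its retrieval half - real setups use learned embeddings and a vector store, but the principle is the same: turn the question and your documents into vectors, score them, and hand the best match to the model as context. Bag-of-words cosine similarity stands in for a real embedding model here, purely for illustration (the sample notes are invented):

```python
import math
from collections import Counter

def embed(text):
    """Toy 'embedding': word counts (a real RAG uses a neural embedding model)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, docs):
    """Return the document most similar to the query; a real system would
    then paste it into the model's prompt as context."""
    q = embed(query)
    return max(docs, key=lambda d: cosine(q, embed(d)))

notes = [
    "ssh login failed from 203.0.113.7 three times last night",
    "backup to the NAS completed successfully on sunday",
    "firewall rules updated to block outbound port 23",
]
print(retrieve("failed ssh login attempts", notes))
```

On a private server, this retrieval step and the model both run server-side; the phone only sends the question and receives the answer.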


In 1988, some engineer said in 20 years we would have Cray-II power on our wristwatches.


May not have been the same prediction, but in 1993 Vernor Vinge (one of the first authors to depict cyberspace) said that in 30 years we’d have AI and we’d be gone soon after… It was a partly accurate prediction, as far as long-term tech predictions go, but his point gets missed in the oversimplification: The coming technological singularity: How to survive in the post-human era - NASA Technical Reports Server (NTRS)
