This won’t be short (sorry)
That’s a big question, one that researchers everywhere are trying to figure out - what the future of humankind is, how AI is going to develop and what it will become. The only thing that is certain is that it’s going to change things, but “how”, “how much” and “how fast” are still all guesses - especially in the long run. And it should be noted that the change won’t be even: it won’t happen evenly over time, and systems will have varying levels of efficiency/usability/power/threat (depending on use) depending on the environment and how digitized it is - cities vs. rural areas is the obvious one, but there are also differences between countries (how digitized their services are) etc. The coming of AI is not just about the individual but about the greater socio-technical system.
One distinction I’d make is between “AI” and “AGI” (and “ASI”). Although there is no general agreement on what those terms mean precisely, I’d categorize all current systems as AI systems. What is problematic is that “AI” is also the general term for this type of technology, yet it’s used to differentiate between more advanced levels of it as well. Another point is that these are all systems: the AI algorithm is just one part of a larger IT system, which affects its function and its efficiency/usability/power/threat - think system settings, what APIs it’s allowed to use, what networks it has access to, UI, computing power, memory constraints etc. These are important for A) assessing an (AI) system’s potential and B) planning your own system (which I’ll get to in a bit).
As for the level of technology, all current AI is “AI-level”. “AGI” (artificial general intelligence), as I see it, is a technological leap. However, it may not be the next leap - there may be other aspects that need to take a leap first before AGI is possible. AI did not reach its current level of potency and popularity by itself. This is not the first time there has been a big spike in its development, and it has had several “winters” where its relevance diminished because it couldn’t deliver what was expected. For this level to be reached, cloud computing and networks needed to become this expansive, more computing power was needed in the form of powerful GPUs (particularly for LLMs), web and mobile technologies were needed for the UI/UX, networks needed to get better (fiber and mobile), algorithms needed to get better (in this case particularly the G, P and T), and a whole lot of available and hoarded data was needed (LLM AI models need it massively), and so on. I’d compare this to jumping from the text-based BBSs and phone modems of the 80s to the ISDN and graphical web browsers of the 90s, while an AGI-level change would probably be something like getting the fiber optic networks and Google search of the 00s.

AGI is supposed to be a more generally applicable AI, compared to what AI tools are now. One challenge in defining this is separating what something is technically from what it may merely seem like, because the UX (user experience) part of AI is crucial. We humans are easily deceived, and there is a whole score of biases and flaws in our bio-computer that can be exploited to make something seem more than what it is. I’d posit that even current AI systems will have such good UX in a few years that we may not even actively want anything smarter… which is not going to stop the development, of course. This is just to say that AGIs will come at some point in the future, but we may not be able to distinguish them from advanced AI systems, at least at first.
But that’s not all. AGI won’t be the “all-powerful AI” either. It’s only supposed to be better and more pliable (think: the calculator was only for calculating, but then it could do graphs and games… and then we had mobile phones). Then there’s the theoretical ASI, artificial super intelligence. Sure, it’s supposed to be smart, but is that all there is to it? Is smartness the goal in the first place, or is it that it can be used for any purpose, which may have nothing to do with possessing all the knowledge (why should it hold it all constantly if it can access it?). There are so many open questions about what ASI would be, and whether we could even make such an intelligence, that I won’t continue on this track. Suffice to say, from the purely computing side, creating a GPT/LLM AI model needs a lot of data, a lot of floating point calculations, and all the memory storage and bandwidth. Those three are the physical limits for creating better and more powerful AIs.
- At this point there is already concern that current models have used just about all available data - developers have resorted to using data they shouldn’t, and are worried about new data that is itself created by… AIs (which may lead to a kind of “fax of a fax of a fax of a fax…” diminishing accuracy/intelligence). So there is work on how to optimize and work with what’s available - bigger isn’t always better or more efficient for the task. Quality of data will become more of an issue.
- Then there’s the computing power, which actually exists (for current needs), but most of it is optimized for kinds of calculation that aren’t useful for AI development (processors are optimized for different number formats etc.), and AI needs to do a lot of computing - sorry, so do other users (supercomputers are used for other stuff too). Well, you can always build more computing facilities (as has been done and is being done), but there are limits to that too, namely power and water: those facilities take a lot of juice since AI calculations are very intensive, which puts a strain on energy production and transfer (Ireland, for example, is in trouble with all its data centers), to say nothing of how hard this makes the transition to green energy when consumption goes up due to AI. And when a lot of energy is used, a lot of heat is produced, which needs to be cooled - a data center can use a small city’s worth of potable water, which is starting to become scarce in several regions globally due to warming temperatures. There is tech development for efficiency, but that takes time, slowing things down a bit.
- Finally, the memory challenge: memory has developed at only a fraction of the speed that computing capacity has (Moore’s law and all that) - bandwidth is the bottleneck in creating large parallel clusters beyond certain sizes. Models can be divided into chunks and synced together, but there are limits to how well that works.
At the moment, as far as I’ve come to understand it, memory limits how large a model can be created. This goes down to the level of how electrons move, and there are physical limits on squeezing out more bandwidth to enable parallel computing. Quantum computing might help, but it is still a long way from being able to. Sure, there are those few supercomputers with the right type of hardware (not all of them are as good at all the math) that could make a breakthrough, but that won’t directly mean a new global age starts - just that someone somewhere has a better toy that they hoard for themselves and maybe use for some big heavy tasks (such is the nature of power). It could be powerful but not omnipresent - and even then it can manifest only via systems and networks, and will probably be prioritized for… something other than you. And if ASI or the singularity ever appears despite all those tech hurdles, in a future where we might be more interested in directing our resources towards food, cooling, underground living etc., I’d be surprised to see that AI take any interest in humans (besides maybe pity, if it had feelings) - but this goes well beyond the original question.
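To give a feel for the scale (my own back-of-the-envelope numbers, nothing the big players have published), here’s a rough sketch of why memory alone becomes a wall:

```python
# Rough VRAM estimate for merely holding a model's weights (illustrative numbers only).
# Training needs several times more on top of this: gradients, optimizer state, activations.
params_billion = 70          # e.g. a 70-billion-parameter model
bytes_per_param = 2          # 16-bit floating point weights
weights_gb = params_billion * 1e9 * bytes_per_param / 1e9
print(f"~{weights_gb:.0f} GB just for the weights")  # ~140 GB -> already spans multiple GPUs
```

Once a single model no longer fits on one device, you’re splitting it across GPUs and syncing them, which is exactly where the bandwidth bottleneck above starts to bite.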
So… all that (which is very condensed and simplified and may have some details wrong, as the big players are not keen to share) is to say that AI (or AGI) is not something that just happens and appears. Nor is it something that can’t be controlled (or at least limited) - even if controls and limits were not built into the models themselves (which is very much what is done) and into the surrounding systems (think quantum computing and adding quantum-safe crypto). And this is just to set a frame of reference for assessing the risks (your own situation may vary depending on your personal risk model, but also on your country/area, your dependence on systems, the level of comfort you want etc.). Dystopias of mind control on a massive scale are not resource-feasible (well, maybe excluding social media) in the near future (and viruses could be made with AI’s help, but I wouldn’t count that as part of the AI threat here, nor see AI controlling any virus due to lack of tech - there are more urgent needs for those resources and more efficient ways to wield power).
As a side note: that new GPT-4o seems to be a rename of what was rumored to be GPT-5, based on the timing. It looks like just a more optimized version of the previous model - something to gain a fraction in the competition between the big models. Nothing new there in terms of AI development.
The “DIY” question. Yes, you can do a lot on your own computer, even offline. It won’t be the same as GPT-4o or a similar large model, but you can boost your productivity a lot with smaller models too. My suggestion at the moment is to invest in a 16 GB Nvidia GPU, as those have a pretty good ratio of usefulness (size of model, price, computing power). I’d imagine you’re not about to invest in an A100 or a whole cluster of them (which gives you a lot of power and capability but also lightens your wallet and brings a hefty energy bill). That may chafe some, as it requires Nvidia hardware and drivers - you may want to take that into account and isolate them if they go against your risk model. You could run a model on CPUs (or get an NPU card), but those are comparatively limited for AI computing. Beyond that, there are many ways you might tackle designing your own system. If you want a “second brain” on some topic, like asking questions related to your security needs and analyzing your security logs for threats, what you need is RAG (retrieval-augmented generation: an AI model paired with the specific info that is important to you, which it pulls in when answering - see the sketch below), and you can have many of those for various purposes (security, work1, work2, hobby1, hobby2 etc.). For an organization’s security needs (you and your family & friends) there are AI-enhanced tools that take an ISMS and boost it - unfortunately I couldn’t say if any would be in an individual’s price range and/or self-hosted. On the other hand, to tackle a threat I wouldn’t immediately jump to AI as the solution (it might come down the line later, as your overall security matures).
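To make that “second brain” idea concrete, here’s a minimal sketch of the RAG pattern in Python. It assumes you’ve installed sentence-transformers, llama-cpp-python and numpy and downloaded a quantized model file that fits your GPU - the model path and the notes below are placeholders, not recommendations:

```python
# Minimal local "second brain" sketch: retrieve your own relevant notes, then ask a local model.
# Assumes: pip install sentence-transformers llama-cpp-python numpy
import numpy as np
from sentence_transformers import SentenceTransformer
from llama_cpp import Llama

# 1. Embed your own notes/logs once (re-run when they change).
notes = [
    "Firewall rule change on 2024-05-01: blocked inbound 23/tcp.",
    "Backup policy: weekly full backup to an offline disk, monthly restore test.",
    "Router admin password rotated in April.",
]
embedder = SentenceTransformer("all-MiniLM-L6-v2")  # small, CPU-friendly embedding model
note_vecs = embedder.encode(notes, normalize_embeddings=True)

# 2. Load a quantized local LLM (placeholder path; n_gpu_layers=-1 offloads to the GPU
#    if llama-cpp-python was built with CUDA support, otherwise it runs on CPU).
llm = Llama(model_path="models/some-7b-model.Q4_K_M.gguf", n_ctx=2048, n_gpu_layers=-1)

def ask(question: str, top_k: int = 2) -> str:
    # Retrieve the most similar notes by cosine similarity (vectors are normalized).
    q_vec = embedder.encode([question], normalize_embeddings=True)[0]
    scores = note_vecs @ q_vec
    context = "\n".join(notes[i] for i in np.argsort(scores)[::-1][:top_k])
    prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    out = llm(prompt, max_tokens=200)
    return out["choices"][0]["text"]

print(ask("When was the router password last changed?"))
```

The point is the pattern: embed your own material, retrieve the closest pieces, and only then hand them to the local model. The model itself isn’t retrained, so you can swap the model or the note collection independently for each of those “second brains”.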
Hugging Face has a good library for browsing the many models that are available. From the model cards you can also see what kind of licenses they have and how openly they describe the model and the data used. With most of them you may want to take those risk precautions anyway - and also in case a good model goes bad later on, for whatever reason. Most models are free for personal use. There are projects to create truly open and free AI. Do a search or start from https://opensource.org/ or Open Source AI Projects and Tools to Try in 2023
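You can also do the browsing and license check from code with the huggingface_hub client - a small sketch, assuming the package is installed; the search term and model id are just examples, not endorsements:

```python
# Browse models and check license metadata before downloading anything.
# Assumes: pip install huggingface_hub
from huggingface_hub import HfApi, model_info

api = HfApi()

# A handful of models matching a search term, most downloaded first.
for m in api.list_models(search="mistral", sort="downloads", direction=-1, limit=5):
    print(m.id)

# License and other card metadata show up in the model's tags.
info = model_info("mistralai/Mistral-7B-Instruct-v0.2")
print([t for t in info.tags if t.startswith("license:")])
```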
A lot about AI is still up in the air, and a lot of the views I see presented are based on the recent GPT/LLM type of AI systems. Those are not the whole picture. It would probably be better to think about this from the point of view of “automation”, as in “what can I automate”, and then consider what kind of technology or solution to use. First define the problem and the risk, then find a solution - not the other way around. I’d be wary about letting the most advanced or theoretical new idea or application dictate reactions - those often end up as fringe cases compared to what gets adopted into mass use.