You can use Linux perfectly well without containers. Whenever I need a new program, the first thing I do is look into the repositories of the installed distro's classical package manager, such as apt. When I don't find the software there, I search the internet and may have to install it manually. Most often that simply means unpacking it, rarely running a shell script, and installing its dependencies (packages or other things a program needs to work). So I do manually what the package manager normally does for me.
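The manual route can be sketched as a few shell commands. Everything here is illustrative: the program name `myprog`, the tarball, and the install path `~/.local/bin` are made up for the demo (normally you would download a real tarball from the project's site instead of fabricating one):

```shell
set -e
# Fake an upstream release tarball (stand-in for a real download)
mkdir -p /tmp/demo-src/myprog-1.0
printf '#!/bin/sh\necho hello from myprog\n' > /tmp/demo-src/myprog-1.0/myprog
chmod +x /tmp/demo-src/myprog-1.0/myprog
tar -C /tmp/demo-src -czf /tmp/myprog-1.0.tar.gz myprog-1.0

# The actual "manual install": unpack, then copy into a directory on $PATH
tar -C /tmp -xzf /tmp/myprog-1.0.tar.gz
mkdir -p "$HOME/.local/bin"
install -m 755 /tmp/myprog-1.0/myprog "$HOME/.local/bin/myprog"
"$HOME/.local/bin/myprog"    # -> hello from myprog
```

A package manager automates exactly these steps, plus dependency resolution and later removal or upgrades.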
One reason not to install a program from the repos is that I need a newer version, which is rarely the case, or that I need several versions in parallel. Depending on their release model, some distros' packages are more up to date than others.
So far we are still talking about ordinary programs, which run as processes and have a certain degree of separation: each process has its own memory, but they all share the filesystem, for example.
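You can see both sides of this in a few lines of shell. A subshell `( ... )` runs as a separate child process in common shells, so it gets its own copy of a variable (separate memory), yet both processes write to the same file (shared filesystem). The file path is just a demo choice:

```shell
shared=/tmp/shared_demo.txt
rm -f "$shared"
x="parent"
( x="child"; echo "$x" >> "$shared" )   # subshell = separate process; changes its own copy of x
echo "$x" >> "$shared"                  # parent's x is unchanged by the subshell
cat "$shared"                           # both wrote to the same file: child, then parent
```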
With a VM you get a virtual computer with its own OS and programs. You can start several in parallel and get a high degree of separation: a malicious program would have to break out of both the guest OS and the VM to reach the host OS. But VMs have a relatively big footprint, which might be considered too big for separating small programs that aren't security sensitive.
There is something in between, and that is OS-level virtualization, alias containers. Be aware that "container" is a pretty general term for something that contains something, and it is used in many contexts, in IT and beyond.
Containers do not need to run a guest OS and still provide a certain degree of separation between programs (how much depends on the chosen container technology). Usually each instance has its own separate view of the filesystem and possibly of other resources. Containers can be run many times in parallel, can be started and stopped more quickly, and have a lower footprint compared to VMs.
Containers are also convenient for developers, because everything a program needs, e.g. libraries, databases etc., can be included in the so-called container image. So a container and its developer don't rely as much on the OS package management, e.g. to provide the dependencies in the needed versions. A developer can ship a new version faster without having to wait for the OS to provide updated dependencies; the dev can simply put everything, in up-to-date versions, into an updated container image.
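As a sketch of what "everything included in the image" means, here is a minimal, hypothetical Dockerfile being written out. The names (`myapp`, the `libfoo1` dependency, the `debian:stable-slim` base) are all illustrative; the commented `docker` commands at the end require Docker to be installed and are not run here:

```shell
mkdir -p /tmp/myapp-build
cat > /tmp/myapp-build/Dockerfile <<'EOF'
# Base filesystem to build on (not a full booted guest OS)
FROM debian:stable-slim
# Bake the dependencies into the image, in the versions the dev chose
RUN apt-get update && apt-get install -y --no-install-recommends libfoo1
# Ship the program itself inside the image
COPY myapp /usr/local/bin/myapp
CMD ["myapp"]
EOF
# Building and running would then be:
#   docker build -t myapp /tmp/myapp-build
#   docker run --rm myapp
```

The point is that the image, not the host's package manager, decides which library versions the program sees.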
Unfortunately this is not only a pro but at the same time a con and a point of criticism, because it would be naive to believe that every container image will be updated with security fixes. Some may still carry vulnerabilities around while the corresponding OS packages have already been updated.
Also, every container image might include, say, library L, so there can be many copies of the same thing occupying storage and memory and burdening the system. Some container technologies can share common parts (e.g. image layers) to mitigate this.
VMs and containers can both be useful, from simple single-user desktop systems up to huge cloud-computing infrastructures.
The question is: what tasks do you want to separate and why?