Most ARM-based Chromebooks have two or four Cortex-A7x cores plus four Cortex-A5x cores, so your Chromebook probably has a better CPU than the i.MX 8M, but you don't actually need that good a CPU for most tasks. I expect that the Librem 5 will be good enough to do convergence for 50% of people. If you use your PC to read email, browse the web, watch low-resolution YouTube videos, and write letters in LibreOffice, then the Librem 5 will be adequate. Maybe I should have made that clear in my post.
I just plan on escaping Android. I have had Linux running on laptops since 2009, so I don't need the Librem 5 to be a computer. I am just super hyped to have an Android-free Linux phone. I hope it can handle YouTube.
And I'll also use it as a mobile MP3 player. Can't wait to see a final release candidate.
This is basically 90% of my workflow. I will say too that with time and performance tuning I think its potential will expand. But this is also kind of a version 1 situation. I am hopeful that, if there are future iterations, the convergence aspect will get better and better.
Smartphones and SoCs have been powerful enough to run lightweight distros for a while; I'm looking at you, Raspberry Pi. We have to keep expectations reasonable. If it takes a massive video card to run some game, then we can't reasonably expect it to run on a much smaller device. However, mobile browsers are just as capable as their desktop equivalents, as an example.
The Librem 5 has one big benefit over an Android phone: most Linux software is compiled to native code, not interpreted. So the Librem 5 has the potential to be faster than an OS whose apps run in a bytecode runtime.
That doesn't mean it will be faster, just that the potential is there.
Although, watching the videos Purism has been posting these last two weeks, the dev kit is running desktop applications and using their libhandy library to scale them, with intriguing results. Sure, they could be cherry-picking the videos, and only a full-fledged device in our hands will ultimately prove how powerful it is, but I'm genuinely optimistic this device will be quite capable as an everyday phone and a web browsing/word processing/music player/emailing replacement.
Looking at the Raspberry Pi 4B.
The previous generations of Raspberry Pi ran Raspbian just fine, although the new Pi 4 is incredible for the price.
The Vivante GC7000Lite GPU in the i.MX 8M Quad is significantly better than the Broadcom VideoCore IV @ 250 MHz in the Raspberry Pi 3B+, but the CPU cores in the i.MX 8M Quad are 4× Cortex-A53 @ 1.5 GHz vs 4× Cortex-A53 @ 1.4 GHz in the Raspberry Pi 3B+, so almost exactly the same.
However, the CPU performance of the i.MX 8M Quad (which is roughly the same as the Raspberry Pi 3B+) will be half that of the Raspberry Pi 4B with its 1.5 GHz 4× Cortex-A72, according to this TechRepublic article.
It concludes: “The Raspberry Pi 3 B+ was a half-decent desktop PC, but for everyday use the Raspberry Pi 4 feels close to my work laptop — a machine costing around 20 times the price.”
Not exactly. Most Android apps are programmed in Java or Kotlin, which are compiled to Java bytecode and then executed by the Android Runtime (ART), which uses ahead-of-time (AOT) compilation to convert the bytecode into native code. However, people who write processing-intensive apps for Android usually write C or C++ code using the Native Development Kit (NDK), which bypasses ART and executes natively, instead of the Software Development Kit (SDK), which uses ART. Here are some tests to show the difference:
So most Java apps end up being executed as compiled native code in one way or another.
Benchmarks with today's Java usually show it being 10%–20% slower than C. I haven't seen any benchmarks of bytecode in ART vs C using GTK, but I would expect a GTK app written in C to be around 30% faster than a Java app on Android, because the GTK library is pretty efficient and pure C. However, the Librem 5 also supports HTML5 apps, and those will be substantially slower than a Java app on Android.
The only benchmarks I can find are these, which are very outdated, since they are based on Ubuntu Touch (which used Qt 5) vs Java bytecode in the Dalvik virtual machine.
However, when we get to Android games, which are programmed in C/C++ using the NDK, I doubt that you are going to see much difference from Linux games. I doubt there is much difference in web browsers either.
Thanks for the confirmation with such a detailed post. Cheers.
I’d actually take the idea even a step further. Sooner or later I want an analog watch that carries a powerful computer. That’s all I want to carry around. Monitor, keyboard and mouse stay at the office and connect to my watch when I sit down at the office table.
(Note: I don’t want to fiddle around on the tiny smartwatch screen, hence a classic analog watch. If anything, then retina projections and a conversational interface.)
If you are using Wi-Fi or Bluetooth to send the signal to the monitor, your bandwidth is limited, so you either have a low-resolution screen or a low refresh rate. You will also need to charge your smartwatch frequently, because it is going to take a lot of energy to send a video and audio signal over Wi-Fi or Bluetooth. You will have to run the SoC at a low frequency, because there is very limited cooling capacity in a watch, and you don't want a hot watch against your wrist. Looking at the Apple S2 SiP, you get a 780 MHz dual-core CPU,
512 MB of RAM and 8 GB of Flash memory. Perhaps with today's best tech at 7 nm, you could get four Cortex-A57 cores at 900 MHz, 1 GB of RAM, and 32 GB of Flash memory. I suspect that you would have to throttle that down to 500 MHz, because 900 MHz is only for short bursts, but you need sustained operation when generating a desktop screen. The continuous Wi-Fi operation is going to kill your thermal envelope. You could check your email, but watching a full-screen video in 1080p at 60 fps probably isn't possible.
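To put rough numbers on the bandwidth side of that argument, here is a back-of-the-envelope sketch. The link rates are my own order-of-magnitude assumptions, not measured figures, and real video would of course be compressed; the point is just how far an uncompressed 1080p60 stream exceeds what a watch-sized radio can sustain:

```python
# Back-of-the-envelope: bandwidth of an uncompressed 1080p60 stream
# vs. rough usable throughput of short-range radios (my assumptions).

width, height = 1920, 1080
bits_per_pixel = 24          # 8-bit RGB, no compression
fps = 60

raw_bps = width * height * bits_per_pixel * fps
print(f"Uncompressed 1080p60: {raw_bps / 1e9:.2f} Gbit/s")   # ~2.99 Gbit/s

# Assumed usable (not headline) link rates, order of magnitude only:
bluetooth_bps = 2e6          # ~2 Mbit/s (Bluetooth Classic)
wifi_bps = 100e6             # ~100 Mbit/s (802.11n, good conditions)

print(f"Raw stream vs Bluetooth: ~{raw_bps / bluetooth_bps:,.0f}x over capacity")
print(f"Raw stream vs Wi-Fi:     ~{raw_bps / wifi_bps:.0f}x over capacity")
```

Even assuming ~100:1 video compression, the radio would be busy more or less continuously, which is exactly the thermal problem described above.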
Convergence can only go so far. The main thing that might limit the whole IT field is the brick wall predicted to arrive around 2020 regarding the limits of scaling the circuits in processors and other nanometer-scale devices. Atomic-scale circuits have too much crosstalk between adjoining signals to make any smaller scale workable. Something new will have to be found, among all those inventions many are supposedly working on, to get past that brick wall of scaling things down any further, such as adding more processors per device. It can be argued that standard quantum mechanics has never been used to make any devices that work according to that particular theory, but that is another topic about physics as such.
I for one haven't exhausted the current potential of the technology I'm currently ON, so why should I care about how far convergence CAN go? I'm quite certain that there are people in this world who get paid to HAVE concerns regarding this issue (though I don't know what their level of interest in ethics/philosophy is).
There’s a limit to how useful that is. If the application parallelizes well then go for it. Hexacore … Octacore … Decacore …
Otherwise it means something significantly new on the hardware front or significant software rework in order to make better use of parallelism, or both.
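The diminishing returns of piling on cores can be made concrete with Amdahl's law. A minimal sketch, assuming (purely for illustration) that 90% of a workload parallelizes:

```python
# Amdahl's law: best-case speedup when only part of a program parallelizes.
# The 0.9 parallel fraction below is an illustrative assumption.

def speedup(parallel_fraction, cores):
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / cores)

for n in (2, 4, 8, 16):
    print(f"{n:2d} cores -> {speedup(0.9, n):.2f}x")
```

With a 10% serial fraction, 16 cores give only about a 6.4× speedup, and even infinitely many cores cap out at 10×, which is why hexa/octa/deca-core hardware alone doesn't fix poorly parallelized software.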
Maybe a hardware brick wall would be a good thing - get people thinking more about quality than quantity i.e. freedom rather than performance.
IMO, it will be VERY hard. Even SBC computers like the Novena or the PineBook have a tough time staying usable. The boards (not including battery and whatnot) are bigger than the Librem 5.
The web site does say that - but it doesn’t say how they know that.
One minor criticism would be that the weight of the Nexdock is about the same as the weight of a laptop with the same sized screen. So you could simply carry around a laptop, at least for some scenarios. The laptop wouldn’t give you the convergence as such but you could get a lot of the same experience.
Nope it probably won’t, for me.
The stuff I use my computers for is often too resource-hungry, I think. If you mostly do email, surf, and write documents, you're completely fine.
But at least 50% of the computer users would be fine with a RasPi, so for them it works as a one device to rule them all.
Already available: https://www.asus.com/us/Phone/ASUS_PadFone_X_US/ …not Linux of course
The Nexdock looks like a Chromebook… The RasPi 4 has two micro-HDMI ports, so it's a better alternative for most people needing two screens. The cooling potential is also greater, and 4-pin Power over Ethernet is easily accessible on the board.
to each their own. That is one beastly looking device IMO.
You’re probably right.
I would still want to own and carry a classic, analog watch (connected to my computing resources or main device) that helps me communicate and stay connected. That one could host some storage to carry all my secrets (passwords, certificates, keys, OTP generator, etc.) and let me communicate through something other than a stupid, smallish watch screen. That'd be nice!