Cool monitor. I’m planning on using it the same way as you for the most part. But also gaming.
I’ve been thinking for many years that this will be the future.
I don’t think we’ll be quite there yet.
I mean, up to 2TB of storage is pretty impressive, but for the occasional higher workload the CPU is obviously not powerful enough - though it remains to be seen where exactly this “higher workload” begins.
Maybe it’s good enough for some basic GIMP stuff, we’ll see.
However, you just made me think again…
So far, I merely thought about “playing” a bit with the convergence feature, but not seriously using it, except maybe on rare occasions at a friend’s house or something like that.
But I just realized that for the stuff I do most (browsing, checking mail), attaching it to my 24" is at least as good as firing up my Librem 13, so we will see.
I also think Purism’s approach to convergence is the most viable that has been presented so far. (Those that I seem to remember either had “just” the mobile apps on the desktop or even two separate ecosystems.)
Having the wealth of Linux desktop apps available, some of which adapt to mobile, plus some apps targeting mobile use while still useful in desktop mode, is a perfect approach in my opinion.
Maybe we have to wait some iterations until that works smoothly, but the direction is excellent.
I don’t think the phone will be powerful enough for a while (never say never) to become a desktop in many scenarios.
I have thought it’d be cool to have the phone be a tablet. Maybe a phone to tablet dock. The travel monitor with touch screen and dock???
Or a tablet as dumb terminal remote driven from phone or desktop.
Fun to imagine scenarios.
In my head the problems won’t get fixed until I break it a whole bunch and submit bug reports.
I plan to try out the phone as a light-weight desktop. Basic word-processing/email/browsing type stuff.
I won’t be getting rid of my laptop any time soon, though.
A lot of it depends on what you want your PC to be able to do. If all you want in your PC is to be able to browse the web and edit some documents, then the Librem 5 might be good enough.
One of the big questions is how much RAM the Librem 5 will have. Purism has committed to a minimum of 3 GB of RAM, which is certainly enough to run the Linux/Wayland/GTK+/Phosh software stack on a phone, but 3 GB won’t allow you to do much multitasking if using the Librem 5 as a PC. If you like to open 20 tabs in your web browser, then you will be frustrated with 3 GB of RAM. If Purism decides to give us 4 or 6 GB, then the Librem 5 will be much better as a PC.
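To put the RAM question in rough perspective, here is a back-of-the-envelope sketch; the per-tab and base-system figures are assumptions for illustration, not measurements:

```python
# Rough estimate of how many browser tabs fit in a given amount of RAM.
# Both constants below are assumed round numbers, not measured values.
MB_PER_TAB = 150       # assumed average memory per open browser tab
BASE_SYSTEM_MB = 1000  # assumed OS + shell + browser process overhead

def tabs_that_fit(total_ram_gb):
    """How many tabs fit before the system starts swapping?"""
    free_mb = total_ram_gb * 1024 - BASE_SYSTEM_MB
    return free_mb // MB_PER_TAB

print(tabs_that_fit(3))  # 3 GB phone -> 13 tabs
print(tabs_that_fit(6))  # 6 GB phone -> 34 tabs
```

With these assumed numbers, 3 GB runs out at around a dozen tabs, while 6 GB comfortably covers heavy browsing - which is the gap the paragraph above is worried about.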
The i.MX 8M Quad doesn’t have a powerful CPU (1.5 GHz 4x Cortex-A53). The Cortex-A53 doesn’t support out-of-order execution like the Cortex-A7X series, and it only has a 1 MB shared L2 cache. It isn’t really designed to be a CPU for a PC. It is fine for light tasks, which is what most people do with their PCs, but you definitely wouldn’t want to do anything calculation-intensive, like editing a big spreadsheet, with the Librem 5.
The Vivante GC7000Lite GPU on the i.MX 8M has decent AnTuTu UX and 3D scores, so you can play 2D and 3D games at low resolutions (see Quake II on the Librem 5 Dev Kit), but we don’t know what kind of cooling the Librem 5 will have. If the Librem 5 uses a standard graphite sheet like most phones for its passive cooling, then you are probably going to get throttling. If it adds more expensive cooling like heat pipes or a copper sheet, then you might not get the throttling.
Another issue is that the Etnaviv driver only supports OpenGL ES 2.0, so you will have to install the proprietary Linux driver if you want to play many of the recent games that use OpenGL ES 3.0/3.1 or Vulkan. I wouldn’t count on much gaming on the Librem 5.
A big question is whether we will be able to use FOSS drivers for the Hantro G1/G2 video decoders. The Linux kernel recently added support for MPEG-2 and H.264 in the Hantro G1, but there is still no support for MPEG-4, VP9 and H.265. I have seen commentary that we should get FOSS drivers for the other codecs, but for now we have to use software decoders, which means that you might get overheating and throttling issues when watching a long video, and you will certainly get very poor battery life. I assume that people who want hardware video decoding will install the proprietary Linux drivers for the i.MX 8M.
Another question is what kind of external screen the Librem 5 will support. We know that you can only have one external monitor, because HDMI alt mode and DisplayPort alt mode over USB 3 don’t support dual-display mode, and the i.MX 8M doesn’t support it either. The i.MX 8M supports 4K at 60fps with both HDMI and DisplayPort, but the HDMI is only available using a binary blob in the i.MX 8M. Unless Purism adds a separate chip to convert from a MIPI-DSI display to HDMI, we are only going to have DisplayPort. (Purism will have to add a USB-C host/Power Delivery chip that can convert from DisplayPort to DisplayPort alt mode, since that is also not supported by the i.MX 8M.) Just because the i.MX 8M supports 4K doesn’t mean that the Librem 5 will support it, but even if it is supported, you are probably going to want to install the proprietary drivers if you want 4K video, and forget about high-resolution gaming.
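For a sense of the bandwidth involved, a back-of-the-envelope sketch (raw pixel data only; real links add blanking intervals and encoding overhead): raw 4K60 at 24 bits per pixel is close to 12 Gbit/s, which is roughly why it needs the full four-lane DisplayPort alt mode (about 17.28 Gbit/s effective at HBR2) rather than two lanes shared with USB 3 data.

```python
# Raw (uncompressed, ignoring blanking) pixel bandwidth of a display mode.
def raw_gbps(width, height, fps, bits_per_pixel=24):
    return width * height * fps * bits_per_pixel / 1e9

print(round(raw_gbps(3840, 2160, 60), 1))  # 4K60    -> about 11.9 Gbit/s
print(round(raw_gbps(1920, 1080, 60), 1))  # 1080p60 -> about 3.0 Gbit/s
```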
The final question is which Wi-Fi/Bluetooth chip Purism will use, which will be needed for a Bluetooth keyboard and mouse. Purism was working with Redpine Signals to produce an 802.11n chip that can run on FOSS drivers, but recent announcements have not mentioned Redpine Signals, so I suspect that either the project was abandoned or it is not yet ready. Purism has probably found another Wi-Fi/Bluetooth chip that it can run over SDIO using a FOSS driver, but we have no idea what it will be or how well the Bluetooth will work.
I don’t know what kind of software issues are involved with making convergence work, but given all the major challenges that Purism is facing, I wouldn’t bet on convergence working well from day 1 when the Librem 5 is released. Frankly, if you need convergence, you are probably better off waiting.
Purism has shown with the Librem 13/15 that it will keep improving the software over time, so I wouldn’t give up on convergence if it doesn’t work very well on day 1. However, I don’t see a clear path for Purism to make convergence work well on the hardware side. If Purism moves to the i.MX 8M Mini in future versions of the Librem 5, it will have much better energy efficiency, but it will have even less ability to do convergence. The i.MX 8 QuadMax has the horsepower to run a PC, but it would never work as a mobile SoC for a phone.
It looks very unlikely that NXP will decide to make a powerful mobile i.MX 8-series SoC, and the i.MX 9 is probably 4 years in the future. The best prospect for convergence is the Rockchip RK3588 (4x Cortex-A76, 4x Cortex-A55, Mali G52 GPU, 8nm FinFET) which is due in Q1 2020, but the free Lima GPU driver will have to improve a lot, and we have to pray that Rockchip will release info about the chip at http://opensource.rock-chips.com like it did for its previous chips. Hopefully, Google will decide to make a Chromebook reference design based on the RK3588 like it did with the RK3288, so we get Libreboot support for it.
that’s why Todd Weaver said “converging on convergence” and not merely convergence from day 1… but it’s gonna be darn close to it…
https://www.youtube.com/channel/UC64-PJ-yoF7aJ9pIHWEbrTQ/videos
I do almost everything I need on a Chromebook running Linux. The hardware doesn’t seem that under-powered for my needs.
Most ARM-based Chromebooks have two or four Cortex-A7X cores plus four Cortex-A5X cores, so your Chromebook probably has a better CPU than the i.MX 8M, but you actually don’t need that good of a CPU for most tasks. I expect that the Librem 5 will be good enough to do convergence for 50% of people. If you use your PC to read email, browse the web, watch low-resolution YouTube videos, and write letters in LibreOffice, then the Librem 5 will be adequate. Maybe I should have made that clear in my post.
I just plan on escaping Android. I have had Linux running on laptops since 2009, so I don’t need the Librem 5 to be a computer. I am just super hyped to have an Android-free Linux phone. I hope it can handle YouTube.
And I’ll also use it as a mobile MP3 player. Can’t wait to see a final release candidate.
This is basically 90% of my workflow. I will say, too, that with time and performance tuning I think its potential will expand. But this is also kind of a version 1 situation. I am hopeful that if there are future iterations, the convergence aspect will get better and better.
Smartphones and SoCs have been powerful enough to run lightweight distros for a while (I’m looking at you, Raspberry Pi). We have to keep expectations reasonable. If it takes a massive video card to run xyz game, then we can’t reasonably expect it to run on a much smaller device. However, mobile browsers are just as capable as their desktop equivalents, as an example.
The Librem 5 has one big benefit over an Android phone: Linux software is compiled, not interpreted. So the Librem 5 has the potential to be faster than an OS built around an interpreted runtime.
Doesn’t mean it will be faster, just the potential.
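The compiled-vs-interpreted gap shows up mainly in CPU-bound loops. A minimal sketch of that effect, using Python’s own C-implemented builtin as a stand-in for compiled code (this is just an illustration of the general point, not a Librem 5 or Android benchmark; the absolute timings depend entirely on the machine):

```python
# Interpreter overhead illustrated: the same sum computed by an
# interpreted Python loop vs the C-implemented builtin sum().
import timeit

data = list(range(100_000))

def py_sum(xs):
    total = 0
    for x in xs:  # every iteration goes through the bytecode interpreter
        total += x
    return total

t_loop = timeit.timeit(lambda: py_sum(data), number=50)
t_builtin = timeit.timeit(lambda: sum(data), number=50)  # loop runs in C

print(f"interpreted loop: {t_loop:.3f}s  |  C builtin: {t_builtin:.3f}s")
```

On a typical machine the interpreted loop is roughly an order of magnitude slower, which is exactly the kind of overhead that ahead-of-time compilation is meant to remove.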
Although, watching the videos Purism has been posting these last two weeks, the dev kit is running desktop applications and using their libhandy library to scale with intriguing results. Sure, they could be cherry-picking the videos, and only a full-fledged device in our hands will ultimately prove how powerful it is, but I’m genuinely optimistic this device will be quite capable as an everyday phone and a web browsing/word processing/music player/emailing replacement.
looking at the Raspberry Pi 4B
The previous generations of Raspberry Pi ran Raspbian just fine, although the new Pi 4 is incredible for the price.
The Vivante GC7000Lite GPU in the i.MX 8M Quad is significantly better than the 250 MHz Broadcom VideoCore IV in the Raspberry Pi 3B+, but the CPU cores are almost exactly the same: 1.5 GHz 4x Cortex-A53 in the i.MX 8M Quad vs 1.4 GHz 4x Cortex-A53 in the Raspberry Pi 3B+.
However, that CPU performance (roughly the same as the Raspberry Pi 3B+) will be half that of the Raspberry Pi 4B with its 1.5 GHz 4x Cortex-A72, according to this TechRepublic article.
It concludes: “The Raspberry Pi 3 B+ was a half-decent desktop PC, but for everyday use the Raspberry Pi 4 feels close to my work laptop — a machine costing around 20 times the price.”
Not exactly. Most Android apps are written in Java or Kotlin, which is compiled to Java bytecode and then executed by the Android Runtime (ART), which uses ahead-of-time (AOT) compilation to convert the bytecode into native code. However, people who write processing-intensive Android apps usually write C or C++ code using the Native Development Kit (NDK), which bypasses ART and executes natively, instead of the Software Development Kit (SDK), which uses ART. Here are some tests showing the difference:
https://www.androidauthority.com/java-vs-c-app-performance-689081/
So most of the Java apps end up being executed as compiled native code in one way or another.
Benchmarks with today’s Java usually show it being 10%-20% slower than C. I haven’t seen any benchmarks of bytecode in ART vs C using GTK, but I would expect a GTK app written in C to be around 30% faster than a Java app on Android, because the GTK library is pretty efficient and pure C. The Librem 5 also supports HTML5 apps, though, and those will be substantially slower than a Java app on Android.
The only benchmarks I can find are these, which are very outdated, since they are based on Ubuntu Touch (which used Qt 5) vs Java bytecode in the Dalvik virtual machine.
However, when we get to games on Android, which are programmed in C/C++ against the NDK, I doubt that you are going to see much difference from Linux games. I doubt there is much difference in web browsers either.
Thanks for confirming me with such a detailed post. Cheers.
I’d actually take the idea even a step further. Sooner or later I want an analog watch that carries a powerful computer. That’s all I want to carry around. Monitor, keyboard and mouse stay at the office and connect to my watch when I sit down at the office table.
(Note: I don’t want to fiddle around on the tiny smartwatch screen, hence a classic analog watch. If anything, then retina projections and a conversational interface.)
For more, see: Purism smartwatch? and 2-in-1 convertible laptop?
If you are using Wi-Fi or Bluetooth to send the signal to the monitor, your bandwidth is limited, so you either have a low-resolution screen or a low refresh rate. You will also need to charge your smartwatch frequently, because it is going to take a lot of energy to send a video and audio signal over Wi-Fi or Bluetooth. You will have to run the SoC at a low frequency, because there is very limited cooling capacity in a watch, and you don’t want a hot watch against your wrist. Looking at the Apple S2 SiP, you get a 780 MHz dual-core CPU, 512 MB RAM and 8 GB of flash memory. Perhaps with today’s best tech at 7 nm, you could get four Cortex-A57 cores at 900 MHz, 1 GB of RAM and 32 GB of flash memory. I suspect that you would have to throttle that down to 500 MHz, because 900 MHz is only for short bursts, but you need sustained operation when generating a desktop screen. The continuous Wi-Fi operation is going to kill your thermal envelope. You could check your email, but watching a full-screen video in 1080p at 60 fps probably isn’t possible.
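To put numbers on the wireless-bandwidth problem: raw 1080p60 video is far beyond what Bluetooth or 802.11n can carry, so heavy compression, and the CPU and energy cost that comes with it, is unavoidable. A rough sketch, with the link speeds being assumed ballpark figures:

```python
# Rough check: raw 1080p60 video bandwidth vs typical wireless links.
def raw_mbps(width, height, fps, bits_per_pixel=24):
    return width * height * fps * bits_per_pixel / 1e6

raw = raw_mbps(1920, 1080, 60)  # uncompressed, ignoring blanking
bt_mbps = 2.0      # assumed rough Bluetooth Classic (EDR) throughput
wifi_mbps = 150.0  # assumed optimistic single-stream 802.11n rate

print(round(raw))              # about 2986 Mbit/s raw
print(round(raw / wifi_mbps))  # about 20x more than the Wi-Fi link
```

Even the optimistic Wi-Fi figure is short by a factor of about 20, so the watch would have to encode the stream in real time, which is precisely the sustained load that its thermal envelope cannot handle.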
Convergence can only go so far. The main point that might be limiting the whole IT field is the brick wall predicted to arrive around 2020 regarding the limits of scaling down the circuits in processors and all other nano- to pico-scale devices. Atomic-scale circuits have too much crosstalk between adjoining signals for any smaller scale to be workable. Something new will have to be found, among all those inventions that many are supposedly working on, to get past that brick wall of scaling things down any further, like adding more processors per device. It can be shown that standard quantum mechanics has never been used to make any devices that work according to that particular theory, to answer this question. But that is another topic about physics as such.
I for one haven’t exhausted the current potential of the technology I’m currently ON, so why should I care about how far convergence CAN go? I’m quite certain that there are people in this world who get paid to HAVE concerns regarding this issue (though I don’t know what their level of interest in ethics/philosophy is).