Unfortunately, as long as the operators each build their own antenna network, there will be problems with having any signal at all. I need two SIMs to have coverage in the forest where I work all winter. Stupid system.
I really hope that the Librem 5 has a good antenna, or else it will be of very restricted use - near the WiFi of our fiber net, or when I go to town (which happens seldom). There is no band available for your own base station either (although I know that Nokia makes small “personal” base stations).
ARM claims that the Cortex-A35 is the most energy-efficient processor ever, with a target power consumption of no more than 125 milliwatts (mW), and it has already achieved 90 mW on a 28 nm process at 1 GHz. So with a 14 nm process, at constant or even lower power consumption, it should be easy to exceed 2 GHz. This leads me to the already available (R&D) NXP i.MX8QuadXPlus processor, currently on 28 nm FD-SOI and therefore too big for the Librem 5 v2, but if i.MX8QuadXPlus production moves to a 14 nm process, that may shrink/change everything. This is of course just another speculation that may work, yet not tomorrow. Expert insight is welcome, please.
Hmm. If you double the clock frequency, you double (edit: not quadruple) the power drawn. So, 90 mW @ 1 GHz means 180 mW @ 2 GHz.
Power is also directly proportional to driving voltage squared (edit). Not sure how that changes with feature width, but assume you could cut it in half when going from 28 nm to 14 nm? If so, 45 mW @ 2 GHz and half the voltage.
Edit: This is entirely theoretical, and based on
P = U∙I (power equals voltage times current) and
I = f∙(constant∙U) (frequency times charge, for a capacitive load)
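A quick back-of-envelope in Python to make that scaling explicit. The 90 mW @ 1 GHz figure is ARM's claim quoted above; the halved supply voltage at 14 nm is purely an assumption, and scaled_power is just an illustrative helper, not anything from a datasheet:

```python
# Ideal CMOS dynamic power scaling: P is proportional to f * U^2
# (from P = U*I and I = f*constant*U above).
# Baseline from the thread: 90 mW at 1 GHz on 28 nm.
# The 14 nm voltage ratio below is an assumption, not a measured value.

P_BASE_MW = 90.0     # claimed power at the baseline operating point
F_BASE_GHZ = 1.0     # baseline clock
F_NEW_GHZ = 2.0      # hypothetical clock
V_RATIO = 0.5        # assumed supply-voltage ratio going 28 nm -> 14 nm

def scaled_power(p_base, f_base, f_new, v_ratio=1.0):
    """Scale dynamic power linearly with frequency and quadratically with voltage."""
    return p_base * (f_new / f_base) * v_ratio ** 2

print(scaled_power(P_BASE_MW, F_BASE_GHZ, F_NEW_GHZ))            # 180.0 mW: double the clock
print(scaled_power(P_BASE_MW, F_BASE_GHZ, F_NEW_GHZ, V_RATIO))   # 45.0 mW: double clock, half voltage
```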
From my experience working with chips (I have only done simulations, never taped out): one generation of fabrication process gives you ~30% power saving for the same design, a minor clock boost, and ~25% better density.
AFAIK, the standard A53 core design can never go above 1.8 GHz (on a modern process, due to its relatively few pipeline stages and long critical paths). And the A53 core's area can grow by 30% going from a low-power design (~1.2 GHz) to a high-frequency design (~1.5 GHz).
So, assuming NXP didn't go nuts with their layout, they can either build a smaller and less power-hungry chip with the same design, or build one with a higher frequency but the same power efficiency and area. (Or they could put more devices on the chip.)
Just my experience working in academia. I might be horribly wrong.
This comes from a formula in basic electronics. The power consumption of a transistor is P = a∙f∙C∙Vdd², where f is the frequency, a is the switching activity (so a∙f is the average switching frequency), C is the effective capacitance and Vdd is the supply voltage.
Edit:
Fundamentals of Microelectronics (2007) by Behzad Razavi, page 802.
This formula is an ideal case, assuming a perfect CMOS gate with no leakage current. But that's never the case.
In practice, it is complicated. The maximum frequency is determined not only by power, but also by how fast signals can propagate, etc. But pushing the frequency to double what the vendor/circuit compiler gives you will take far, far more than 4x the power, if it is possible at all.
Underclocking works the same way: some circuits are not stable when the clock is slow, and you can't underclock some timing-sensitive devices (real-time cores, DRAM PHYs, PCIe(?)). So halving the CPU's operating frequency will not give you 0.25x the power draw.
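To put rough numbers on that formula - note that the activity factor, capacitance and voltage below are made-up illustrative values, not figures for any real core:

```python
# Ideal CMOS dynamic power: P = a * f * C * Vdd^2
# (formula as cited above from Razavi, Fundamentals of Microelectronics, p. 802).
# All inputs are illustrative guesses; leakage is ignored entirely.

def dynamic_power(a, f_hz, c_farads, vdd_volts):
    """Ideal switching power of a CMOS block; real chips add leakage on top."""
    return a * f_hz * c_farads * vdd_volts ** 2

# Hypothetical block: 10% average switching activity, 1 GHz clock,
# 100 pF effective switched capacitance, 0.9 V supply.
p_watts = dynamic_power(a=0.1, f_hz=1e9, c_farads=100e-12, vdd_volts=0.9)
print(f"{p_watts * 1e3:.1f} mW")  # ~8.1 mW for this one block
```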
Sorry folks, I was going by memory, and well… I got things backwards. Power is proportional to frequency and proportional to voltage squared, not the other way around.
Also, this is only useful for eyeballing, as it’s based on a very simple model: a CMOS circuit is basically a capacitor that gets recharged at the switching frequency. In reality, stuff happens and it gets complicated.
According to the simple model, power is the product of voltage and current. Current, in turn, is proportional to frequency and voltage (I = C∙U∙f), i.e. P = constant∙U∙U∙f.
I have edited my reply above to reflect this. Sorry for the confusion.
Why anybody absolutely needs 8 GB of RAM on a smartphone eludes me. I am on my Tuxedo laptop now with MX-18 and have 16 GB of RAM, some 14 of which are sitting unused while I have three applications up and running besides this browser.
I think you're forgetting it's a phone, and some people, maybe even most, are gonna use it like a phone. They're going to leave instances open, play games, and have multiple things running in the background all at once without thinking about it. Linux isn't RAM-hungry like most, but I'll wait till I see how 3 GB serves the v1 before I judge, though my gut says 6 GB would be better for what they have in mind.
If it works anything like Android, this is inescapable. You don't quit things in Android; they go off to the background, and you have to go to whatever the hell they call the thing that shows you what's running and tell it that, goddamn it, you wanted the app to QUIT.
Since I generally want apps to quit and quit draining my battery and using up bandwidth in the background, this is very aggravating to me.
Yet to be seen, but I liked the way it was done on Sailfish - the application is either active (has focus) or inactive (automatically minimized onto the desktop). The application needs to maintain its inactive state and its minimized icon/animation/status (basically you have a canvas on which to draw whatever you wish to be displayed in the minimized state), but if you abuse it (update too frequently) you won't pass QA and hence will not be allowed into the app repo. Of course there are side repos, and there you can do whatever you want. If you don't want the app any more you simply close it with a simple gesture (which unfortunately has been removed in recent versions).
System/user services are just like normal daemons, except they need to be aware of the low-power (LP) state to avoid confusion (or to prevent entering that state at all and hence draining the battery). The state change is advertised via D-Bus, and it is better to handle it properly, because in LP all your timers and clocks are screwed - you cannot rely on waking up on a timer (poll/select/alert), only when the system wakes up. So you need to prepare for suspend and quickly do the needful in the wakeup slot (see the sketch after the list below).
Needless to say, if any app is holding the active state (being interacted with), the system does not enter the LP state. Inactive apps, however, need to assume they are already in LP and not do any active processing.
In other words:
Full Linux multitasking is maintained
Apps need to care about battery; the system will not force a busy app to sleep
The system enters the LP state only if all apps yield, and apps/services had better be aware of it
If the battery drains fast, it is usually some app preventing the system from going to sleep (LP), which can easily be checked with commands like top (app in D/R state)
But for people who do not pay much attention to which apps they install/use, not forcing sleep could be disadvantageous
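For illustration only - this is not Sailfish's or the Librem 5's actual API, just a sketch of the "prepare for suspend, do the needful on wakeup" pattern described above, using systemd-logind's PrepareForSleep signal, which plays a similar role on common desktop Linux stacks:

```python
#!/usr/bin/env python3
# Sketch of a daemon reacting to suspend/resume notifications over D-Bus.
# PrepareForSleep from systemd-logind stands in here for whatever signal the
# phone's power manager actually emits; adapt interface/bus names accordingly.
import dbus
import dbus.mainloop.glib
from gi.repository import GLib

def on_prepare_for_sleep(going_to_sleep):
    if going_to_sleep:
        # About to suspend: flush state and stop relying on timers firing on time.
        print("Preparing for suspend: saving state, pausing periodic work")
    else:
        # Just woke up: timers may have jumped, so re-sync clocks and re-arm work.
        print("Woke up: re-syncing clocks and resuming periodic work")

dbus.mainloop.glib.DBusGMainLoop(set_as_default=True)
bus = dbus.SystemBus()
bus.add_signal_receiver(
    on_prepare_for_sleep,
    signal_name="PrepareForSleep",
    dbus_interface="org.freedesktop.login1.Manager",
    bus_name="org.freedesktop.login1",
)
GLib.MainLoop().run()
```

A real service would additionally take a delay inhibitor lock from logind so it is guaranteed some time to finish its preparation before the system actually suspends.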
I should respond and note that your preference is yours and is not mine to criticize. My use for a smartphone, being different, requires less memory. While I can choose to game - though I generally do not - I am unclear whether it would consume much memory. In the rare instance that I do play a game, I would rather do so on a larger screen, where the experience is enhanced. Using a smartphone with a monitor means either anchoring it to a base or lugging around a heftier mobile setup. To each his own.
Wi-Fi 6 is actually the Wi-Fi Alliance's branding for 802.11ax, which still runs on 2.4 and 5 GHz (plus 6 GHz with Wi-Fi 6E). The 60 GHz standard is 802.11ad (WiGig); that one won't penetrate walls, but it allows for way more bandwidth / channels.