That worked, thank you. Is there any significant downside from using xserver?
Thanks for the link. I am not sure I fully get it. Is there a substantive downside if I use Synaptic on xserver? Thanks for the info.
Yes, the downside is you’re running X instead of Wayland. It’s not a huge deal yet. Some desktop environments aren’t adding new features to their X version (KDE, for example), so that gap will widen over time. On the other hand, some things don’t work right on Wayland yet (as you found). As it stands now, the major advantage Wayland has is in video playback quality: it will be tear-free pretty much no matter the GPU, CPU, and monitor in use, both for videos and for games (if they don’t need the Xwayland shim). If you don’t know what you’re doing, you may be happier on X for now, as it’s more mature and has more documentation online.
All that aside, you can run synaptic on Wayland; it just can’t automatically prompt for a password to run it as root. In a terminal, run sudo synaptic. That should work (I don’t have synaptic, so I can’t verify it doesn’t give up when it detects Wayland). If it complains about connecting to the X server, run xhost +SI:localuser:root, which tells X that root is allowed to connect to your Xwayland server. If that still fails, run unset WAYLAND_DISPLAY; unset XDG_RUNTIME_DIR and then retry the sudo synaptic. At that point, synaptic will think it’s running under a normal X server. Note that unsetting those environment variables will render the shell session useless when you’re done with synaptic. Best to close it right after so you don’t get confused by strange bugs.
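If you’d rather not break your shell session with those unsets, a common shell idiom (my suggestion, not something from the post above) is to strip the variables for a single command with env -u, so only that one child process sees the altered environment. A minimal sketch, using echo as a stand-in for sudo synaptic:

```shell
# env -u removes a variable for one command only, leaving the parent
# shell intact. Replace the sh -c '…' line with "sudo synaptic" in real use.
export WAYLAND_DISPLAY=wayland-0
env -u WAYLAND_DISPLAY -u XDG_RUNTIME_DIR \
    sh -c 'echo "child sees: ${WAYLAND_DISPLAY:-unset}"'   # child sees: unset
echo "parent still has: $WAYLAND_DISPLAY"                  # parent still has: wayland-0
```

This way there is no poisoned shell to remember to close afterwards.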
Much thanks for the explanation and guidance!
This answer from a Debian maintainer, dated 23 Apr 2019, tells a lot (even though PureOS is now based on Debian 10) and should help with understanding: “I think you’ve got that logic backwards. If the new display technology for Buster doesn’t work with long-established and popular packages in the archive, shouldn’t the default display technology for Buster be reverted back to the current one?”
My understanding is that X, while very reliable, is a veritable rat’s nest of code and very insecure. Wayland, while it isn’t universally supported yet, is much more secure. I’m not the best person to explain the technical details, but it seems to be rooted in the ancient history of X and the ability to bring up an X session easily on a remote machine. Because of this, and the fact that Wayland is the future, it seems a good choice on the part of Purism.
generally, Wayland on AMD iGPU/dGPU and Purism’s Intel iGPU works OK on the desktop, but if you have nVidia then Wayland is not a good experience.
There are 2 major technical issues with X which Wayland tries to solve.
First is security, which isn’t because of the ability to bring up X on a remote machine; it’s because X was designed before untrusted programs were even considered. The problem is that any program connected to an X server has basically total control over it. It can read the contents of other windows, and it can read (or block, or simulate) user input. In fact, compositors for X only work by abusing this security issue: they are just regular X programs that decide to take charge of window positioning, user input, and so on. Under Wayland, user input goes directly to the compositor, which decides where to forward it, and windows can only see each other via a protocol extension (which, with all existing compositors, prompts the user for permission).
Second is performance. X started out not just as a windowing system, but also as a drawing toolkit (think GTK or Qt). You can still use it as such. This means what is sent to X can be very low level (draw a box here, write this text there, and so on). The only problem is that this requires blocking calls to the X server. For example, starting something like gedit under X requires somewhere around 80 blocking calls to the X server. This is why remote X sessions are so terribly slow with modern applications. Also, because X was the drawing toolkit, there is no notion of a frame; drawing calls take effect immediately, so you can actually watch the text and buttons and so on appear in real time on the display. The problem is that when you are displaying video or playing a game (or scrolling a web page), the monitor will often end up ready for the next frame while some program is still in the middle of drawing it, and you get tearing. VSync tries to fix this, but causes additional input latency and is often imperfect (better recently, but still not perfect). The Wayland protocol introduces the concept of frames, so the client tells the compositor when a frame is ready to be shown. This fixes the tearing issue, even on rather poor hardware.
There are a few drawbacks. First is software support. GTK and QT, as well as firefox, support Wayland. SDL has wayland support, but lots of SDL apps get input directly from X, or call X-related functions directly. XWayland works about 90% of the time, even for games. It doesn’t support variable refresh rate, though there is a proposal for an extension for that. XWayland defaults to no remote permissions, not even for root (which is why synaptic fails). It’s also only in the last several months that things like OBS can be used under some Wayland compositors.
Note that critics of Wayland will say “it’s not network transparent”, which is a red herring. X isn’t network transparent either; there are several extensions which only work on the same machine. Yes, applications can run remotely, and you can even use local video acceleration and display remotely, but the performance suffers. Meanwhile, you can do exactly the same thing under Wayland, via the Xwayland shim. You can also use compressing remote protocols (h264-encoded VNC, for example). Also, the Wayland protocol itself can be relayed over the network with a relatively small shim on both ends; no one has bothered to write one since there isn’t actually much demand for remote applications like that anymore.
Yes, but NVidia under Linux is a bad experience anyway… If you are so inclined, you basically can’t do VFIO passthrough unless you shell out $4k for their server cards, not because of an accidental limitation, but because they specifically disable the card if it detects it’s in a VM. Likewise, if you don’t have a driver signed by NVidia, you can’t tell the card to run at more than its base clock and lowest power state, which means performance sucks. Oh, and if you use their driver, you can’t use GBM, which means no Wayland, and more painfully, no 3D-accelerated VMs via QEMU (QEMU can provide an OpenGL 4 capable virtual card to multiple VMs if it can use GBM).
I think Linus Torvalds summed up the NVidia situation quite succinctly at Espoo, Finland, back in 2012.
what do they care? their dGPU RTX cards are selling like crazy for #Blender-on-windows-users.
on the other hand so do AMD’s Epyc (2nd gen) and Threadripper (3rd gen) for gnu/linux.
Intel still has market share but hopefully that will change soon …
They don’t. That’s my point. Why should we use or worry about supporting NVidia when they don’t care about us? NVidia has something of an undeserved good reputation on Linux, since 10 years ago no one had open source drivers for GPUs on Linux, but at least their closed source drivers just worked. Neither of those conditions hold anymore, and they intentionally cripple their cards for us. Meanwhile, the 7nm Radeon cards are amazing on Linux, and compete well against NVidia in their price range. Sure, they don’t have ray tracing cores, but they still get decent performance with ray tracing.
As for Intel market share, they’re losing it. Note that the usual figures you can find are installed base, since sales data is hard to get. Installed base takes a lot longer to change, but it’s expected to noticeably shift AMD’s direction over the next year.
heh yes! i was referring to nVidia when i said “what do they care”
but i think it might be more profitable in the case of raytracing (if that’s what we are looking for on gnu/linux) to choose an EPYC/Threadripper instead of a 7nm AMD GPU card (those are VRAM-limited)
I dunno about that, the 7nm card I have has 16GB of HBM, and rather impressive memory access speeds. Sure, for $1400 a threadripper gets more memory, but that’s enough to buy 3 R7s at their current price.
yeah, but you also need a beefier power supply unit and a better UPS; not going to risk losing my work due to a blackout or frying something because of a brownout … but lol, Synaptic runs fine on an Intel Purism deblobbed CPU/iGPU
Sure… but screw NVidia, right? I know that they have recently made overtures to FLOSS regarding their drivers, but they have been pretty bad up to now.
I’ll drink to that!
drink to it ? um no thanks because it’s not the same as using open-hardware/free-software … but they are damn fast and quite-useful off-line (huge computation potential)
again, bad business practice to use something at half value because of on-line concerns … they’re locked tighter than a troll’s ass-hole.
Can’t disagree with you there, but at least AMD has been good to Linux, better than Intel or nVidia. And their new products are a much better bang/buck deal, as well, while beginning to overtake the others in terms of performance.
both nVidia and Apple got UFO building centers, so that must be why they can afford to be ass-holes … for a 50% decrease in GPU render times i sell my soul to the UFO … Aliens don’t need to conquer anything, they are already here
I’ve just used the Synaptic (0.90.2) Package Manager under Wayland. Even brief usage shows that it is fully operational now (no need to change the PureOS default session, if I’m right about this), please check:
loginctl show-session $(awk '/tty/ {print $1}' <(loginctl)) -p Type | awk -F= '{print $2}'
wayland
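As an aside, there is a shorter check than that loginctl pipeline, assuming a systemd-based login session (as on PureOS/GNOME) that exports XDG_SESSION_TYPE:

```shell
# XDG_SESSION_TYPE is set by systemd-logind for the login session and
# reads "wayland" or "x11". The fallback covers shells (e.g. over ssh)
# where it isn't exported.
echo "${XDG_SESSION_TYPE:-unknown}"
```

On a Wayland desktop session this prints wayland, matching the loginctl result.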
apt show synaptic
Package: synaptic
Version: 0.90.2
Priority: optional
Section: admin
…
– https://www.nongnu.org/synaptic/