Louis Rossmann on Purism

I actually think it’s the opposite. Purism might actually be the required entity for that feature to work. They offer a SIM service, right? So they could send you a silent SMS to wake up the phone via modem if they had a service an application could subscribe to.

So the application on the sender’s device could try to make a connection/ping first and if it fails send the service a notification with some metadata which could be resolved.

Technically that should be possible, and I also like the idea of hosting this service yourself, since it could leak some metadata about your connections (potentially revealing your contacts to the service provider). It would still make an interesting project to have an open-source solution for this feature, and if you didn’t choose any service provider, you simply wouldn’t get push notifications at all (making the feature optional via opt-in).
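Sketched as code, the sender-side fallback described above might look like this. Everything here (`send_direct`, `notify_wake_service`, the metadata shape) is my own illustration of the idea, not a real Purism API:

```python
# Sender-side fallback: try a direct push first; only if the peer is
# unreachable, hand minimal metadata to a hypothetical wake-up service
# that could trigger a silent SMS. Illustration only.

def deliver(message, send_direct, notify_wake_service):
    """send_direct(message) -> bool (True on success);
    notify_wake_service(metadata) asks the service to wake the peer."""
    if send_direct(message):
        return "delivered"
    # Only metadata leaves the device here, never the message body.
    notify_wake_service({"recipient": message["to"], "pending": True})
    return "queued"

# Stubbed transports, no real network involved:
calls = []
result = deliver(
    {"to": "alice", "body": "hi"},
    send_direct=lambda m: False,        # peer is suspended, ping fails
    notify_wake_service=calls.append,
)
print(result)  # queued
```

Injecting the two transports as callables keeps the privacy trade-off visible: the wake-up service only ever sees who has something pending, never the content.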

Someone can’t accept the truth and tries to shut me up by reporting valid content. Gratz. I mean, crying about “bullshit products” without arguments is OK, but saying “don’t blame devs” is not? :sweat_smile:

@Ick the second post you mentioned was also reviewed. Several posts in this thread were reported.

If you check your phone often for messages, you could disable WLAN or mobile suspend (at the cost of higher power usage), or just wait the few seconds it often takes to re-establish a connection. Then you will receive push notifications.

And you have to keep in mind that Apple and Google use a third-party cloud service for every push notification. Signal implemented a self-hosted one (backed by Amazon’s cloud) to get this done. If you like, you could host that kind of service yourself, from your home network or Nextcloud. It is, after all, a service for when you’re on the road, depending on how often you get messages from your dedicated offline home systems. Messages like that could also be filtered or forwarded via LoRaWAN or SMS, or by friends and family if it’s important to you, or through a third-party signal or a smartwatch.

Me too, and you are right about some optimizations for power usage. And I am sure we will see a lot of benefit in the future as more and more Linux phones/devices emerge. But right now it is like it is, and I think it’s not as bad as it got framed.

Android and Apple phones are highly optimized, partly by extracting as much private information as possible through cloud externalization. Librem could not compete with that unless they monetize and offer such a cloud as a service. So I hope they either offer one fair service or publish self-hosting documentation too. You could also add some AI to enhance pictures taken by your camera, but you will only get the same quality Apple achieves on the same hardware if you train an AI with many pictures and optimizations vetted by paid engineers. So I am somewhat impressed by the quality we already see on today’s hardware… but you know the privacy price for it was high.

I do not know a perfect solution. Lose your privacy, or keep some. Or try to retrace every step and ask why it was important to speed up development in the right direction toward an astonishing result.

As for your second point, I would like to add a second small calculation to the PIN: two or three extra digits that are easy to add or calculate, to secure the PIN. For example, the time you enter it, the entry time modulo something, or half the day count of the month, etc., plus a secure third number.
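If I read the idea right, the entered PIN would be the static PIN plus a couple of digits derived from the entry time. A toy sketch of one such rule (the formula is my own illustration, not a vetted security scheme):

```python
from datetime import datetime

def expected_pin(base_pin: str, now: datetime) -> str:
    """Append two digits derived from the entry time to the static PIN.
    Toy example only; a real scheme would need proper security review."""
    suffix = (now.hour + now.minute) % 100
    return base_pin + f"{suffix:02d}"

# At 14:37 the user would type the base PIN followed by 14 + 37 = 51:
print(expected_pin("1234", datetime(2023, 5, 1, 14, 37)))  # 123451
```

A shoulder-surfer who sees one entry can’t simply replay it later, since the suffix changes with the clock; whether that meaningfully hardens the PIN is exactly the kind of question such a scheme would need reviewed.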

If I must choose between comfort and privacy, I will most certainly choose privacy. Comfort is, in my book, always less important. I prefer to be offline as much as possible.
So no cloud service or AI inside my L5, please.

4 Likes

AI ≠ privacy issue. A fully trained AI *can* be used without sharing data or using the internet.

6 Likes

I disagree :blush:.
AI needs collected data for training. Where does that data come from? Also, to improve, it needs more data, so one needs to keep collecting. The next issue is that AI tends to make discriminating decisions; there are plenty of examples on the web. Another privacy issue is that I have no say over the collected data, even though it’s partially about me. There is more privacy involved than you may think. Also, the outcome of an AI decision can and will be used in favor of the happy few, not in favor of you and me.

2 Likes

You’re thinking about a special class of AI use cases, and for those you’re right: voice analytics, text-to-speech with “stolen” voices, face analytics, etc.

But you forget about all the other useful AIs that can optimize the performance of specific computer tasks, for example character or cloth animation in 3D computer games. The company has to train it once and doesn’t need to train it further once it’s shipped. The training data comes from the dev environment or, in the “worst case”, motion capture of professionals (which can be strictly restricted), not from users’ personal data. The AI runs in the background of the game but doesn’t need to share data. It speeds up animations while also producing better-looking results, with no complex physics calculations anymore. And 1% failures are no problem at all (whereas 1% is a huge amount for AI cameras in public surveillance), because a not-quite-perfect animation won’t even be noticed.

It’s all about the use case and if the app can be trusted (or sandboxed without internet) or not.

5 Likes

As you are too :wink:
Again, to me, AI is about collecting the behaviour of people, or the way people use software, devices, etc.
As such, it will always come down to collecting data about persons.

2 Likes

Nope - I’m thinking about every possible AI. As I said above:

That does not mean AI is never a privacy issue, but also that this behavior is not automatic. I wrote the word “can” in italics to make it clear that both ways are possible.

You strictly tie AI to user data for training. That happens because other use cases of machine learning are not in the focus of public discussion. There is no need to talk about AIs that don’t affect humans and therefore have no ethical impact. Some use cases don’t even require millions or billions of training samples and training steps; sometimes it takes just one set of training data (provided by the devs themselves) and 2000 training steps to create a perfect AI for a specific use case. AI is not as evil as Google. AI is more of a tool that can be used in a good way or a bad way.

It’s important to understand this, because it will be part of the future FOSS world that’s still privacy respecting. I’m fighting against every discriminating and data stealing AI (and the companies behind), but still saying what I wrote above.
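To illustrate the small-data case: a toy model fitted entirely offline with plain gradient descent on a dataset the developer ships themselves. The task, learning rate, and step count are my own example numbers, not from any particular product:

```python
# Toy offline "AI": learn y = 2x + 1 from a tiny dev-provided dataset.
# No user data, no network; 2000 plain gradient-descent steps.
data = [(x, 2 * x + 1) for x in range(-5, 6)]

w, b = 0.0, 0.0          # model parameters
lr = 0.01                # learning rate
for _ in range(2000):
    grad_w = grad_b = 0.0
    for x, y in data:    # mean-squared-error gradients
        err = (w * x + b) - y
        grad_w += 2 * err * x / len(data)
        grad_b += 2 * err / len(data)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 3), round(b, 3))  # converges to 2.0 1.0
```

Once trained, the learned parameters can ship with the software and run inference with no connection at all, which is the point being made above.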

6 Likes

Thanks for the reply.
I’m still not convinced, but we live (somehow) in a free world where it is OK to have different points of view.
I do truly hope you are wrong and that AI will never be used in Puri.sm software/hardware. If that ever happens, I will no longer use their products.

1 Like

You could also just get rid of the piece of software you don’t want to use. That’s the good thing about Purism’s products: there is no need to use PureOS on the Librem 5 (for example). I don’t think you’d be doing yourself a favor, but you’re right: as free people we can do whatever we believe is best for us in this context.

2 Likes

As a regular viewer of Rossman’s channel… after consideration and the more I think about it, I can’t take his side.

  • Use of the word “scam” is possibly one of the most embarrassing and ridiculous things I’ve ever seen come from his channel, and I have lost some trust in the quality of his research since then. Here I am, holding the damn physical device, reading or listening to people claiming the company is scamming people out of their money. Huh? A scammer would have been GONE and out of sight LONG ago. Using that term is possibly even libel (but the last thing I want to see is lawyers getting involved here; enough damage is done).

  • The delays have been obnoxious, but we were hit by a black swan event called “covid”, which severely impacted hardware availability. I remember the public notices from Purism. I believe there were still deadline problems regardless, but it’s unfair to view this without a covid-lockdown lens. That event is responsible for many of the tensions around refunds.

  • It’s not as black and white as he seemed to suggest. IIRC, LR presented a straw man argument when he compared issuing a refund to one of his customers to this situation. One is a minor financial loss, the other is an extinction level event for your company. I wonder how he would really respond given the same choice.

If the choice was between total collapse (devices never ship, with all the consequences that follow) and effectively cancelling/delaying refunds so everyone gets what they agreed to buy… put it this way: I’m glad they’re still shipping devices out.

It’s more unfair to punish those who were patient (we all knew there would be a wait) in order to refund those who panicked early and requested their money back. And on top of that, it would add yet another failed Linux phone project to the long list of failures. But I guess we aren’t allowed to factor “greater good” considerations like that into our decisions at all, because of Louis’ strange “assume you’re the bad guy” logic. :roll_eyes:

Denying or delaying refunds to people who were technically entitled to one is not fair either, but they did click that order button. Other than timing, what changed? The product is late, but it will eventually go out. The current path is the least bad of the available choices. Hell, you can probably resell this handheld computer for a profit. You haven’t been “scammed”; using that term is absurd.

Consumers place too much emphasis on “the customer is always right” and not enough on “buyer beware”. There’s risk associated with any pre-order. If the customer had a deadline or a financial need to have a phone by a certain date, that’s on them. They signed up to get something at “we don’t really know when”, but they shouldn’t have any accountability when it takes longer than they imagined? You don’t back out of a project after your contractor has spent thousands buying the supplies.

In hindsight, maybe the refund policy should’ve been explicitly “no refunds” from day one, or after the project had progressed beyond a certain point.

6 Likes

Apparently, based on the responses from many in this thread, elsewhere on the community forums, and on the subreddit, it is far easier to complain, spread hate, and start flame wars than to own up to one’s purchasing expectations/decisions with Purism. This sense of entitlement was largely why I only lurked the Purism community forums for over 4 years; I have no tolerance for such behavior.

I have no faith in consumers choosing to be accountable for their purchasing decisions anyway. If the pandemic has proven anything, it is that people are unwilling to change their behavior, decisions, and priorities, even at the expense of everything else; wars, floods, and wildfires included.

5 Likes

You have a valid point, Ick, but it’s about the main focus of the technology. We could have distributed, privacy-enhanced, decentralized software if privacy were respected by the algorithms.

But in reality it is not, because, as Shoshana Zuboff names it, it is a behavioral surplus for these companies. You pay an additional fee with your data, and you increase the quality, fitness, and attractiveness of a service that thereby works faster and better, because you have more group-collected data to cover the diversity of users and to fine-tune on individual mistakes in language and spelling (for example).

So we see AI is a privacy issue. All the big LLaMA-style models got trained on private social media postings and freely scraped internet data. And it IS a security issue if someone can reverse-engineer the training data back into the light. The same applies if your phone’s Alexa collects private information from your behavior and you build up a relationship with it. Yes, it’s fine if you do that offline with a free model, but if someone collects that data and steals it from an unpatched device or your backup, or if, as on 99.99% of smartphones, it is stored in an online cloud backup, then it’s not “AI ≠ privacy issue”. Sorry to say this.

Just one sentence to end my too-long post: we interact with these devices, and in most cases the trained AI was not trained by ourselves, so we have to trust the lessons it learned before.

Interesting that you disagree with my first post, but gave a heart (some time before) to my later explanation post. It’s like you contradict yourself. :wink:

And the explanation post already answered everything you spoke about on your post.

1 Like

Wow and yes… but. I am not sure we can reach this (now or in the near future, though it is a good goal), because you can hardly avoid using a computer that steals your privacy daily, or use only small, auditable free and open-source software. And I am not sure we can have a future world with offline AI results. In the best case, like right now, we end up with filters, ad blockers, and teaching children mathematics, scripting, and how to use computers. They are growing up in a world with smartphones, the internet, Google, Facebook, and Amazon. You can try to keep AI away from the network, but then it has no access to the internet or to the information your devices leak. These days it is usual to expose your children to systems and applications that try to learn from them and their behavior.

Right now it’s hard, because optimized camera drivers for phones are still tied to specific software and devices.

Edit: Ick, it’s because every new day I see and have to face the new reality we already live in. I suggested as much with the vacuum cleaners, but it’s hard to stay optimistic when your fears are later confirmed by someone who reverse-engineers the code and hardware.
You are right that AI will have some big good points too, and I hope our open-source community can keep up. All I see is that it takes much more time but in the end yields the good/better product/software. It’s just hard to watch so many folks every day not using these actually better, already existing alternatives… while every piece of news about some commercial smartphone pretends that push notifications and apps (with data-leaking SDKs) are just wow.

It’s no goal. It’s like two sides of a coin. You can use AI either way, but how AI is used depends more on the needed results than on the desired training data. For some results it wouldn’t even make sense to steal data.

(1) AIs that need huge amounts of data (for example, analysis of biometrics) need to collect huge amounts of data. (2) AIs that need little data, or data from an IT environment where user input is irrelevant, can be produced directly without external data collection.

It makes a huge difference whether an AI has the first or the second kind of needs. For the first kind you always need external data. You may be able to collect it from people who are willing to share (donate) it, and thereby design the system in a “more user-friendly” way, etc. But you will also always have trouble with discrimination if you don’t balance the data input well enough. None of this affects AIs with the second kind of needs.

And I want to point out that I said “not every AI is a privacy issue”. I was mainly speaking about AIs with the second kind of needs, where collecting data from users would make no sense at all (not even for big tech). AIs with the first kind of needs can be built in a much better way than the big tech companies are doing it, but the issues are never fixed completely, and here I totally understand your and shopping4purism’s concerns. But as I said, that’s just one side of the coin.

1 Like

As LR is neither A nor I, and your discussion about AI is very interesting, I would suggest you create a new topic. Thanks to all of you for sharing your ideas, thoughts, and opinions.

1 Like