A new security risk for enterprises, which will be very difficult for them to address:
Ok, I will be cynical and say my true feelings: Schadenfreude…
A new security risk for enterprises, which will be very difficult for them to address:
Ok, I will be cynical and say my true feelings: Schadenfreude…
I guess this is the other side of the automation security nightmare as compared with chatbots - where a ‘customer’ of a company might trick or manipulate a chatbot into compromising the security of the company, at least to the extent of e.g. releasing sensitive information inappropriately.
They cannot recognize visual warning signs like suspicious URLs, excessive permission requests, or unusual website designs that typically alert employees of a malicious site.
Don’t know about the employees where you work, but where I work I would definitely not assume that my colleagues would be alerted by any of that.
(We have to complete approximately weekly security training modules and after completing each module it shows the employee what percentage of employees went for the right answer and for each of the wrong answers.)
Or, as we write in English, schadenfreude.
OK, I hope I’m not the only one wondering . . . What is a “Browser AI agent?”
Maybe read the article? It explains this quite well and why this is a big problem.
I find that - generally speaking - awareness has greatly increased through mandatory security training.
The enemy now seems to be stress: constant notifications, tasks hopping, pressure from colleagues or superiors; people under stress make more mistakes or “cut corners”.
Of course I read it . . . but thanks for the snide comment.
Browser AI Agents are software applications that act on behalf of users to access and interact with web content. Users can instruct these agents to automate browser-based tasks such as flight bookings, scheduling meetings, sending emails, and even simple research tasks.
It explains what it does, but not what it is.
“A saw is a tool that cuts materials like wood or metal into two pieces” could also be a water jet cutter or an axe if you have no idea about the word “saw”.
There are a few different ones that have been integrated into browsers but also those that can use the browser. The obvious one is Microsoft’s Edge with one of the Copilot variations and Copilot as a more integrated service in the Office360. I’d extend the “browsers AI” definition to maybe “general office task assistant” as similar problems arise for instance when these general versions of Copilot (ans others) are used for instance to sort or correspond to email etc. (AI agents wrong ~70% of time: Carnegie Mellon study • The Register). “Browser AI” is a good example though, as many interfaces to data and services are used via browsers, so AIs (like users) have easy access - unlike separate apps, that can set better controls. On the other hand, part of the point of the article is that these assistants are added to software and they appear in updates often without any notification to organization’s IT department and without any centralized tools to log them, control them or set limits to what data they can use.
For a different example and comparison, Firefox has an optional assistant too, Orbit, but it is much more limited, as it’s limited only to the content of the page (summarize, ask questions about it). It’s not without some potential of new risks opening up, but at least it’s not forced on user and it’s more strictly limited. Unfortunately for organizations and their IT security teams, there’s no centralized control, if I’m not mistaken. It’s also not as usable as the one mentioned in the article. [Edit to add: Got notification that Orbit is no longer supported and will be removed, so one less thing to worry about I guess.]