DeepSeek: running and privacy

If you’re curious, you may lose some of your privacy…
DeepSeek’s (that new Chinese AI model provider) privacy statement openly admits to slurping up all data, including keystroke patterns and rhythms: What DeepSeek's privacy policy means for your personal data | Mashable

5 Likes

They are not the only website doing that, but yes…

5 Likes

Nah… For that one, I’m not. Lol!

3 Likes

I would absolutely not run DeepSeek non-locally (e.g. the app from the Play Store on an Android phone).

I did, however, run some DeepSeek models locally using ollama just yesterday. My machine doesn’t have a discrete GPU or an NPU/TPU, and it only has 16 GB of RAM (its maximum), so I’m only running the small versions, but they do capture the general feel of the models.

The 1.3B (1.3 billion parameter) “deepseek-coder” model is pretty poor. I’m thinking of trying the 6.7B model, but that will probably slurp up all of my RAM and a third of my swap (at which point I usually get hit with the OOM killer, which often leaves the system unstable).
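(Rough back-of-the-envelope, using nothing beyond the parameter counts: weight memory is roughly bytes-per-parameter × parameter count. At fp16 that is about 2 bytes per parameter, so 6.7B parameters is ≈ 13 GB before the context cache and runtime overhead; at the 4-bit quantizations ollama typically serves it is about 0.5 bytes per parameter, so ≈ 3.4 GB. Which end you land on depends on which tag you pull.)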

I did run the 7B (7 billion parameter) “deepseek-r1” model (DeepSeek-R1-Distill-Qwen-7B). It’s very nice: better than the typical llama model (llama 3.2 3B), but then it does have more than twice the parameters. It also shows its reasoning along with the answer, which gives one insight into whether it is hallucinating.
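For anyone wanting to try the same thing, these are the ollama library tags that I believe match the models above (names are from the ollama model library; worth double-checking before pulling):

# the small coding model (1.3 billion parameters)
ollama run deepseek-coder:1.3b

# the distilled R1 reasoning model (DeepSeek-R1-Distill-Qwen-7B)
ollama run deepseek-r1:7b

ollama downloads the model on first run and then drops you into an interactive chat; /bye exits.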

5 Likes

Very good, that’s the safer way! The policy was for their site, and the models are probably a bit more controllable in your environment (but they’re still a black box).

2 Likes

15 posts were split to a new topic: Hardware for running AI models

It may not be completely transparent, but IMO it’s not really a black box.

The “model” is just a file, not really an executable: it describes the structure and provides the parameters. It is distributed under the MIT license. I understand that, like a JPEG file, it could be crafted to exploit a bug in ollama … (although I think these “trimmed down” models come from Hugging Face and not DeepSeek directly).
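One handy consequence is that you can ask ollama to print the metadata it reads from that file (architecture, parameter count, quantization, license) without ever running the model:

# inspect a pulled model’s metadata and license
ollama show deepseek-r1:7b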

The program that runs the model is ollama, which is free software under the MIT license. I haven’t looked carefully at the code, so I compile it myself and run it in an lxc instance (using lxd); a rough sketch of that setup follows the listing below.

lxc list
+------------+---------+----------------------+-----------------------------------------------+-----------+-----------+
|    NAME    |  STATE  |         IPV4         |                     IPV6                      |   TYPE    | SNAPSHOTS |
+------------+---------+----------------------+-----------------------------------------------+-----------+-----------+
| llama2404  | RUNNING | 10.188.15.210 (eth0) | fd42:6334:8ab2:529a:216:3eff:fe76:2e9d (eth0) | CONTAINER | 0         |
+------------+---------+----------------------+-----------------------------------------------+-----------+-----------+
| ubuntu2204 | STOPPED |                      |                                               | CONTAINER | 0         |
+------------+---------+----------------------+-----------------------------------------------+-----------+-----------+
| ubuntu2404 | STOPPED |                      |                                               | CONTAINER | 0         |
+------------+---------+----------------------+-----------------------------------------------+-----------+-----------+
| whisper    | STOPPED |                      |                                               | CONTAINER | 0         |
+------------+---------+----------------------+-----------------------------------------------+-----------+-----------+
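For anyone wanting to reproduce that containment, a minimal sketch, assuming lxd is already initialised (the container name is illustrative, and ollama’s exact build steps change between versions; its developer docs have the current ones):

# on the host: create and enter an Ubuntu 24.04 container
lxc launch ubuntu:24.04 llama2404
lxc exec llama2404 -- bash

# inside the container: build ollama from source
# (older versions needed `go generate ./...` before building)
apt install -y golang-go git
git clone https://github.com/ollama/ollama.git
cd ollama
go build .

# start the server, then pull and chat with a model
./ollama serve &
./ollama run deepseek-r1:7b

The container gets its own filesystem and network namespace, so even a malicious model file that managed to exploit a bug in ollama would still have to break out of the container to reach the host.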

2 Likes