Synology NAS deleted files

I've been re-encoding my .wtv files to .mkv to play over my home network. Once completed, I copy the .mkv over to my Synology DS418 NAS via its web browser home page. Everything was going well until today: nearly all of the movies have been deleted, and there is no recycle bin for the movies folder, so they've gone. I've spent over a week transcoding/copying the files, but 2 days ago I mapped the video folder of my NAS to PureOS folders, and I'm thinking it is this action that has gotten the NAS confused, because all week it's been OK. Are there any known issues mapping folders in PureOS?

The NAS has ext4 as the file system and so does PureOS. None of my other folders have lost anything, only the video folder, which is why I'm thinking it's a PureOS bug.

Thoughts?

Mapped using what protocol? That is, how are the folders exposed by the NAS to a client?

Typical NAS devices often give clients the choice of a range of protocols.

I am a little confused by your comment that you copied files over via your web browser - but I don't claim any knowledge of your specific NAS.

That is typically the case for remote file systems.

Probably no one knows as much as you do about what your setup is and what you have been doing.

I wouldn't necessarily assume a PureOS bug, for two reasons:

  • a widespread PureOS bug that completely wipes out lots of files would likely have been noticed :slight_smile:
  • it is also possible that a difference in the client (in this case PureOS) has exposed a latent bug in the NAS, i.e. the bug, if any, is in functionality that other clients aren't using - or that you have yourself not used previously

What RAID setup, if any, do you have? In other words, have you eliminated the possibility of the failure of an individual disk? Do you have visibility of the disk health for each disk? How many disks? What kind of disks?
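For reference, if you can get a shell on the NAS (or attach a disk to a Linux box), smartmontools can report per-disk health. A sketch only - the device names, and whether smartctl is even installed on the NAS, are assumptions:

sudo smartctl -H /dev/sda    # overall SMART health verdict for the first disk
sudo smartctl -A /dev/sda    # full attribute table (reallocated sectors, pending sectors, etc.)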

Does your NAS give you the possibility of verifying the file system integrity on the NAS?

What other client devices are you using? (presumably with greater success)

If PureOS is mapping the folder using a protocol that supports read-only access, and if read-only access is sufficient, you may build confidence in PureOS by mounting it read-only. (If read-only access is not available then you may be able to get a similar result using "permissions", if the NAS offers that.)
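For what it's worth, at the Samba level a read-only share is just one flag in the share definition. A sketch only - DSM normally manages this through its GUI, and the share path here is a guess:

[video]
    path = /volume1/video
    read only = yes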

Are you game to attempt to reproduce this scenario? (perhaps doing only a few movies)

Hi Kieran, the NAS is basically a Linux computer without a display connection; you access it via a web browser using its IP address, where you get a fully fledged desktop environment. Originally I would open its file explorer in a web browser window and drag files from PureOS to the browser, and it copies them no problem, but it does use a fair amount of resources this way. All the drives are in good health; the unit is 8 months old. I've got 4 WD Red drives with 1-drive fault tolerance. I've no idea how to do a chkdsk-type command on it, if it's possible; I had a look yesterday but I can't find anything.

I will try to reproduce the bug this afternoon. It's been great up to this point.

I have it set up to do file sharing via SMB 2.0, as 1 and 3 are insecure. I tried NFS but I couldn't find it on my network.

In that case, see whether you can make the share available read-only via SMB.

If a client is able to wipe out all the files from a share that is only available read-only then that would be a fairly serious error on the server. (If the client is correctly limited to read-only access then you can be sure that the client is not responsible for any loss of files.)

In any case, on the principle of "need to know", if the client only needs read access to the videos (once converted) then that is all the access that any client should have.
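You can also enforce this from the client side when mounting, e.g. with a CIFS mount - the IP, share name and mount point here are placeholders:

sudo mount -t cifs //192.168.1.100/video /mnt/video -o ro,vers=2.0,username=youruser

The ro option makes the client's mount read-only; the real protection is still the permissions on the server, but ro at least rules the client out as a suspect.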

My gut feeling is it wasn't deleted by the client; something in the NAS got confused and overwrote the files, because I did them in alphabetical batches and they've been deleted randomly. So even if it was my error I'd know it by what was missing, i.e. all the 'M' movies. It's lost over half a terabyte.

Without knowing the NAS too well or how it's configured, I'd be curious if you have or can enable snapshots (I know ZFS for instance can handle this, and it is my, potentially flawed, understanding that Synology can be set up with ZFS).

Separate from that, I'd be curious to review the copy process for any potential issues there. I've had code that loops through without resetting all the variables at the start of each pass of the loop produce weird results.

For instance it may be that 2 folders have similar names with different content and one is copying but not the other or something similar.

I doubt it's the OS or the NAS itself, but that's just my personal gut feeling.

I would definitely go looking for that chkdsk-equivalent (fsck).

Do you have shell access to the NAS?

So I have been playing around trying to get it to repeat its previous behaviour, and so far I can't get it to do it.

I tried to copy the contents of what it lost from my external HDD, but many times it wouldn't copy. I tried restarting the NAS; still wouldn't copy. So I restarted PureOS, and I was able to copy some folders but not all of the ones I selected - see photo attached:

Here you can see I selected around 13 folders but only 6 got dragged across. To recap, I'm dragging and dropping from 'Files' into a Firefox tab for the NAS, into the Recorded TV folder.
Do you think this is a bug in PureOS? My gut feeling is it is.

If I have shell access I don't know how to find it; I've been through the control panel and cannot see it, although I can see a terminal option:

but I don't know what to do with it.

You should keep an open mind until you have evidence. At the very least it could be a Firefox problem. Since we know zero about how the web-based interface works, we don't have much to go on.

For fault isolation, Live Boot Ubuntu (or some other distro) on the client computer and copy the exact same files over using the same interface. If it fails then it's not a PureOS problem.

Obviously we are chasing two different problems here: 1. A bunch of files disappeared. 2. Failing to copy over (but you don't say what "wouldn't copy" means - error message? hangs? ends with incomplete copy?)

Based on the second screen image, tick "Enable SSH service", then bring up a terminal on the client computer, then type the following command on the client computer

ssh user@192.168.1.100

where user needs to be replaced with the name of an account on the NAS that is in the administrators group.

If that works at all, you are then at the shell prompt on the NAS. Bear in mind that the purpose of having a shell was to do fsck in order to investigate the first problem. However, you don't want to do fsck on a mounted file system, and it may be fairly messy to attempt to unmount the file systems other than the root file system (you would never do the root file system this way) while the NAS is operating. So it would be very much preferable to find out how to ask the NAS to do this itself.

Many Linux systems will periodically do fsck automatically at reboot. So just rebooting the NAS a few times may get it to check the file system(s).
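If you do end up at a shell on the NAS, the general shape of a manual check would be something like the following. The device and volume names are assumptions - Synology typically puts the data volume on an md RAID device mounted at /volume1, but verify with df or mount first:

df -h /volume1                # identify the device backing the data volume
sudo umount /volume1          # will fail while services still hold it open
sudo fsck.ext4 -nf /dev/md2   # -n = read-only check; -f = force even if marked clean

Do the read-only pass (-n) first; only consider actual repairs once you have copies of anything recoverable.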

Another quick check would be to try to copy one of the folders that got left out, by itself, and see whether it does or doesn't. You could also use the SSH service to do an SFTP transfer via the command line, which might provide an error message if the transfer fails.
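For example, something like this - the destination path is a guess at a typical Synology layout:

sftp user@192.168.1.100
sftp> put -r "Some Movie Folder" "/volume1/video/Recorded TV"

Unlike a browser drag-and-drop, a failed transfer here prints an error rather than failing silently.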


I wondered about that too. The presence of SSH does not necessarily imply that the SFTP subsystem is present / enabled. However maybe it is hiding in those "Advanced Settings". Maybe @ItsTheSmell can sniff that out.

Not that this is True Evidence, but I have seen the correlation more than once and successfully used SFTP by enabling SSH (I forget what OS that was, though). Still worth a shot, I think.

Yeah, defs. SFTP is a subsystem within SSH. Without SSH you probably don't ever have SFTP. However given that this is a NAS running a who-knows-what-exactly distro, we don't really know what to expect in terms of packages and default config.

On my computer in /etc/ssh/sshd_config you will find the line

Subsystem sftp /usr/lib/openssh/sftp-server

and I believe that is the default on my distro (i.e. I did not change that). However the comment above it implies that the SSH server by default does not enable any subsystems - so that line needs to be there if you want SFTP.
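One way to check the effective setting (assuming the NAS's sshd supports it) is to dump the running server configuration:

sudo sshd -T | grep -i subsystem

If that prints a Subsystem sftp line, SFTP should work over the existing SSH service.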

That, or perhaps it's just a configuration so that the user can specify an sftp server that isn't openssh?

Not to be argumentative, I'm just conjecturing aloud.

What I don't really understand is that you specify that you are using SMB, but then you state that you copy your files using the browser interface. I also own a Synology NAS, but I would not recommend copying large sets of files this way, as you are basically handing off copying to your web browser instead of using a fairly stable protocol like SMB/NFS. As stated in the Synology docs: web browsers have different upload capacities. So it's fairly hard to tell what might cause this issue, but it might be an upload size/count limit of the browser.


I'm curious whether the files were "deleted" or whether they were never copied to begin with and you didn't notice/weren't notified - so it seemed they succeeded and were then deleted, but in reality they were never copied over.

As others have stated, trying a different copy method than the browser would be a solid next step, as would USB booting a live distro to test if it is the OS (though I'd use scp to test command-line file transfer over SSH before USB booting, personally).
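A minimal scp test along those lines - the destination path is again a guess at the usual Synology layout:

scp -r "Some Movie Folder" user@192.168.1.100:/volume1/video/

The -r copies the folder recursively, and any failure comes back as a visible error message instead of a silent drop.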

Could it be that the storage medium is full or very close to it? I've run into this issue recently and all kinds of weird stuff was happening… browser extensions not working any more, etc.

After I'd freed stuff up it magically just worked again…
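Easy to check on both ends if you have a shell - the NAS volume path is an assumption:

df -h /volume1    # on the NAS: free space on the data volume
df -h ~           # on the PureOS client: free space on your home file system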

Most browsers with HTML5 upload capabilities have a hard cap of 2-4 GB, so I would first just copy the files over using a normally mounted SMB share instead of trying to use a browser upload.

Most likely there is no real issue, besides unwanted side effects from using a browser. I have been using Synology and various Debian-based distributions for a long time and never had any real issues, besides some Nautilus/gvfs annoyance.
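On a GNOME-based client like PureOS you can mount the share properly from the command line too - the share name here is assumed:

gio mount smb://192.168.1.100/video

After that it shows up in Files (and under /run/user/$UID/gvfs/), and copies go over SMB itself rather than through the browser.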

I've only had the NAS for a few months so I'm still learning. I know little about how network shares work, other than that SMB 1.0 is dangerous. I disabled everything I don't need and try to run the minimum that's needed for it to function. The files that have gone missing have definitely been lost: I could see the size of used space against free space, and I was up to T because I was doing it in alphabetical chunks. I had over 300 films on Saturday morning, but by Monday it was down to ~45 and the volume data was reduced by half a terabyte, so I know it's gone and not just been moved. Up to this stage it's been running well. I got pissed on Saturday, but my gf swears I didn't go near it other than through my Oppo UDP-203 UHD network player, and this has no capability to delete files that I'm aware of. When I send a file via Files/SMB, what takes charge of writing the data, the client PC or the built-in NAS OS? Synology have looked at the logs, and all they can see that's untoward is that the NTP protocol is disabled and I have network errors, but surely these wouldn't lose my data?

I can't think of a scenario where NTP would cause data loss; time being far enough out of sync can cause plenty of other problems, though. I've not heard any recommendations to disable NTP and I would be skeptical of any.

As far as the player's ability to delete files goes, I don't know that player, but I would have it use an account that has read-only access to the NAS, just to be on the safe side.