I recently implemented a backup workflow for myself. I use restic heavily for desktop backups and for a full system backup of my local server. It works amazingly well: I always have a versioned backup without a lot of redundant data, and it's fast, encrypted, and compressed.
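(For anyone unfamiliar, a minimal restic workflow along these lines might look like the following sketch; the repository path, excludes, and retention policy are illustrative, not my exact setup.)

```shell
restic init --repo /mnt/backup/restic-repo            # one-time setup; repos are encrypted by default
restic -r /mnt/backup/restic-repo backup ~ --exclude ~/.cache
restic -r /mnt/backup/restic-repo forget --prune \
    --keep-daily 7 --keep-weekly 4 --keep-monthly 6   # versioned, deduplicated history
```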

But I wondered: how do you all do your backups? What software do you use? How often do you run them, and what does your workflow look like?

  • Vintor@lemm.ee · 17 days ago

    I’ve found that the easiest and most effective way to back up is with an rsync cron job. It’s super easy to set up (I had no prior experience with either rsync or cron and it took me 10 minutes) and to configure. The only drawback is that it doesn’t create differential backups, but the full task takes less than a minute every day, so I don’t consider that a problem. Do note that I only back up my home folder, not the full system.

    For reference, this is the full line I use: rsync -rau --delete --exclude-from='/home/<myusername>/.rsync-exclude' /home/<myusername> /mnt/Data/Safety/rsync-myhome

    “.rsync-exclude” is a file listing all the files and directories I don’t want to back up, such as temp or cache folders.
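    The cron half might be just one crontab line (the schedule here is an assumption; the command mirrors the one above):

    ```
    # m h dom mon dow  command — run the home backup daily at 03:00
    0 3 * * * rsync -rau --delete --exclude-from="$HOME/.rsync-exclude" "$HOME" /mnt/Data/Safety/rsync-myhome
    ```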

    (Edit: two stupid errors.)

    • dihutenosa@lemm.ee · 17 days ago

      Rsync can do incremental backups with a command-line switch and some symlink jugglery. I’m using it to back up my self-hosted stuff.

    • everett@lemmy.ml · 17 days ago

      only drawback is that it doesn’t create differential backups

      This is a big drawback because even if you don’t need to keep old versions of files, you could be replicating silent disk corruption to your backup.

      • suicidaleggroll@lemm.ee · 17 days ago

        It’s not a drawback, because rsync has supported incremental versioned backups for over a decade; you just have to use the --link-dest flag and add a couple of lines of scripting around it for management.
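        A minimal sketch of that scheme, run here against a throwaway temp tree (the `snapshot` helper and paths are illustrative; in real use SRC/DEST would be e.g. /home and a backup mount):

        ```shell
        #!/bin/sh
        set -e
        SRC=$(mktemp -d)
        DEST=$(mktemp -d)
        echo "unchanged file" > "$SRC/notes.txt"

        snapshot() {
            # Hard-link unchanged files against the previous snapshot ("latest"),
            # so every dated directory looks like a full backup but only changed
            # files consume new space.
            linkdest=""
            [ -e "$DEST/latest" ] && linkdest="--link-dest=$DEST/latest"
            rsync -a --delete $linkdest "$SRC/" "$DEST/$1/"
            ln -sfn "$DEST/$1" "$DEST/latest"
        }

        snapshot day1    # in real use: snapshot "$(date +%Y-%m-%d)"
        snapshot day2
        stat -c %h "$DEST/day2/notes.txt"    # unchanged file: link count 2
        ```

        Restoring any day is just a plain copy out of that day's directory, and deleting an old snapshot frees only the space unique to it.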

  • zeca@lemmy.eco.br · 17 days ago

    I do backups of my home folder with Vorta, which uses Borg on the backend. I never tried restic, but Borg is the first incremental backup utility I tried that doesn’t increase the backup size when I move or rename a file. I was using Back In Time before to back up 500gb onto a 750gb drive, and if I moved 300gb to a different folder, it would try to copy those 300gb again onto the backup drive and fail for lack of storage, while Borg handles it beautifully.
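    For reference, a hypothetical Borg session showing why a moved file costs almost nothing (the repo path and excludes here are made up):

    ```shell
    # Borg deduplicates on content chunks, not file paths, so a moved or
    # renamed file is recognized as data the repository already has.
    borg init --encryption=repokey /mnt/backup/borg-repo
    borg create --stats --exclude ~/.cache \
        /mnt/backup/borg-repo::home-{now} ~
    # After `mv ~/videos ~/archive/videos`, the next `borg create` writes
    # only new metadata; the file contents are already in the repository.
    ```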

    As an offsite solution, I use Syncthing to mirror my files to a PC at my father’s house that is turned on just once in a while, to save power and disk longevity.

  • suicidaleggroll@lemm.ee · 17 days ago

    My KVM hosts use “virsh backup-begin” to make full backups nightly.

    All machines, including the KVM hosts and laptops, use rsync with --link-dest to create daily incremental versioned backups on my main backup server.

    The main backup server pushes client-side-encrypted backups, including the latest daily snapshot for every system, to rsync.net via Borg.

    I also have two DASs, each holding two 22TB encrypted drives. One is plugged into the backup server while the other sits powered off in a drawer in my desk at work. The main backup server pushes all backups to the attached DAS weekly, and I swap the two DASs ~monthly, so the one in my desk at work is never more than a month or so out of date.
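    The nightly VM step might look roughly like this (the loop and domain discovery are assumptions; with no backup XML given, virsh backup-begin uses the domain's default full-backup target):

    ```shell
    # Illustrative nightly loop over running KVM guests.
    for vm in $(virsh list --name); do
        virsh backup-begin "$vm"
    done
    ```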

  • melfie@lemmings.world · 17 days ago

    I currently use rclone with encryption to iDrive e2. I’m considering switching to Backrest, though.

    I originally tried Backblaze B2, but exceeded the API quotas of their free tier; iDrive has “free” API calls, so I recently bought a year’s worth. I still have a two-year Proton subscription and tried rclone with Proton Drive, but it was too slow.
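    A sketch of that layering, assuming remotes named `e2` (the iDrive e2 backend) and `secret` (an rclone crypt remote wrapping it), both created beforehand via `rclone config`:

    ```shell
    # Everything synced through "secret:" is encrypted client-side
    # (names and contents) and stored in the underlying e2 bucket.
    rclone sync ~/documents secret:documents --progress
    # Unencrypted access to the backend itself would be e.g.:
    # rclone ls e2:my-bucket
    ```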

  • MentalEdge@sopuli.xyz · 17 days ago

    I recently switched to Kopia for my offsite backup solution.

    It’s apparently one of the faster options, and it can be set up so that the files of the differential backups are handled by a repository server on the offsite end, so file management doesn’t need to happen over the network at a snail’s pace.

    The result is a way to maintain frequent full backups of my Nextcloud instance with almost no downtime.

    Nextcloud only goes into maintenance mode for the duration of a Postgres database dump, after which the actual file system backup runs from a temporary btrfs snapshot that captures the filesystem frozen at the moment of the dump.
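    The sequence can be sketched like this (paths, the database name, and the way occ is invoked — often `sudo -u www-data php occ` — are all assumptions):

    ```shell
    #!/bin/sh
    set -e

    occ maintenance:mode --on                    # quiesce Nextcloud
    pg_dump -U nextcloud nextcloud > /srv/backup/nextcloud.sql
    btrfs subvolume snapshot -r /srv/nextcloud-data /srv/.backup-snap
    occ maintenance:mode --off                   # live again within seconds

    # Back up the frozen snapshot at leisure, then drop it.
    kopia snapshot create /srv/.backup-snap /srv/backup/nextcloud.sql
    btrfs subvolume delete /srv/.backup-snap
    ```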

  • beeng@discuss.tchncs.de · 17 days ago

    Borg to a NAS.

    500GB of that NAS is “special”, so I then rsync it to an old 500GB laptop HDD, which is duplicated again to another old 500GB laptop HDD.

    Same 500GB rsync’d to Cloud Server.

  • lattrommi@lemmy.ml · 17 days ago

    I want to say I’m glad you asked this, and thank you for asking. In this day and age there are a lot of valid concerns about privacy and anonymity, and the result is that people do not share how their system(s) work, not openly or very often. I’m still fairly new to Linux (3.5 years) and at times I feel like I am doing everything wrong and that there is probably a better way. Posts like these help me learn about possible improvements or mistakes I might have made.

    I previously used Vorta with Borgbackup locally, automatically backing up my Home (sans things like .cache and .mozilla) to a secondary internal drive every other day. I also would manually back up a smaller set of important documents (memes and porn #joke) to a USB flash drive, to keep on my person, which also would be copied across several cloud storage providers (dropbox, mega, proton), depending on how much space their free versions provided, with items removed according to how much I trusted the provider.

    Then I built a new system. In the process of setting it all up, I had a few hiccups, and it took longer than I expected to have a stable system. That was over a year ago (stat / …Birth: 2024-02-05 04:20:53…) and I still haven’t gotten around to setting up any backup system on it. I want to rethink my old solution, and this post is useful for learning about the options available. It’s also a reminder to get it done before it is too late. Where I live, tornado season is starting. I lost a lot in 2019 after my city had 4 tornadoes in one day.

  • limelight79@lemm.ee · 17 days ago

    My kmymoney file goes on an old compact flash memory card.

    My home directory (including that file), /etc, databases, and a few other things get backed up weekly on to a USB stick.

    Media raid array is automatically backed up to a large drive in another computer each evening. (The raid5 array isn’t that large. It was when I built it, but now I can buy a single drive that is nearly as large as the array…)

    Pictures are backed up to Amazon S3 Glacier Deep Archive. I pay about $1/month to back up all of my pictures. I intend to put other important things there too but haven’t gotten to it yet.
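    For reference, an upload to that storage class might look like this (the bucket name is made up):

    ```shell
    # Deep Archive is the cheapest S3 class (~$1/TB-month), but restores
    # take hours — fine for a disaster-recovery copy of photos.
    aws s3 cp ~/Pictures s3://my-photo-backup/pictures/ \
        --recursive --storage-class DEEP_ARCHIVE
    ```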

  • tasankovasara@sopuli.xyz · 17 days ago

    • daily important stuff (job stuff, the Documents folder, Renoise mods) is kept synced between laptop, desktop and home server via Syncthing. A vimwiki is additionally synced to the phone. Sync happens only when on the home network.

    • the rest of the laptop and desktop I roll into a tar backup every now and then with a quick bash alias. The tar files also get synced onto the home server’s big filesystem (2 TB SSD) via Syncthing.

    • the clever thing is that the 2 TB SSD replaced an old 2 TB spinning disk. I kept the old disk and set up a systemd job that keeps it spun down, but once a week starts and mounts it, rsyncs over the week’s changes from the SSD, then unmounts it so it sleeps for another week. That old drive is likely to serve for years still with this frugal use.
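    The weekly job could be sketched like this (device label, mount point, and paths are assumptions; the systemd timer/service pair that schedules it is omitted):

    ```shell
    #!/bin/sh
    # Runs once a week from a systemd timer; between runs the old drive
    # stays unmounted and spun down.
    set -e
    mount /dev/disk/by-label/old-2tb /mnt/old-2tb
    rsync -a --delete /mnt/big-ssd/ /mnt/old-2tb/
    umount /mnt/old-2tb
    ```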

  • Radioactive Butthole@reddthat.com · 17 days ago

    I have a server with a RAID 1 array that makes daily, weekly, and monthly read-only btrfs snapshots. The whole thing (sans snapshots) is synced with Syncthing to two Raspberry Pis in two different geographic locations.

    I know neither RAID nor Syncthing is a “real” backup solution, but with so many copies of the files living in so many locations (in addition to my phone, laptop, etc.), I’m reasonably confident it’s a decent setup.

  • bitcrafter@programming.dev · 16 days ago

    I created a script that I dropped into /etc/cron.hourly which does the following:

    1. Uses rsync to mirror my root partition to a btrfs partition on another hard drive (only modified files are copied).
    2. Uses btrfs subvolume snapshot to create a snapshot of that mirror (only modified files consume additional storage).
    3. Moves “old” snapshots into a trash directory so I can delete them later if I want to reclaim space.

    It is as follows:

    #!/usr/bin/env python
    from datetime import datetime, timedelta
    import os
    import pathlib
    import shutil
    import subprocess
    import sys
    
    import portalocker
    
    DATETIME_FORMAT = '%Y-%m-%d-%H%M'
    BACKUP_DIRECTORY = pathlib.Path('/backups/internal')
    MIRROR_DIRECTORY = BACKUP_DIRECTORY / 'mirror'
    SNAPSHOT_DIRECTORY = BACKUP_DIRECTORY / 'snapshots'
    TRASH_DIRECTORY = BACKUP_DIRECTORY / 'trash'
    
    EXCLUDED = [
        '/backups',
        '/dev',
        '/media',
        '/lost+found',
        '/mnt',
        '/nix',
        '/proc',
        '/run',
        '/sys',
        '/tmp',
        '/var',
    
        '/home/*/.cache',
        '/home/*/.local/share/flatpak',
        '/home/*/.local/share/Trash',
        '/home/*/.steam',
        '/home/*/Downloads',
        '/home/*/Trash',
    ]
    
    OPTIONS = [
        '-avAXH',
        '--delete',
        '--delete-excluded',
        '--numeric-ids',
        '--relative',
        '--progress',
    ]
    
    def execute(command, *options):
        print('>', command, *options)
        subprocess.run((command,) + options).check_returncode()
    
    execute(
        '/usr/bin/mount',
        '-o', 'rw,remount',
        BACKUP_DIRECTORY,
    )
    
    try:
        with portalocker.Lock(os.path.join(BACKUP_DIRECTORY,'lock')):
            execute(
                '/usr/bin/rsync',
                '/',
                MIRROR_DIRECTORY,
                *(
                    OPTIONS
                    +
                    [f'--exclude={excluded_path}' for excluded_path in EXCLUDED]
                )
            )
    
            execute(
                '/usr/bin/btrfs',
                'subvolume',
                'snapshot',
                '-r',
                MIRROR_DIRECTORY,
                SNAPSHOT_DIRECTORY / datetime.now().strftime(DATETIME_FORMAT),
            )
    
            snapshot_datetimes = sorted(
                (
                    datetime.strptime(filename, DATETIME_FORMAT)
                    for filename in os.listdir(SNAPSHOT_DIRECTORY)
                ),
            )
    
            # Keep the last 24 hours of snapshot_datetimes
            one_day_ago = datetime.now() - timedelta(days=1)
            while snapshot_datetimes and snapshot_datetimes[-1] >= one_day_ago:
                snapshot_datetimes.pop()
    
            # Helper function for selecting all of the snapshot_datetimes for a given day/month
            def prune_all_with(get_metric):
                this = get_metric(snapshot_datetimes[-1])
                snapshot_datetimes.pop()
                while snapshot_datetimes and get_metric(snapshot_datetimes[-1]) == this:
                    snapshot = SNAPSHOT_DIRECTORY / snapshot_datetimes[-1].strftime(DATETIME_FORMAT)
                    snapshot_datetimes.pop()
                    execute('/usr/bin/btrfs', 'property', 'set', '-ts', snapshot, 'ro', 'false')
                    shutil.move(snapshot, TRASH_DIRECTORY)
    
            # Keep daily snapshot_datetimes for the last month
            last_daily_to_keep = datetime.now().date() - timedelta(days=30)
            while snapshot_datetimes and snapshot_datetimes[-1].date() >= last_daily_to_keep:
                prune_all_with(lambda x: x.date())
    
            # Keep weekly snapshot_datetimes for the last three months
            last_weekly_to_keep = datetime.now().date() - timedelta(days=90)
            while snapshot_datetimes and snapshot_datetimes[-1].date() >= last_weekly_to_keep:
                prune_all_with(lambda x: x.date().isocalendar()[:2])  # (year, week) so weeks in different years stay distinct
    
            # Keep monthly snapshot_datetimes forever
            while snapshot_datetimes:
                prune_all_with(lambda x: (x.year, x.month))  # (year, month) so months in different years stay distinct
    except portalocker.AlreadyLocked:
        sys.exit('Backup already in progress.')
    finally:
        execute(
            '/usr/bin/mount',
            '-o', 'ro,remount',
            BACKUP_DIRECTORY,
        )
    
  • Gieselbrecht@feddit.org · 17 days ago

    I’m curious: is there a reason why no one uses Déjà Dup? I use it with an external SSD on Ubuntu and (recently) Mint, where it comes pre-installed, and did not encounter problems.