I am no longer using any of the solutions from my first Emacs as an X Clipboard Manager post. I am now using an automatic, event-driven solution that is just a simple shell script. This makes the Emacs kill-ring an X clipboard manager without requiring the user to run some yank operation to pull content from the X clipboard. The shell script depends on clipnotify.
#!/bin/sh
id=$(id -u)
while clipnotify; do
    if pgrep -q -u "$id" -f 'emacs --daemon'; then
        emacsclient -n -e "(current-kill 0)"
    fi
done
If you want to save the kill ring between Emacs sessions, add kill-ring to savehist-additional-variables.
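A minimal sketch of that configuration (assuming you already use savehist; the variable and mode names are standard Emacs):

```elisp
;; Persist the kill ring between Emacs sessions via savehist.
(add-to-list 'savehist-additional-variables 'kill-ring)
(savehist-mode 1)
```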
Here is some configuration for ~/.tmux.conf, so tmux can exchange text with the X clipboard. It depends on xclip.
# Copy the X clipboard contents into the tmux paste buffer and insert them
# into the current pane.
bind < run "xclip -sel clip -o|tmux load-buffer -;tmux paste-buffer"

## Copy mode keybindings
# Copy the selection into the tmux paste buffer and the X clipboard.
bind -Tcopy-mode M-w send -X copy-pipe-and-cancel 'xclip -sel clip -i'
bind -Tcopy-mode M-W send -X copy-pipe 'xclip -sel clip -i'
The developer summit began on Thursday morning with an unconference feel. After introductions and greetings, we got down to brass tacks by hacking in smaller groups. I took the opportunity to work with dch@ on a review, catch up on some core@ email, and work on a few port updates. Eventually seanc@ stepped up to lead a room-wide discussion. Some of the topics were:
- Core.10 expectations
- Workflow and version control
- Goals for FreeBSD 13
For me, the most notable point from the 'Core.10 expectations' discussion was that Core.10 is planning a developer survey. To use Sean's phrase, "We are flying blindly". With a well-crafted survey, we can begin to make decisions based on what people really want. If you have questions that you think should be added to the survey, please discuss them on freebsd-arch@. Once the survey of developers is complete, the plan is to send out a version to the wider community. I have high expectations for the surveys. They have the potential to shape the direction of the project for years to come. On a related note, there were rumblings about either revitalizing the BSDstats project or coming up with something similar, so we can get good system data to direct development. It is a waste of time and energy, for example, to maintain drivers that no one is using; conversely, it would be a shame to remove parts of the OS that are still useful to some number of users.

During the 'Workflow and version control' discussion, I sensed that we (mostly) agreed and reaffirmed that we want:
- light feature branches
- intuitive reviews that integrate well with our version control system
- a self-hosted code repository that acts as the single source of truth
- GitHub pull requests that are not ignored
- smart continuous integration
- good integration with our bug management system
Some costs of moving from Subversion to Git were also acknowledged:
- many of our tools are built to work with Subversion and it likely will not be trivial to update them
- Git's history tracking of copies and moves is unpredictable
- we will lose simple, incremental commit revision numbers
- unlike Subversion, checking out a subdirectory is not straightforward
- our CVS/Subversion history has some warts, so it might be difficult or impossible to retain all our history in Git
- developers who are happy with Subversion will have to spend time learning a new tool
Later on Twitter, des@ reminded us of a post by phk@ in one of our 'Git versus Subversion' threads of pain. In it, phk@ argues that, in some respects, comparing Subversion and Git is dubious. Subversion supports the traditional features of a version control system, whereas Git is different. Git is more of a collaboration tool that can mostly substitute as a version control system. It is a worthwhile (re-)read for anyone interested in this topic.
Considering the contentious nature of the topic, the discussions were civil and technical and even though it has been discussed a few times in the past, some new, important technical questions were raised. Hopefully the developer survey will help us clarify one important point: What proportion of developers feel that the benefits of switching will outweigh the costs?
Most of the discussion time was dedicated to 'Goals for FreeBSD 13'. Since there are notes, I will only give an overview of what I found most interesting. We want:
- Better support for automation with tools like Ansible, Chef, CFEngine, etc. We can do this by supporting 1. more .d directories, so configuration files can be added rather than modified, 2. more libraries like libifconfig, libjail, and libipfw, and 3. more configuration with UCL.
- A better user experience. Since graphics was at the top of everyone's mind, it was mentioned as an example. The right kernel module should be loaded without user intervention. In general, the graphics stack should be a priority between now and 13 and those working on FreeBSD graphics should be given as much support as possible.
- Less in base. pkgbase, which looks like it should land some time after 12.0 and hopefully before 13, will begin to blur the line between what is part of base and what is third party. For those interested, kmoore@ will be giving a talk on this subject at MeetBSD. In any case, we definitely want to replace many of the tools that cause the most security issues, like sendmail and ntpd. For sendmail, dma (DragonFly Mail Agent) has been discussed as a replacement, but it is missing functionality that upstream does not want to add, like persistent queues. Any alternative should be a simple mail sender and not a daemon. Similarly, any ntpd replacement should only be a simple client with minimal functionality, but one must-have feature is leap second smearing.
- Structured log files.
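On the .d-directories point above: FreeBSD's rc system already reads per-service files from /etc/rc.conf.d, which gives a flavour of the approach. A configuration-management tool can own a whole drop-in file instead of editing the shared /etc/rc.conf. A sketch:

```
# /etc/rc.conf.d/sshd -- a drop-in an automation tool can own outright;
# equivalent to putting the line below in /etc/rc.conf
sshd_enable="YES"
```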
dch@ gave a short talk/demo about an interesting idea to revive something similar to RedPorts, which offered package building and testing as a service. Hopefully this will get more people interested in helping out with ports. Given the many combinations of supported versions and architectures, it can be difficult to test all those combinations on a desktop or laptop.
The conference proper started on Saturday with a keynote talk by Costin Raiciu. In a nutshell, he is working on something he calls Unikraft, which reduces the time and effort required to strip down a Linux kernel, so that it only includes the components necessary for specific target applications. I learned that these stripped down kernels are referred to as unikernels. He also described other unikernel work designed to improve the efficiency of lightweight virtual machines (VMs), so that they approach the speed and size of containers. An example use-case involved creating ephemeral VM instances whenever requests came in from mobile devices. The motivation for doing this is to ensure that each request is run with a high level of isolation. When there are thousands of such instances running simultaneously, efficiency is a priority.
A few talks that stood out for me were Advances in OpenBSD packages: https is a lie, by Marc Espie, Using Boot Environments at Scale by Allan Jude, The End of DNS as we know it, by Carsten Strotmann, and FreeBSD Graphics by Niclas Zeising.
In Marc's talk, he described work to 1. securely and efficiently fetch packages with OpenBSD's pkg_add and 2. separate privileges when building packages on OpenBSD. Marc is an enthusiastic and opinionated speaker, which, within limits, I like because it keeps my attention. There was also some good back-and-forth with the audience to tease out a few technical questions, which kept things lively. For example, could TLS 1.3 solve some of the challenges with session resumption?
The peace of mind ZFS boot environments offer when making changes to the OS has been well publicized. Allan described his efforts to enhance this work by wrangling boot environments and poudriere image into something that could be used to install and upgrade many of his company's servers without having to deal with mergemaster or etcupdate. A few other advantages of the approach are: 1. no need for console access, 2. failures are handled gracefully, 3. less manual intervention, and 4. releases can be created without compiling. A nice part of the talk was when Allan described the pitfalls he faced along the way. For example, using a separate filesystem for /etc, so that it can persist through an upgrade, sounds like a good idea, but when /etc/fstab or /etc/rc are not available at the right time, bad things happen. Contact Allan to help with ZFS support for poudriere image or to report bugs and request features.
The End of DNS as we know it was a fantastic talk that sparked lots of interesting comments from a knowledgeable audience. Carsten did a nice job finding a good technical balance. The talk was not light on technical information, but did not venture too far into the weeds. It was interesting to hear a comprehensive review of where we are headed with such a fundamental Internet protocol.
If you run FreeBSD on your desktop or laptop, then Niclas's FreeBSD Graphics talk would have helped you appreciate the work the small graphics team does so that we can run our favourite window manager. There are different parts the team takes responsibility for: drivers, libraries, the X server itself, and a variety of other applications that sum up to about 300 ports. After giving an overview of these different parts, Niclas described some of the challenges and future work. If you would like to get in touch with the Graphics team, contact them at x11@FreeBSD.org, https://gitter.im/FreeBSDDesktop/Lobby, #freebsd-xorg on efnet, or on the FreeBSD Desktop GitHub page.
Monday was secretary-tourist-day as René Ladan and I set out to play tourists. Unfortunately, Mondays are less than ideal for tourist activities, since many of the museums and other attractions are closed, so we decided to have a free walk around the city without much of a plan. Some highlights were the Palace of the Parliament (which apparently is the second largest administrative building and the heaviest building in the world), our stroll along the Dambovita River, and the city parks.
We discussed the developer summit, our roles as secretaries, and the challenges and opportunities facing the project. In one of our personal conversations, which René permitted me to share, he described some of his Autism-related challenges, in particular his difficulties with communication. On the one hand, this was surprising, because René's reports and other writings are high quality. On the other hand, the clarity of his speech can vary, but luckily René does not mind repeating himself if something is unclear the first time around. He also explained how his speech depends on how comfortable he feels in a social situation. I saw evidence of this, especially in the relaxed atmosphere of a quiet café, because our discussions were natural, stimulating and a high point of my trip.
Indeed, the talks and the developer summit were interesting, but a meaningful aspect of a conference like this for me is to step away from the comfort of the IRC/mail client and engage in face-to-face discussions with others. We spend significant amounts of time working on a common goal, so it is especially rewarding for me to make a connection with other members of the team.
This post is now out of date. The FreeBSD port/package of Mastodon, net-im/mastodon, is no longer available. To install Mastodon on FreeBSD, refer to the Mastodon Production Guide. To upgrade Mastodon, refer to the Mastodon release notes.
Stop the Mastodon daemons.

# service mastodon_web stop
# service mastodon_workers stop
# service mastodon_stream stop

Upgrade your packages.
# pkg upgrade
Check the Mastodon release notes to see if any special scripts need to be run. For example, if a data migration is required, run the commands below, but do not prepend commands with `bundle exec` and never run `bundle install`, `yarn install`, or `assets:precompile`.
# su - mastodon
% echo -n > Gemfile.lock
% RAILS_ENV=production rails db:migrate
Do a diff between nginx-include.conf.sample and nginx-include.conf to check for changes. Update nginx-include.conf if necessary.
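A small helper along these lines can make the check mechanical. This is a sketch, not part of the Mastodon package; the function name is made up, and the path assumes the default package layout.

```shell
# conf_differs: report whether a config file differs from its .sample.
conf_differs() {
    if diff -u "$1" "$1.sample" > /dev/null 2>&1; then
        echo unchanged
    else
        echo changed
    fi
}

# Example: compare the live Mastodon include against the packaged sample.
conf_differs /usr/local/www/mastodon/nginx-include.conf
```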
Return to root privileges. If you made changes to nginx-include.conf, reload the nginx configuration.
# service nginx reload

Finally, restart the Mastodon daemons.
# service mastodon_web start
# service mastodon_workers start
# service mastodon_stream start
Install the packages
# pkg install nginx postgresql95-server postgresql95-contrib mastodon
Enable and start required services
# sysrc redis_enable="YES"
# service redis start
# sysrc postgresql_enable="YES"
# service postgresql initdb
# service postgresql start

The Mastodon port ships with two sample nginx configuration files: a complete nginx.conf, and nginx-include.conf, which contains little more than the server block. If the web server is going to be dedicated to Mastodon, you can create a new nginx profile.
# sysrc nginx_profiles=mastodon
# sysrc nginx_mastodon_configfile="/usr/local/www/mastodon/nginx.conf"

If you prefer to continue using your current nginx.conf, you can add the line below to it. Make sure you put it inside the http block.
include /usr/local/www/mastodon/nginx-include.conf;

In either case, you need to customize nginx-include.conf. At minimum, you will have to change all instances of example.com. Once you are satisfied with its configuration, start nginx.
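If you take the second approach, the placement might look like the sketch below; everything except the include line stands in for your existing configuration.

```nginx
http {
    # ... your existing settings ...
    include /usr/local/www/mastodon/nginx-include.conf;
}
```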
# sysrc nginx_enable="YES"
# service nginx start
Create mastodon database user
# psql -d template1 -U pgsql -c "CREATE USER mastodon CREATEDB;"
Switch to the mastodon user
# su - mastodon
Customize .env.production

Customize .env.production to suit your needs, but you must at least set values for LOCAL_DOMAIN and SMTP_FROM_ADDRESS. You also have to generate secrets for PAPERCLIP_SECRET, SECRET_KEY_BASE, and OTP_SECRET. Generate a different secret for each of these fields by running the command below three times. Save .env.production before moving on to the next step.
% RAILS_ENV=production rake secret

If you are installing version 1.5.0 or later, to enable Web Push notifications, you need to generate a few extra secrets and put them in .env.production.
% RAILS_ENV=production rake mastodon:webpush:generate_vapid_key
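Putting those pieces together, a minimal .env.production might look like the sketch below. Every value is a placeholder, and the VAPID variable names are my assumption about where the webpush task's output goes, so check the task's output and the sample file.

```
# Minimal .env.production sketch; all values are placeholders.
LOCAL_DOMAIN=example.com
SMTP_FROM_ADDRESS=notifications@example.com
PAPERCLIP_SECRET=...     # output of `rake secret`
SECRET_KEY_BASE=...      # output of `rake secret`
OTP_SECRET=...           # output of `rake secret`
VAPID_PRIVATE_KEY=...    # from the webpush task (1.5.0+)
VAPID_PUBLIC_KEY=...
```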
Set up the database for the first time
% RAILS_ENV=production rails db:setup
Start the Mastodon daemons

Run the following commands with root privileges.
# sysrc mastodon_web_enable="YES"
# sysrc mastodon_workers_enable="YES"
# sysrc mastodon_stream_enable="YES"
# service mastodon_web start
# service mastodon_workers start
# service mastodon_stream start
Optional Configuration in /etc/rc.conf

As of version 2.0.0, values for some options can be read from /etc/rc.conf instead of .env.production. See the comments in Mastodon's three rc scripts, mastodon_web, mastodon_workers, and mastodon_stream, for details.
Give users administration rights

After the user alice has created an account through the web interface, you can give her administration rights by using the command below.
# sudo su - mastodon
% RAILS_ENV=production rails mastodon:make_admin USERNAME=alice
In this post, I describe how I manage ZFS snapshots and remote replication from a source host, phe, to a target host, bravo. I assume you have version 0.7.3 or newer of zap installed via the FreeBSD package and the sudo command is available.
Target host (bravo)
Partition the disks to be used for the backup pool.
# for i in 1 2; do
    gpart create -s gpt ada$i
    gpart add -a 1M -l gptboot$i -t freebsd-boot -s 512k ada$i
    gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada$i
    gpart add -a 1m -l zfs$i -t freebsd-zfs ada$i
done
Create the backup pool and a top-level filesystem with the necessary permissions. Make the backup datasets readonly, otherwise incremental replication may fail.
# zpool create -m none zback mirror gpt/zfs1 gpt/zfs2
# zfs create -o readonly=on zback/phe
# zfs allow -u zap canmount,compression,create,mountpoint,mount,receive,refreservation,userprop,volmode zback/phe
Source host (phe)
Set permissions on the datasets to be replicated.
# zfs allow -u zap bookmark,diff,hold,send,snapshot zroot/ROOT
# zfs allow -u zap bookmark,diff,hold,send,snapshot zroot/usr/home
# zfs allow -u zap bookmark,diff,hold,send,snapshot zroot/var
Set the zap:snap user property to on for datasets to snapshot. The property will be inherited, so explicitly turn it off for datasets that we do not want to snapshot.
# zfs set zap:snap=on zroot/ROOT zroot/usr/home zroot/var
# zfs set zap:snap=off zroot/var/crash zroot/var/tmp zroot/var/mail
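To confirm what will actually be snapshotted, the property can be inspected recursively; the SOURCE column shows 'local' where the property was set explicitly and 'inherited' elsewhere:

```
# zfs get -r -t filesystem zap:snap zroot
```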
Create the first snapshots.
# sudo -u zap zap snap -v 1w
Set the zap:rep property for datasets that will be replicated to bravo.
# zfs set zap:rep='zap@bravo:zback/phe' zroot/ROOT zroot/usr/home zroot/var
# zfs set zap:rep=off zroot/var/crash zroot/var/tmp zroot/var/mail
Before replicating datasets, key-based ssh authentication to bravo must be set up. An important decision at this point is whether or not to protect the private key that will be generated with a passphrase. If anyone else has access to the source host, the decision is clear: add a passphrase. The downside to adding a passphrase is that you will have to enter it before you can replicate. The ssh-agent makes this more manageable.
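A sketch of the ssh-agent workflow for the zap user, assuming OpenSSH: the agent caches the decrypted key, so the passphrase only has to be typed once per session instead of at every replication.

```shell
# Start an agent and export its environment variables into this shell.
eval "$(ssh-agent -s)"
# List cached identities; run `ssh-add` once to cache your key.
ssh-add -l || true
```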
# sudo -u zap -s
zap@phe % ssh-keygen
Manually copy the contents of the newly created public key on the source host (~/.ssh/id_rsa.pub by default) to ~/.ssh/authorized_keys on the target host and confirm that you can ssh from the source host to the target host using key-based authentication.
Since this is the first time these datasets are being replicated to bravo, a full stream will be sent.
zap@phe % zap rep -v
Target host (bravo)
Optionally mount the backup datasets under /zback/phe.
# mkdir -p /zback/phe
# zfs set mountpoint=/zback/phe zback/phe/ROOT/default
# zfs mount zback/phe/ROOT/default
# zfs set mountpoint=/zback/phe/var zback/phe/var
# zfs mount zback/phe/var
# zfs set mountpoint=/zback/phe/usr/home zback/phe/usr/home
# zfs mount zback/phe/usr/home
Since I also use zap on bravo, I turn off the zap:snap and zap:rep properties for the backups.
# zfs set zap:snap=off zback/phe/ROOT zback/phe/usr/home zback/phe/var
Source host (phe)
Create new snapshots and send the incremental changes to bravo.
zap@phe % zap snap -v 1d
zap@phe % zap rep -v
Automate rolling snapshots and replication with cron. Taking snapshots is normally cheap, so do it often. Destroying snapshots can thrash disks, so I only do it every 24 hours. Sensible replication frequencies vary with different factors, so adjust according to your needs.
Edit the crontab for the zap user by adding the contents shown below.
zap@phe % crontab -e
#minute hour    mday    month   wday    command
# take snapshots
*/5     *       *       *       *       zap snap 1d
14      */4     *       *       *       zap snap 1w
14      00      *       *       1       zap snap 1m
# replicate datasets
54      */1     *       *       *       zap rep -v
Edit the crontab for the root user by adding the contents shown below.
# crontab -e
#minute hour    mday    month   wday    command
# destroy snapshots
44      04      *       *       *       zap destroy