Using Voice to Code Faster Than Typing
August 13, 2013 at 07:30 AM
In a recent talk at PyCon, Tavis Rudd demonstrated how he overcame repetitive stress injury (RSI) by coding with his voice instead of typing.
His conventions for words that translate into the necessary keystrokes are ingenious and I wonder if they will become a voice-coding standard.
Permanent Link — Posted in Geek Tactics
Best Practices For Using Arch Linux on Servers
December 06, 2012 at 12:00 PM
I've been running Arch Linux on my workstations and servers for a long time. Every once in a while I see a debate in an Arch Linux forum about its suitability for production servers. As a rolling-release distribution, it differs from distributions that concentrate on enterprise and long-term support, like Red Hat Enterprise Linux and CentOS. Without getting too far into the pros and cons, one key reason I use Arch on servers is earlier access to newer technologies, such as the 3.0 Linux kernel series (with built-in Xen support). Mostly, though, it comes down to my familiarity with and love for it. The OS I load on my servers exists to support my applications, and I find Arch simple and light yet thorough and stable in getting the job done. If you are running Arch on servers or are interested in doing so, here are some practices that I recommend.
This is not meant to be an exhaustive list, and there are different approaches to systems administration; I welcome feedback and discussion of these concepts. I have seen projects centered on creating an offshoot of Arch for use on servers, and ultimately I think they miss the point. The idea is not to make Arch act like CentOS. With some simple tweaks to your deployment and management process, Arch is a fine distro to use on servers.
Dealing with "Rolling Release"
Many of these things apply not just to Arch but to any rolling-release distribution. Recently, Arch Linux has gone through some fundamental changes in the base layers of the operating system, such as network configuration and system initialization. Updating needs to be a regular and deliberate process. You can automate much of it, but you really should do the base updates manually.
1. Have a server in each datacenter or cloud that acts as a "base" server for testing updates. I use the Linode, Rackspace, and Amazon EC2 clouds, and I keep a dev server in each so I can work out any issues before updating mission-critical instances. Once you update the base server, you can image it out appropriately for your environment.
2. Keep Snapshots so that you can "roll back". This is one of those things you should be doing no matter what you are using.
3. Update often. I run updates weekly on my workstations and weekly or every other week on my servers. With a rolling-release distribution, the more out of date you are, the more work each update takes. If you don't have a proper environment for testing updates, make an image of your server and run updates against that in something like VirtualBox.
4. Watch the news, forums, and mailing lists for update issues. I update my workstations first and run them for a few days before updating my servers - unless it is a critical security issue. Package updates that fix security issues should always be applied as soon as practically possible.
5. Don't Run Updates via Daemon or Cron. I do not recommend running system updates via cron. You just never know when an update will require more than just the basics. If you are pushing your own applications via custom repositories, those can be automated if appropriate. (See Custom Repositories below)
6. Script tricky updates. A quick bash script can make updates against servers very simple and painless; I typically run my updates from one spot (see the sketch after this list). Configuration management tools can help here too. (See Configuration Management below)
7. Remove pacman from SyncFirst and HoldPkg in /etc/pacman.conf. The default pacman.conf will stop and prompt to update pacman if there is a new version of it before updating the system. For workstations this is fine, but for servers or when you are running scripted updates, this will get in the way. If you are updating your workstations first and the server last, you will know if you need to update pacman first.
8. Create scripts to bring machines current from your vendor's image. Ideally you are running your own images made from your own base instances, but if you are using the vendor's images - such as the "Arch Linux" installs from Rackspace or Linode, you should have a script that takes that image and brings it current. This script needs to be tested and updated regularly as part of your update cycle.
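As a minimal sketch of the scripted approach from item 6 (the hostnames are hypothetical, and the pacman flags are my assumption, not a prescription), a wrapper like this runs updates from one spot and stops at the first failure:

#!/bin/bash
# update-servers.sh - minimal sketch; hostnames are hypothetical.
# Assumes pacman has been removed from SyncFirst and HoldPkg (item 7)
# so the run stays non-interactive.
SERVERS="base.example.com web1.example.com db1.example.com"
for host in $SERVERS; do
    echo "=== Updating $host ==="
    ssh root@"$host" "pacman -Syu --noconfirm" || { echo "Update failed on $host, stopping" >&2; exit 1; }
done

Run it against the base/dev server first, watch for problems, and only then point the list at production hosts.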
Understanding the Philosophy
The key to running any operating system on your servers is understanding it and the philosophy behind it. Arch Linux is a lot like Python: the driving philosophy is simplicity. Many times people over-think a task or assume it is harder than it actually is.
9. Run Arch as your daily driver. Nothing will bring your knowledge up like interacting with the system on a daily basis. Linux is a great desktop OS; give Arch a try.
Also see The Arch Way entry in the Arch Linux wiki.
Custom Repositories
10. Keep private repositories in their own conf file. Instead of adding them to the main pacman.conf, create a separate configuration file and call pacman with --config filename (see the sketch after this list). This lets you update the packages from your repo independently of system updates.
11. Protect private repos with SSL and an ACL. I put my custom repos behind a classic ACL username and password. Yes, the credentials appear in the .conf file URL in plaintext, but I can always change the ACL if it is compromised, and SSL protects them in transit. This isn't foolproof, so if you are concerned about your proprietary packages leaking, watch the logs or load the sensitive packages in a different way.
12. Sign your packages. Arch's package management system now supports package signing and verification.
13. Keep workstation and server repos separated. I build custom packages that I use on workstations and servers. I like to keep them in different repos so that the server repo stays as clean as possible.
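As a sketch of items 10 and 11 (the repo name, URL, and credentials are placeholders), a standalone conf file and its invocation could look like this:

# /etc/pacman-myrepo.conf - hypothetical private-repo config
[options]
Architecture = auto

[myrepo]
# Tighten SigLevel once your packages are signed (item 12)
SigLevel = Optional TrustAll
Server = https://user:password@repo.example.com/archlinux/$arch

# pacman --config /etc/pacman-myrepo.conf -Syu

Because only [myrepo] is defined in that file, the -Syu touches nothing from the main system repos.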
Configuration Management
14. Configuration management is your friend. If you are managing multiple servers, configuration management tools like Puppet, Chef, or CFEngine will keep your servers consistent and greatly ease management and deployment.
Again, these practices are not meant to be all-encompassing. There are probably many other things that could go into this, but I hope sharing my approach can help others. The Arch Linux Wiki, Forums, and IRC Channels are always helpful resources.
Permanent Link — Posted in Arch Linux, Technology Management, Geek Tactics
Find IP Addresses with awk
June 27, 2012 at 06:00 PM
I needed to find an IP address amidst a bunch of random text. I googled and didn't find anything that worked the way I needed, so I made my own with awk. I thought I'd put it up here in case someone else could use it:
awk 'match($0, /[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+/) {print substr($0, RSTART, RLENGTH)}'
You can pipe anything through this and it will print the first IPv4 address found on each line.
Example:
$ ifconfig | awk 'match($0, /[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+/) {print substr($0, RSTART, RLENGTH)}'
Permanent Link — Posted in Geek Tactics
Increase Amazon EC2 Reliability and Performance with RAID
May 25, 2012 at 06:00 PM
While I haven't *knock on wood* had any EBS failures in Amazon's cloud myself, I have heard the horror stories, and that makes me uneasy. Another issue with disks in the cloud that I do run into a lot is latency: disk I/O is often slower to begin with, and random bouts of latency tend to crop up.
I have addressed both of these problems by deploying RAID 10 on my Amazon EC2 instances. It sounds techie, but you don't have to be a rocket scientist to do this. If you can manage an EC2 instance, you can do it, and I have published a script that will get you there in a few steps.
First you need to have the ec2-api-tools installed and working on a machine. This can be a server but you can also do this on your workstation. For Arch Linux users, there is a package in the AUR.
The key to getting those tools working is setting up your environment variables. I use a little script called awsenv.sh like this:
#!/bin/bash
export AWS_USER_ID="0349-01234-09134"
export AWS_ACCESS_KEY_ID="BLAHDEBLAHBLAHBLAH"
export AWS_SECRET_ACCESS_KEY="somecharsthatmeansnothing"
export EC2_PRIVATE_KEY="/path/to/EC2-key.pem"
export EC2_CERT="/path/to/EC2-cert.pem"
Call it with:
$ source awsenv.sh
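A quick way to confirm the variables and keys are working is a harmless read-only call, such as listing the EC2 regions:

$ ec2-describe-regions

If that prints the region list instead of an authentication error, the tools are ready.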
Now you're ready to grab my script from: https://github.com/bparsons/buildec2raid
Once you have the API tools working, using the script is really easy:
Example:
$ ./buildec2raid.sh -s 1024 -z us-east-1a -i i-9i8u7y7y
This example creates a 1 TB (terabyte) array in the us-east-1a availability zone and attaches it to instance i-9i8u7y7y.
The script does the basic RAID math for you. It uses 8 disks but you can change the DISKS variable near the top of the script if you prefer another topology. I really suggest that you use RAID 10. That way you can pull a slow EBS volume out of your array and then replace it without much hassle.
Once the volumes are created and attached to the instance, you log into the instance and initialize the array:
$ sudo mdadm --create /dev/md0 -l10 -n8 /dev/xvdh*
That starts the array up. Then all you have to do is format it. Here is an XFS example:
$ sudo mkfs.xfs -l internal,lazy-count=1,size=128m -d agcount=2 /dev/md0
If you are new to software RAID, you will find it helpful to check out the Linux RAID Wiki.
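You can check the array's status, and watch the initial sync progress, with:

$ cat /proc/mdstat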
Don't forget to add the mountpoint to your /etc/fstab file and create the /etc/mdadm.conf file:
# mdadm --examine --scan > /etc/mdadm.conf
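For reference, the matching fstab entry might look like this (the /data mountpoint is just a hypothetical choice):

/dev/md0   /data   xfs   defaults,noatime   0 0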
Update Amazon Route53 via python and boto
April 18, 2012 at 08:00 AM
I wrote a Python script to update DNS records on Amazon Route53. You can use it on dynamic hosts by putting it into cron, or at boot for cloud instances with inconsistent IP addresses.
It uses boto, the Amazon Web Services Python interface, for the heavy lifting; you'll need that installed. (Arch Linux has a python-boto package.)
You need to edit the script to place your AWS credentials in the two variables near the top of the script (awskeyid, awskeysecret). Then it's ready to go.
You can specify the hostname as an argument on the command line:
updatedns.py myhost.mydomain.com
...or it will try and resolve the hostname itself.
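For the cron case, an entry like this (the install path and the 15-minute interval are hypothetical choices) keeps the record current:

*/15 * * * * /usr/local/bin/updatedns.py myhost.mydomain.com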
You can download the script here, or from GitHub.
Permanent Link — Posted in Cloud Computing, Geek Tactics, Amazon Web Services