BUILD AN EXPLORER ROBOT Part 1 of the complete 4-part guide starts this issue www.linuxuser.co.uk
SECURITY KIT
LIVE-BOOTING DVD
BEAT THE HACKERS
Penetration testing with Raspberry Pi Turn your Pi into the ultimate security tool
Compile new software
Get new FOSS without updating your distro
24 pages of Raspberry Pi
SECURE YOUR SYSTEM
The 7 full programs you need to lock down your PC
Use the Pi to visualise music in Minecraft
Take control of Bash Learn how to use shell scripts to save time
• Kali Linux • IPFire • OpenSSH • Snort • chkrootkit • Wireshark • nmap
Serve files across a network by building and optimising your own storage device
VPNs on test Also inside ISSUE 167
Play musical Minecraft
What’s the best network for your privacy needs?
» IPROUTE2 explained » Make a Ras Pi picture frame » How to work with Go functions
Black HAT Hack3r
Get more from your Pi and HATs with this add-on
Welcome
THE MAGAZINE FOR THE GNU GENERATION
Imagine Publishing Ltd Richmond House, 33 Richmond Hill Bournemouth, Dorset, BH2 6EZ ☎ +44 (0) 1202 586200 Web: www.imagine-publishing.co.uk www.linuxuser.co.uk www.greatdigitalmags.com
to issue 167 of Linux User & Developer
Magazine team Editor April Madden
april.madden@imagine-publishing.co.uk ☎ 01202 586218 Production Editor Rebecca Richards Designer Rebekka Hearl Photographer James Sheppard Senior Art Editor Andy Downes Editor in Chief Dan Hutchinson Publishing Director Aaron Asadi Head of Design Ross Andrews
This issue
Contributors Dan Aldred, Joey Bernard, Toni Castillo Girona, Christian Cawley, Kunal Deo, Alex Ellis, Gareth Halfacree, Tam Hanna, Oliver Hill, Jon Masters, Paul O’Brien, Swayam Prakasha, Richard Smedley, Nitish Tiwari, Mihalis Tsoukalos
Advertising
Digital or printed media packs are available on request. Head of Sales Hang Deretz ☎ 01202 586442 hang.deretz@imagine-publishing.co.uk Sales Executive Luke Biddiscombe ☎ 01202 586431 luke.biddiscombe@imagine-publishing.co.uk
FileSilo.co.uk
Assets and resource files for this magazine can now be found on this website. Support filesilohelp@imagine-publishing.co.uk
International
Linux User & Developer is available for licensing. Head of International Licensing Cathy Blackman ☎ +44 (0) 1202 586401 licensing@imagine-publishing.co.uk
Subscriptions
For all subscriptions enquiries LUD@servicehelpline.co.uk ☎ UK 0844 249 0282 ☎ Overseas +44 (0) 1795 418661 www.imaginesubs.co.uk Head of Subscriptions Sharon Todd
Circulation
Circulation Director Darren Pearce ☎ 01202 586200
Look for issue 168, on sale 28 July. Want it sooner? Subscribe today!
Production
Production Director Jane Hawkins ☎ 01202 586200
Finance
Finance Director Marco Peroni
Founder
Group Managing Director Damian Butt
Printing & Distribution
Printed by William Gibbons, 26 Planetary Road, Willenhall, West Midlands, WV13 3XT Distributed in the UK, Eire & the Rest of the World by: Marketforce, 5 Churchill Place, Canary Wharf London, E14 5HU ☎ 0203 148 3300 www.marketforce.co.uk Distributed in Australia by: Gordon & Gotch Australia Pty Ltd 26 Rodborough Road Frenchs Forest, New South Wales 2086, Australia ☎ +61 2 9972 8800 www.gordongotch.com.au
Disclaimer
The publisher cannot accept responsibility for any unsolicited material lost or damaged in the post. All text and layout is the copyright of Imagine Publishing Ltd. Nothing in this magazine may be reproduced in whole or part without the written permission of the publisher. All copyrights are recognised and used specifically for the purpose of criticism and review. Although the magazine has endeavoured to ensure all information is correct at time of print, prices and availability may change. This magazine is fully independent and not affiliated in any way with the companies mentioned herein. If you submit material to Imagine Publishing via post, email, social network or any other means, you automatically grant Imagine Publishing an irrevocable, perpetual, royalty-free license to use the material across its entire portfolio, in print, online and digital, and to deliver the material to existing and future clients, including but not limited to international licensees for reproduction in international, licensed editions of Imagine products. Any material you submit is sent at your risk and, although every care is taken, neither Imagine Publishing nor its employees, agents or subcontractors shall be liable for the loss or damage.
© Imagine Publishing Ltd 2016
» Secure your system » Pen testing with Raspberry Pi » Play musical Minecraft » Build your own NAS

Welcome to the latest issue of Linux User & Developer, the UK and America's favourite Linux and open source magazine. You know that old adage about setting a thief to catch a thief? It's true, especially when it comes to security. It's all very well setting up a fence, but for maximum security you need to start thinking like a hacker. Don't stockpile bitcoins in case your data gets locked down by crypto ransomware; think about what data is important and bury it somewhere so deep, encrypted and offline that it makes The Pit look like a holiday camp. Don't assume that passive guards are enough to protect systems and networks; get a vuln scanner running, have a packet sniffer lurking, do some pen-testing, try to break your own security. Clue: if you succeed, you're doing it wrong.

There's no 100 per cent foolproof security solution out there, but our disc this issue comes pretty close. Packed with two live-booting distros, Kali and IPFire, plus a range of tools, it'll help you to lock down your system and network, and to check how secure it is. Remember, every good security tool is also a hacking tool, so use them wisely and well. You can find out how in our complete guide on p18, and you'll learn how to use your Pi for pen-testing on p56 too. Stay safe out there…

April Madden, Editor
Get in touch with the team: linuxuser@imagine-publishing.co.uk
Facebook: Linux User & Developer
Twitter: @linuxusermag
Buy online
Visit us online for more news, opinion, tutorials and reviews:
www.linuxuser.co.uk
ISSN 2041-3270
Contents
Subscribe & save 30%! Check out our great new offer: US customers can subscribe on page 80

Features
18 Secure your system
Take complete control of system and network security with the ultimate FOSS for securing systems and networks
55 Practical Raspberry Pi
Get started with our Explorer robot series, learn how to visualise music in Minecraft, build a Pi-powered picture frame and other inspirational maker projects
56 Pen test with Pi
Discover how to use your Pi to test your system security

OpenSource
08 News
The biggest stories from the open source world
12 Interview
Chris Buechler of pfSense talks firewalls
16 Kernel column
The latest on the Linux kernel with Jon Masters
94 Letters
Your questions answered, from tech help to advice

Tutorials
32 Bash masterclass: Back to basics
Take control of Bash shell scripting and use its powerful timesaving features
36 Build your own Linux-powered network-attached storage
Linux is the perfect OS to run your own home NAS
40 Networking and traffic control with iproute2
Make networking easier and learn how to monitor and shape traffic
44 Run new software on old distros
Explore the GNU Make build system and learn how to compile new FOSS on old distros
48 Learn Go: Explore Go functions
How to develop and use functions in the Go programming language

Reviews
81 VPNs
Keep your internet activity secure and private with these VPN clients: PrivateTunnel, IPVanish, TunnelBear and CyberGhost
86 Black HAT Hack3r
The essential debugging HAT from Pimoroni put to the test
88 Xubuntu 16.04
Xubuntu has a great track record, but does this update miss the mark?
90 Free software
Richard Smedley recommends some excellent FOSS packages for you to try

Resources
96 Free downloads
Find out what we've uploaded to our digital content hub FileSilo for you this month
Open Source On the disc
On your free DVD this issue
Find out what's on your free disc
Welcome to the Linux User & Developer DVD. This issue we help you lock down your system and network with the best security tools a sysadmin can have. From vulnerability scanning to packet sniffing, this complete toolkit has everything you need to protect your data and test your security. Take control of your system and banish security risks for good!
Featured software:
Kali Linux
The ultimate security testing distro for Linux users will help you to make your system and network watertight. This distro live boots from the disc or can be installed onto your system.
chkrootkit
Got root? Don’t get rooted. Check for signs of a rootkit infesting your files.
nmap
Nmap is a tool for network discovery and security auditing. Many system and network administrators find it useful.
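If you want to try it straight away, a first scan might look like the following. This is only a sketch: the 192.168.1.0/24 range and the host address are assumptions, so substitute your own network, and only scan machines you are responsible for.
$ nmap -sn 192.168.1.0/24 # ping scan to list the hosts that are up on the LAN
$ sudo nmap -sV -p 1-1000 192.168.1.10 # probe the first 1,000 TCP ports on one host and identify service versions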
IPFire
A professional and hardened Linux firewall distribution that is secure, easy to operate and has great functionality ready for enterprises, authorities, and anybody else. This distro installs from live boot; ensure that you have backed up your data before partitioning and installing.
Snort
Sniff out security risks over your network with the complete intrusion detection system.
OpenSSH
OpenSSH is the premier connectivity tool for remote login with the SSH protocol. It encrypts all traffic to prevent eavesdropping, connection hijacking and other attacks.
Wireshark
Wireshark is the world’s foremost network protocol analyser. It lets you see what’s happening on your network at a microscopic level.
Load DVD
To access software and tutorial files, simply insert the disc into your computer and double-click the icon.
Live boot
To live-boot into the distros supplied on this disc, insert the disc into your disc drive and reboot your computer.
Please note: • You will need to ensure that your computer is set up to boot from disc (press F9 on your computer’s BIOS screen to change Boot Options). • Some computers require you to press a key to enable booting from disc – check your manual or the manufacturer’s website to find out if this is the case. • Live-booting distros are read from the disc: they will not be installed permanently on your computer unless you choose to do so.
For best results: This disc has been optimised for modern browsers capable of rendering recent updates to the HTML and CSS standards. So to get the best experience we recommend you use: • Internet Explorer 8 or higher • Firefox 3 or higher • Safari 4 or higher • Chrome 5 or higher
Problems with the disc? Send us an email at linuxuser@ imagine-publishing.co.uk Please note however that if you are having problems using the programs or resources provided, then please contact the relevant software companies.
Disclaimer Important information
Check this before installing or using the disc For the purpose of this disclaimer statement the phrase ‘this disc’ refers to all software and resources supplied on the disc as well as the physical disc itself. You must agree to the following terms and conditions before using ‘this disc’:
Loss of data
In no event will Imagine Publishing Limited accept liability or be held responsible for any damage, disruption and/or loss to data or computer systems as a result of using ‘this disc’. Imagine Publishing Limited makes every effort to ensure that ‘this disc’ is delivered to you free from viruses and spyware. We do still strongly recommend that you run a virus checker over ‘this disc’ before use and that you have an up-to-date backup of your hard drive before using ‘this disc’.
Hyperlinks:
Imagine Publishing Limited does not accept any liability for content that may appear as a result of visiting hyperlinks published in ‘this disc’. At the time of production, all hyperlinks on ‘this disc’ linked to the desired destination. Imagine Publishing Limited cannot guarantee that at the time of use these hyperlinks direct to that same intended content as Imagine Publishing Limited has no control over the content delivered on these hyperlinks.
Software Licensing
Software is licensed under different terms; please check that you know which one a program uses before you install it.
Live boot
Distros: Insert the disc into your computer and reboot. You will need to make sure that your computer is set up to boot from disc. Distros can be live booted so that you can try a new operating system instantly without making permanent changes to your computer.

Explore
FOSS: Insert the disc into your computer and double-click on the icon or Launch Disc file to explore the contents. Alternatively you can insert and run the disc to explore the interface and content.
• Shareware: If you continue to use the program you should register it with the author • Freeware: You can use the program free of charge • Trials/Demos: These are either time-limited or have some functions/features disabled • Open source/GPL: Free to use, but for more details please visit https://opensource.org/licenses/ gpl-license Unless otherwise stated you do not have permission to duplicate and distribute ‘this disc’.
www.linuxuser.co.uk
7
08 News & Opinion | 12 Interview | 94 Letters | 96 FileSilo
KERNEL
Finally! OrangeFS and cgroup get support in 4.6
The latest Linux kernel update unveils some frequently requested features
After two months of development, the highly anticipated 4.6 release of the Linux kernel is now available for download. In his release notes, Linus Torvalds noted that this release was bigger than many previous updates, with an array of new features that have been frequently requested by users. "The 4.6 kernel on the whole was a fairly big release – more commits than we've had in a while. But it all felt fairly calm despite that," noted Torvalds. One such feature is the new distributed file system, OrangeFS. Alongside it, the release boasts improved support for 802.1AE MACsec encryption and more reliable out-of-memory handling. In use, this provides an even safer user experience, without over-complicating any core security functions. Also at the core of Linux 4.6 'Charred Weasel' is the improved integration of cgroup namespaces. The way in which the kernel has often dealt with container spaces has been criticised, but with the addition of namespaces, processes can be isolated more easily. There's then a platform in place
to virtualise the view of any mounted process using cgroup. In a recent interview with linux.com, Linux Foundation fellow Greg Kroah-Hartman said that these changes only scratch the surface of what the new update includes. "We're running almost eight changes an hour, 24 hours a day, to our kernel," he said. "It's one thing to take these bug fixes, ten bug fixes a day, but you also need to take advantage of the new features. We're adding new features for security reasons. We're adding these airbags to the kernel." While both OrangeFS and cgroup namespaces will grab the headlines, embedded machines are also getting a noticeable boost in the 4.6 update. Support for 13 new ARM-based SoCs has been integrated, with the likes of LG and Qualcomm at the top of the list of manufacturers. For the casual user, this should lead to improvements
with a number of compatible devices, including the Raspberry Pi nano-computer and high-end Wi-Fi routers. 64-bit ARM architecture is also getting some much-needed TLC with a number of small, incremental performance enhancements that help speed up several key areas of the desktop. It's believed that the next development release of the kernel will look to expand on the current ARM integration. For a full rundown on the changes in Linux 4.6, head across to https://lkml.org/lkml/2016/5/15/83. It should also be noted that early code for Linux 4.7-rc1 is freely available, with numerous features and improvements, including a new HiSilicon DRM driver, DisplayPort++ dongle detection for HDMI adapters and support for AMD's upcoming range of GPUs. A full, final release of Linux 4.7 isn't expected until August, so don't get too excited just yet.
Above: Support for ARM-based SoCs will help integration between Linux and external hardware
RASPBERRY PI
Raspberry Pi Zero gains camera connector
Upgrade your photography game with the Raspberry Pi's new connector
Upon the initial release of the Raspberry Pi Zero back in November, both critics and users were quick to point out the hardware limits of the £5 computer. However, the team over at Raspberry Pi has recently announced the first big hardware upgrade for the Pi Zero: a camera connector. The announcement coincides with the news of the Sony IMX219 8-megapixel sensor that has been made available for the full-size Raspberry Pi units, and it's now confirmed that it will also work with the Pi Zero. In a blog post, Raspberry Pi founder Eben Upton explains that it was the slowdown in production of the original shipment of Pi Zeroes that led to the development: "Every time a new batch of Zeroes came through from the factory they'd sell out in minutes. To complicate matters, Zero then had to compete for factory space with Raspberry Pi 3, which was ramping for launch at the end of February." He continues: "Happily, we were able to take advantage of the resulting production
hiatus to add the most frequently demanded "missing" feature to Zero: a camera connector. Through dumb luck, the same fine-pitch FPC connector that we use on the Compute Module Development Kit just fits onto the right-hand side of the board." Owners of a Raspberry Pi-enabled camera module itching to connect their module to the Zero will require a custom six-inch adapter cable. This cable – available officially through the RasPi.org site – converts from the traditional FPC format to the coarser pitch used by the camera board. The connector is a tiny 3.5mm, smaller than the adapter used for the Pi 3, but it's the reduced size that adds the capabilities to the Pi Zero. Unlike other models of the Pi, the camera connector here is on the edge of the board, with the ribbon extending from the side, instead of the top. Ultimately this helps keep the profile of the Pi Zero as low as possible. If you still haven't got your hands on a Pi Zero, head across to raspberrypi.org for all the information on online stockists.

TOP FIVE
Raspberry Pi Kickstarter projects to get excited about

1 Hybrid Tube Amp
Meet the first Tube Amp specifically designed for any of the 40-pin versions of the Raspberry Pi. It includes a 24-bit DAC, capable of converting and then streaming music at 192KHz, while an on-board 3.5mm jack makes it easy to connect your favourite pair of headphones and listen to your tunes.
2 Audio Injector Sound Card
This Pi board and sound card hybrid adds both audio input and output to your Pi. There are physical modules for seamless control and it removes the need for complicated audio control software. Thanks to its clever build, the Audio Injector can also be used in existing audio setups.
3 RasPiO Analog Zero
The Analog Zero fits any 40-pin Raspberry Pi, adding full analog input technology to the board. Users can experiment with the Analog Zero to make projects like a weather station and thermometer, or convert it to become a safe and accurate voltmeter.
4 RTK.GPIO
A relatively simple to use piece of kit that adds a Pi GPIO header to any desktop computer or laptop. Together with the accompanying software, users will be able to manage their Raspberry Pi equipment with it, or use the device to add an extra GPIO header to their Pi.
5 UDOO X86
Okay, so the UDOO X86 will be competing against the Raspberry Pi 3, but it claims to have ten times its power. It includes an Intel Quad-Core processor, six-axis accelerometer, gyroscope and Bluetooth Low Energy connectivity.
OpenSource
Your source of Linux news & views
STEAM
Linux’s Steam integration has never been better Users can now run the Steam client in 32-bit mode While Steam has long been the best way for Linux users to have access to a huge library of Linux-compatible games, its numerous faults have been widely documented both officially and through a number of user forums. One of the key problems has been trying to get Steam to run in 32-bit mode, but a new tool may have just fixed the issue. Linux Steam Integration was created by the
Above Get Steam to run in 32-bit mode on your Linux machine
team behind the highly successful Solus OS, with the sole aim of solving the issue of the Steam runtime not working correctly on common distributions. In an official statement, founder Ikey Doherty said: "Linux Steam Integration, or LSI, is a configurable shim I've developed to solve the issue of the Steam runtime. With this shim, one may force Steam to run in 32bit mode, to combat issues such as seen with the latest CS:GO 64-bit update, as well as to enable or disable the Steam runtime at will." The project is open-source, with all the necessary repositories included in the Solus
distro ready to be installed, examined, and enjoyed. But it also works on many other leading distributions, and all the necessary documentation for installation can be found through the official Solus site. While the LSI tool is a start, it’s likely we’ll see further developments from the Solus team to help improve the usability of Steam on Linux. “This will be expanded in future to address further limitations in the Steam client, in order to bring per-game runtime configuration settings, as well as steadily removing Steam’s requirement for its own SDL libraries,” added Doherty.
WEB
Amazon debuts new runtime application model
The highly regarded Amazon Web Services has announced a new open-source project aimed at creating a full, serverless runtime application environment that can be of benefit to app developers, coding enthusiasts and end users. Named Flourish, the project looks to highlight the simplicity of serverless environments by showing how requests can be scaled so as to not misuse server capacities. Other benefits include developers being able to code in their preferred language, thanks to the runtimes being open, and app providers not having to pay for unused computing power.
Speaking at the ServerlessConf in Brooklyn, Tim Wagner, GM of AWS Serverless Compute, said: “We are now in the frameworks and dev tools stage where the ecosystem is growing and enabling companies to start their own serverless journey. Through Flourish, in the next stage, we want to get to a mass of developers where there is a self-fulfilling cycle that is driving vibrant ecosystems.”
We are going to need more solutions at the application-inspired level
Wagner went into detail about how Flourish would work for developers, being partnered with ecosystems and all necessary repositories readily available for download on GitHub, but he is also quick to point out that there’s still a way to go yet, and the concept will need a lot of outside help: “As we go up the stack, we are going to need more solutions at the application-inspired level, for example, splitting up a video, and running facial analysis. We will need scatter-gather paradigms.” Flourish will be launched as a project on GitHub in the coming weeks, where documentation will be freely available for anyone to access.
UBUNTU
Portable applications now available in Ubuntu 16.04
An all-new type of application package is available for Ubuntu users
Portable applications have carved a niche market for themselves within certain Linux groups for being lightweight and remarkably quick to run, but until now, they've failed to hit the mainstream market due to certain compatibility issues and installation problems for some users. Portable app pioneer Orbit Apps may have broken this mould, thanks to the announcement that its ORB portable app packages are now fully compatible with Ubuntu 16.04. We can also confirm that all flavours of Ubuntu will work with portable apps, including Lubuntu, Xubuntu and the increasingly popular Ubuntu MATE. ORB is an open-source package format, enabling users to use apps that can be run on-the-fly, without the need for administrator privileges. Through the free ORB Launcher software, ORB also grants capabilities for users to transfer these specialised apps to USB sticks, including all saved settings and data. Then it's a case of connecting the USB stick with any system running Ubuntu 16.04 and your apps will be good to go with no installation necessary. Currently, there are 37 portable applications available as portable packages, including the likes of Firefox, VLC and LibreOffice to name just a few. Over the coming months it's believed that this number will be close to the three-figure mark.
GIT
Git 2.8.3 update ushers in improved command support
Over 20 improvements in the latest Git update
Git, the popular source code management system, has received its third point release. Version 2.8.3 brings numerous bug fixes and over 20 improvements to the stable 2.8 branch that is being used by the majority of the program's 12-million strong user base. Within the update, users will find fixes and enhancements to many core Git commands, including git send-email, git push, git submodule, git config, git replace and git format-patch. On top of this, there are some noticeable performance improvements, with better support for the CRAM-MD5 authentication method on board and ready to use. There's also an array of small improvements to many of Git's core test scripts and submodules, which help eradicate many of the problems users were having. Perhaps what will catch the eye of avid Git users are the changes to integrate with the recently released OpenSSL 1.1.0 update, allowing for better, more robust security that's also highly customisable. For a full changelog for Git 2.8.3, head across to git-scm.com.
DISTRO FEED
Top 10
(Average hits per day, 30 April – 31 May)
1. Linux Mint 2,923
2. Debian 1,926
3. Ubuntu 1,596
4. Manjaro 1,231
5. openSUSE 1,213
6. Fedora 1,182
7. CentOS 1,009
8. Zorin 918
9. Elementary 849
10. Arch 782
This month ■ Stable releases (16) ■ In development (9) The CentOS 6.8 release has been received well by users. It features a number of fresh changes, including a combined repository for installation.
Highlights
ElementaryOS
The lightweight Ubuntu-based desktop has carved a following for its minimal take on the traditional desktop formula. Its desktop environment, Pantheon, is one of the best options out there for those who really like to customise core features.
Zorin
Zorin is still riding the crest of the wave thanks to its OS 11 update a few months back. Its range of pre-installed applications now include a new contacts manager and an alternative to VLC.
Manjaro
Linux enthusiasts are itching to get their hands on Manjaro’s upcoming 16.06 update, promising a new graphical configuration module for managing kernels and an improved package manager.
Latest distros available: filesilo.co.uk
INTERVIEW CHRIS BUECHLER
pfSense
Just how safe is your desktop? Chris Buechler discusses why pfSense is the leading firewall distribution for securing both large and small networks
Chris Buechler
Chris Buechler is one of the founders of the pfSense project. Not only has he been an instrumental part in the development of the distribution, but he’s also written pfSense: The Definitive Guide; a must-have book for budding pfSense users.
Could you give us an overview of what pfSense is? What are its key features? With over 350,000 active users, pfSense is a popular open source firewall and router distribution that helps you secure networks, both large and small. IT administrators familiar with commercial firewalls will quickly understand how to configure all of the packages and components using the web interface. Those who are just getting started with security will love the short learning curve and rich, easily understood feature set. One of the key features of pfSense is its stateful firewall. The stateful firewall maintains information on open network connections. Monitoring the state of each connection allows all kinds of granular control on a per-rule basis, and provides multiple options to secure a wide variety of protocols. Finally, it provides tools to support connectivity goals, such as high latency links (think satellite connections), limiting the number of simultaneous client connections, limited memory configurations, etc. Other features of pfSense include Network Address Translation (NAT), high availability configurations, including syncing state between multiple instances,
multi-WAN and load balancing, VPN connectivity with IPsec and OpenVPN, a PPPoE server, dynamic DNS, DHCP server and relay, and of course expansive reporting and monitoring. We often joke that pfSense is the kitchen sink of firewall and router distributions. pfSense is also extensible. There are several free packages that are available to help you roll-your-own IDS/IPS server, set up a transparent proxy server, block IP addresses using BGP feeds, and more. The most popular packages are Snort, Squid, and SquidGuard.
How does pfSense compare to other leading commercial firewalls? Does it do things differently? Of course, every commercial firewall is slightly different. The functions and features of pfSense are on par with all commercial firewalls, but when you dig a little deeper there are two significant differences. The first of these is that total cost of ownership is lower with pfSense. Since pfSense is open source, there are none of the annual licensing fees traditionally demanded by commercial firewall vendors. The other differentiator is the open source community that has formed around pfSense. This community is full of incredibly smart people who like to figure things out, and then share what they’ve learned. The pfSense forum, documentation wiki and YouTube are just three of the goldmines of information available for and about pfSense. How customisable is the software? Can users tailor it to their own hardware? Since pfSense is open source, things are very customisable for those who want to raise the hood. Several large companies have incorporated pfSense in their own commercial products to enable access control, VPN tunnelling, load balancing and multi-WAN bonding. On a more individual level, with the release of pfSense version 2.3, graphical themes are based on CSS so you can personalise the look of the web GUI as much or as little as you want. Beyond tailoring the look and feel, by tailoring the built-in options and available packages, you can mix and match to meet your specific firewall needs. Configuration is easy and there are wizards to help you with most of what you’ll need while getting started. You don’t have to be a hacker or IT god to use pfSense. End users can run pfSense on a wide variety of x86 platforms, from repurposed PCs to modern servers. We have recently tweeted out previews of our new hardware platform for very small installs, and you’ll also see some new large Xeon servers on our website. As our customers have different requirements, we provide a range of desktops and servers to match. Are there any specific hardware requirements for running the software? For small installations and home users, we recommend a 64-bit Intel CPU with a minimum of 1 GB RAM. We recommend increasing these resources as your demands increase. Overall system requirements depend on the amount of traffic flowing through the system, how many simultaneous connections are required, the severity and types of external attacks which are occurring, which features are in use and which additional packages, if any, are loaded. We are happy to provide a recommendation based on specific requirements, just get in touch. We develop and test primarily on Intel C2000 (Rangeley Atom) systems sold by Netgate. The pfSense software available on Netgate systems is commercially
Choosing the right hardware
While the pfSense software can be installed on a wide range of existing hardware, it is currently supported only on x86 and x86-64 architectures. With this in mind, it's certainly worth checking out the hardware built by pfSense itself. The top of the crop, and one of pfSense's bestsellers, is the SG-2440 Security Gateway appliance. It features a dual-core Intel Atom processor, with Intel's QuickAssist technology also on board to help support a high level of I/O throughput and optimal performance per watt. For the casual user, the SG-2550 can be used to completely configure or rebuild an existing firewall, LAN or WAN router. Plus, with seamless connections to anyone in your business, it's easy enough to use the pfSense software to closely monitor your SG-2440 and quickly highlight anything malicious it manages to quarantine.
supported. Netgate systems are made in the USA and have AES-NI capabilities for faster VPN connections. These systems are deployed on all seven continents in all kinds of situations. (Yes, even Antarctica!) We also have cloud and virtual options for Amazon AWS, Microsoft Azure, and VMware. AWS and Azure are free to get started and these options can provide cost-effective site-to-site tunnels, from businessto-cloud and even inter-region between Amazon or Azure regions. You also list certain recommended hardware through the pfStore, are they specially tailored to work with the pfSense software? Yes. We have custom-built hardware tuned to maximise the capabilities of pfSense. The hardware is designed for high reliability, low noise, and low power consumption while taking advantage of all that pfSense has to offer. Buying directly from us or our worldwide
Go back to school
pfSense is so vast in its feature set that the team behind it has even launched its own tuition program to help budding users to get to grips with the advanced distribution. pfSense University offers an array of in-depth courses to help increase your overall knowledge of their products and services. Whether it’s in the form of improving your knowledge of security procedures, or learning the basics of network protection, you’ll find a course for you. Through the official pfSense site, users can find a full curriculum of each course and an enrolment sheet to get started.
partner network is the fastest way to get your firewall up and running with pfSense. Since the platforms we sell are the platforms we use in our test environment, this also ensures your exact hardware has been tested by us in advance of every release. The pfSense professional support and engineering team along with our partner network of VARs, MSPs and resellers are all focused on working with our customers to provide professional solutions to network security problems. We're so excited to have partners around the world who understand both local requirements and the capabilities of both pfSense software and the Netgate platforms. It's an ideal combination. The newest Netgate platform is a micro-firewall with two gigabit Ethernet connections and an ARM chipset. We're pulling out the stops to lower the price point for a world class firewall that is within reach of everyone who needs network security software. This is our first real effort on ARM, and we are very excited. We are targeting September 2016 for availability.
How secure is the software? How efficient is the software at finding vulnerabilities and removing them?
Let me start by saying that the pfSense team isn't new to securing networks. The project started over twelve years ago, and has been evolving ever since. Not only do we have a team of developers with decades of experience in operations, networking and security, we also have a large, worldwide community constantly reaching for new solutions to stop the ever-evolving threats. The pfSense project is always cutting edge because of this community and their contributions. All of this adds up to a great firewall that is best-in-class at detecting and blocking threats to your network.
Could you give us an insight into your training program, pfSense University, and what that entails for those interested? We offer two days of in-depth instruction either online or in a classroom setting. Both settings offer realworld scenarios with hands-on labs in a live network environment. These classes are taught by network professionals with years of experience to facilitate the best possible learning experience. The class is split into nine sections, each filled with theory, best practices, and detailed configuration instructions, and followed by individual labs where each student can put into practice what they just learned. Each student has their own dedicated lab environment, simulating a typical headquarters network including high availability multiWAN, and a LAN and DMZ, a branch office, and a host external to the network on the internet. We cover a lot of ground in two days! More information about our training can be found at https://pfsense.org/university/. There is also an amazing amount of community-driven information on the net. There are “how-to” recipes, videos, an active sub-reddit and of course our forum at https://forum. pfsense.org/ with information on every topic, from traffic shaping to OpenVPN to high availability failover configurations. What is one thing you would say to someone who has never tried pfSense? Download it and give it a try! While we add a few wizards into the version we install on the Netgate systems, the free-to-download Community Edition has all the functionality of the factory version. You can kick the tyres and take it out for a test drive, with no obligation or commitment. You can also use the free version on Amazon AWS or Microsoft Azure. You don’t have to tell us your name or give us an email address to download it. There’s no license or activation code. So what have you got to lose? Try it out, join the pfSense community, and contribute to making networks more secure.
OPINION
The kernel column
Jon Masters summarises the closing of the Linux 4.7 development merge window and ongoing work toward new features for future kernels
Jon Masters
is a Linux-kernel hacker who has been working on Linux for some 19 years, since he first attended university at the age of 13. Jon lives in Cambridge, Massachusetts, and works for a large enterprise Linux vendor, where he is driving the creation of standards for energy efficient ARM-powered servers
Linus Torvalds announced the first Release Candidate of what will become the 4.7 Linux kernel. According to Linus, the new kernel “isn’t a huge release” but is “certainly big enough”. What Linus means is that, while 4.7 doesn’t include whole new subsystems (such as a new filesystem, or architecture port), or major disruptive feature additions, it does include the usual mix of driver changes, architecture updates, networking, and “misc” features. The latter happens to include a fairly significant – but internal to the kernel – set of changes to the VFS (Virtual FileSystem layer) of interest to kernel developers working on filesystems. With the release of Linux 4.7 comes the implied closing of the ‘merge window’ – the period of time during which disruptive changes are allowed to the kernel in any development cycle – and the inevitable period of stabilisation prior to final release. Deciding to ‘spice things up’, Linus released the RC1 kernel earlier in the day than his customary practice of late Sunday afternoon Pacific time. This was to intentionally catch out anyone planning to sneak in last-minute patches, and to encourage subsystem maintainers to post their proposed changes earlier in the merge window next time around. Linus subsequently bent his own rules of the road with the release a week later of 4.7-rc2, in which he mentioned that he had pulled in a last-minute cleanup (to the pseudo terminal code) for which he “would probably have shouted at some submaintainer that tried to call that cleanup a late fix”. The fix Linus mentioned was entitled “Kill the DEVPTS_MULTIPLE_INSTANCES config option”. It relates to how the kernel handles pseudo-terminals (ptys), which are literally the output devices used for non-graphical programs. Every time you remote login to a server via ssh, you open a new pty to run your interactive shell with that remote system. These terminals are represented by files in the /dev/pts directory on the server. This directory doesn’t really exist on disk (it’s an in-memory pseudo-filesystem, representing information about available ptys on the system) and its contents are dynamically populated based upon mounting /dev/pts at boot time. The kernel used to have explicit special handling that ensured
each time this pseudo-filesystem was mounted, a “distinct instance” was created in order to prevent bad interactions between similar mounts set up for containers, in which container mount permission changes could impact the host. The late fix removes the need for this special kernel config option. If things remain on track, we should see the release of Linux 4.7 very soon after next month’s issue. We’ll have a full summary of the remainder of Linux 4.7, as well as upcoming features targeting 4.8, next time.
CPU Frequency Governors
One of the new features called out in Linux 4.7 is a new CPU performance (frequency scaling) “governor”, known as “schedutil”. CPU Governors are collections of software algorithms (contained within a loadable kernel module) that make decisions as to when the CPU cores within a system will transition to different dynamic frequency (speed) settings. Of course, this happens within certain constraints – a given platform can only run within the design limits of the underlying hardware after all – and includes considerations of available (battery) power, and the thermal design power (TDP) of a system which limits its upper frequency at system run-time. Frequency transitions within the microprocessor cores of a system are today managed cooperatively between the hardware and software, using a concept known as OSPM. Operating system power management allows a suitably enabled OS to make requests into the underlying hardware platform to perform frequency transitions between defined states. Traditionally, these are fairly coarse grained, but today can be much finer in nature. The exact mechanism used for runtime frequency transition varies between the architectures (for example, ARM can use ACPI – Advanced Configuration Power Interface – or the ARM specific PSCI – Power State Coordination Interface – while x86 typically uses only ACPI). Your x86 laptop uses special platform callbacks provided by the system’s ACPI firmware in the form of platform bytecode that is interpreted by the kernel’s ACPI interpreter when it wants to make power transitions – thus Linux doesn’t have to know the specifics underneath.
Contemporary Linux systems have governors with names such as "ondemand", "performance", and "powersave" that are fairly coarse hammers used to switch the CPU frequency once certain overall system load metrics have been reached. Indeed, you've perhaps used a tool like "powertop" to eke more performance out of your laptop and followed its advice to switch CPU governor. What the existing governors don't do, however, is to specifically collaborate with the underlying Linux process/task scheduler in making the determination whether a CPU frequency transition is warranted based upon the current needs of a running program. Indeed, it might be the case that a system can save more power overall by quickly throttling up the processors to get some work done, then throttle down. Existing governors look at certain overall system metrics based upon a much larger window of time than the scheduling windows (periods of time during which CPU time is allocated) of individual programs. In large part this is because traditional CPU frequency states were much coarser, coupled with the traditional latencies (period of time to react to frequency requests) being much higher so as to render continuous transitions impractical. This is no longer true of contemporary systems, which may have many dozens of possible frequencies that can be quickly adopted with only a very minimal amount of delay. Indeed, this is how your cellphone is able to get "all-day" battery life through continuous CPU frequency scaling adjustments (combined with many other features; there's no one solution there). The newly introduced "schedutil" CPU performance governor interacts directly with the Linux CFS (Completely Fair Scheduler) to determine during each sub-second scheduling window what the likely needs of a task are, in order to request any change in frequency of the CPU to respond to these needs
immediately. It makes a frequency selection for the next period of time using a simple formula based upon current frequency, and the ratio of utilisation to maximum possible frequency. If the underlying CPU supports “fast frequency switching”, the system will make these adjustments very often, while hardware not supporting fast switching relies upon an alternative codepath which will adjust frequency less often. It will be interesting to see how well this new governor works in practice.
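As a rough, hedged illustration of how governors are exposed to user space (these are the standard cpufreq sysfs paths; schedutil will only be listed once you are running a kernel that is new enough and built with it):
$ cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor # the governor currently driving CPU 0
$ cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors # governors this kernel offers
$ echo schedutil | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor # switch every core over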
Announcements
John Kacur announced version 1.0 of the "rt-tests" toolset. This is a set of utilities designed to aid in validation and verification of the performance characteristics of the Preemptive Real Time (preempt-rt, or just "real time") kernel patches. Real-time Linux kernels are used by stock trading companies, industrial automation vendors, and myriad others with requirements around maximum system response latencies. The rt-tests assist in ensuring this by exercising common usage scenarios. Version 1.0 locks the ABI down as stable. At the same time, the latest version of the Real Time patchset (4.6.1-rt3) was released. You can find the latest rt-tests here: https://git.kernel.org/pub/scm/utils/rt-tests/rt-tests.git and the new linux-rt patches here: git://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-rt-devel.git
Feature
Security Toolkit
SECURE YOUR SYSTEM
Take complete control of your system and network security with the best pro security tools Linux has to offer
Key Security Concerns
Issues for Linux Admins
Just as every account on your system is a potential path for a password cracker, every network service is a road to it Securing your Linux system means first restricting access to the user accounts and services on that system. After that, security means checking that no one has gotten around the defences that you have set up. Ubuntu, Debian and other systems based on those Linux distributions are designed to be secure by default. That means that there are no user accounts with blank passwords, and that most network services are off by default. While many tools are available for securing your Linux system, the first line of security starts with securing the user accounts on your system and the services that run on your system. Commands such as useradd, groupadd and passwd are the standard tools for setting up user and group accounts. Because most serious security breaches outside your organisation can come from intruders accessing your system on public networks, setting up firewalls is important for any system connected to the internet. The iptables facility provides the firewall features that are built into the Linux kernel. During most Linux installation procedures, you are asked to assign a password to the root user. Then you might be asked to create a user name of your choice and assign a password to that as well. If you need to have multiple administrators for a system, it is good not to share a single root password among many people. Instead, make use of sudo to give root access but ensure that users will be authenticated with their own password. Once Linux is installed, you can use commands or graphical tools to add more users, modify user accounts and assign and change passwords.
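As a minimal sketch of those first steps (the user name 'alice' and the SSH port are examples only, and the group that grants sudo access is 'sudo' on Debian/Ubuntu but 'wheel' on many other distros):
$ sudo useradd -m -s /bin/bash alice # create an administrator account with a home directory
$ sudo passwd alice # give it a strong password
$ sudo usermod -aG sudo alice # grant root access via sudo instead of sharing the root password
$ sudo iptables -A INPUT -i lo -j ACCEPT # always allow loopback traffic
$ sudo iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT # allow replies to outgoing connections
$ sudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT # allow incoming SSH
$ sudo iptables -P INPUT DROP # drop everything else by default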
The problem with security advice is that there is too much of it and that those responsible for security certainly have too little time to implement all of it. The challenge is to determine what the biggest risks are and to worry about those first and about others as time permits.
Weak passwords
People's choice of passwords continues to pose a huge security risk, and weak passwords are a general security concern in the Linux environment. Make it a practice to use SSH public keys to allow an account on one system to ssh into another system without supplying any password. At the very least, pick a moderately hard-to-crack passphrase for your SSH key.
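A hedged sketch of what that looks like in practice (the key type, comment and host name are placeholders):
$ ssh-keygen -t ed25519 -C "alice@laptop" # generate a keypair; set a passphrase when prompted
$ ssh-copy-id alice@server.example.com # install the public key on the remote account
$ ssh alice@server.example.com # subsequent logins use the key rather than the account password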
Open network ports
Just as every account on your system is a potential path for a password cracker, every network service is a road to it. Disable and uninstall services you do not need. Use the netstat -atuv command to find out which services are running. If there are services listed that you do not want to be provided by this box, disable them. An open port means there is a program listening on it, and every listening program is a potential target for exploitation.
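For example (the service name here is only an illustration, and the systemctl line assumes a systemd-based distro):
$ netstat -atuv # list TCP and UDP sockets, including everything in the LISTEN state
$ sudo ss -tulpn # a newer alternative that also shows which process owns each listener
$ sudo systemctl disable --now rpcbind # stop and disable a service you decide you do not need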
Old software versions
Linux and UNIX are not perfect. People find new vulnerabilities every month. Your challenge as an administrator is to keep up with the changes. One of the advantages of Linux is that when a fix is issued, it is very quick to install. Thus, it is always advisable to run the latest version of the software you use.
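On a Debian or Ubuntu system, keeping up might look like this sketch (other distros have their own equivalents, such as dnf or zypper):
$ sudo apt-get update && sudo apt-get upgrade # pull in the latest packaged fixes
$ sudo apt-get install unattended-upgrades # optionally let security updates install themselves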
Insecure programs
The use of insecure programs (such as FTP, rsh, NFS, etc) in other than carefully controlled situations, and failure to configure other programs properly, continues to be a major security concern. It is all too common for system administrators to configure programs incorrectly, and many programs are secure only if configured properly.
Stale accounts
It is important to understand that each account is a possible entry point into your system. As a stale account's password will never be changed, it leaves a standing hole that an attacker can quietly exploit. Also, make it a practice to disable all unnecessary accounts that exist on the system.
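One possible way to hunt down and neutralise such accounts ('olduser' is a placeholder, and the nologin path varies between distros):
$ sudo lastlog | grep -i never # accounts that have never logged in
$ sudo usermod -L -s /usr/sbin/nologin olduser # lock the password and remove the login shell
$ sudo chage -E 0 olduser # expire the account immediately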
Key features
• Vulnerability analysis – open source assessment tools, database management tools
• Web applications – web vulnerability scanners, database exploitation tools
• Sniffing and spoofing – network and WAN sniffers, network spoofing tools
• Information gathering – network scanners, traffic analysers
• Reverse engineering – debuggers
• Stress testing – network stress testing tools
• Forensics – forensic analysis tools
• Password attacks – offline attack tools, online attack tools
The complete security toolkit Kali Linux A security-focused operating system that provides a vast array of tools Kali Linux is a Debian-derived Linux distribution designed for digital forensics and penetration testing. Kali Linux is preinstalled with over 300 penetration-testing programs, and they include nmap (a port scanner), Wireshark (a packet analyser), John the Ripper (a password cracker) and many more. The best part of Kali Linux is that, with it, you can test the vulnerabilities of your network and then take steps to secure it. Kali Linux is a comprehensive penetrationtesting platform with advanced tools to identify, detect, and exploit the vulnerabilities uncovered in the target network environment. With Kali Linux, you can apply the appropriate testing methodology with defined objectives and a scheduled test plan, resulting in a successful penetration-testing project engagement. Kali Linux can be used for many things, but it probably is best known for its ability to penetration test, or “hack,” WPA and WPA2 networks. There is
Aircrack, Airbase and ARPspoof. Besides these, Kali has numerous other tools that are among the best security and penetration testing tools ever found. Kali Linux, being a free and open source software, is easily obtainable, and is used by both amateurs as well as professionals. Professionals use it for analysing vulnerabilities in information systems and networks, for forensic analysis, for finding security exploits, and for applications testing.
Protect against viruses
ClamAV
Identify and fix security weaknesses
ClamAV is a popular open source antivirus package that has found its place in various situations such as endpoint security and e-mail scanning. It supports most popular UNIX operating systems (GNU/Linux, Solaris, FreeBSD etc) and will scan your server for viruses and security weaknesses. The best thing about ClamAV is that once a potential security threat is identified, it will prompt you on how to resolve that issue. Once you install and compile ClamAV, you can use the available unit tests to check whether the compiled binaries work correctly on your system. The Clam daemon, clamd, is a multi-threaded daemon that uses libclamav to scan files for viruses. The end user configures the daemon through the clamd.conf configuration file. Some of the commands recognised by this daemon are PING, which checks the state of the daemon; SHUTDOWN, which performs a clean exit; and SCAN file/directory, which scans a file or a directory respectively.
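A hedged example of a manual scan (the clamav-daemon service name is the Debian/Ubuntu one and differs on other distros):
$ sudo freshclam # update the virus signature database
$ clamscan -r --infected /home # recursively scan /home and print only infected files
$ sudo systemctl start clamav-daemon # or run clamd and use clamdscan for faster, daemon-backed scans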
Stay behind a firewall IPFire A free Linux distribution that acts as a router and defence
When you are creating a new firewall rule, you need to be clear about the source and destination IP addresses (or hosts) and, as with any other firewall, there can be three possible actions for any rule – ACCEPT, DROP and REJECT. Firewall rules are grouped into three sections – forwarding rules, incoming connections and outgoing connections. You can enter a number, at which position the new rule will be added. As the firewall ruleset is evaluated from top to bottom, the order of the rules matters a lot. The rules in each group are processed from top to bottom. The first rule that matches (where source, destination and all other settings are equal with those in the packet that is currently being processed) is executed and all rules after that are not evaluated any more. With the help of a checkbox, you can activate a rule. You can enable the log option so that logs will be populated for debugging purposes. You can see these log messages in the /var/log/messages file on your IPFire system.
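As a quick sketch, once logging is enabled for a rule you can watch hits arrive from a shell on the IPFire box (the grep pattern is an assumption; match it to the prefix your rules actually log with):
$ tail -f /var/log/messages | grep -i drop # follow the log and pick out dropped packets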
Scan for rootkits
Chkrootkit
Has your system been compromised?
If you suspect your system has been compromised, download and build chkrootkit. This will help you detect rootkits that may have been used to take over your machine. Chkrootkit can be considered as a collection of tools that help in detecting the presence of rootkits. This can be very useful for Linux system administrators because it is a free and open source utility, it is available for various distros, and it has the ability to detect almost all the latest rootkits – contributions from open source contributors are helping to keep it up to date. Please note that chkrootkit uses C and shell scripts to perform a detailed process check, and scans system binaries to detect kit signatures. Bear in mind that chkrootkit is a detection tool rather than a removal tool: if it reports a rootkit, the safest fix is to rebuild or restore the machine from known-good media and backups. It also has a few algorithms that can report the behavioural trends of a possible rootkit, even if it is not yet officially supported.
01
Install chkrootkit
If chkrootkit is not already installed on your system, you can use the following command to install it:
$ sudo apt-get install chkrootkit

02
chkrootkit options
Chkrootkit comes with several options and system administrators can use these options in an efficient way to perform various operations. These can be used to see lots of data:
$ sudo /usr/sbin/chkrootkit -h
$ sudo /usr/sbin/chkrootkit -x | more

03
Detect rootkits
A system administrator can use this program to detect rootkits in their system. The following are some of the ways to use this tool. This lists the various tests that are supported:
$ sudo /usr/sbin/chkrootkit -l
This runs only the ps and sniffer tests, checking for a trojaned ps binary and for interfaces left in promiscuous mode:
$ sudo /usr/sbin/chkrootkit ps sniffer

It has the ability to detect almost all the latest rootkits

04
Use Quiet mode
In Quiet mode, chkrootkit only produces output when a binary appears to be infected. The following comes in handy when we need to use Quiet mode:
$ sudo /usr/sbin/chkrootkit -q

05
Run in a cron job
Chkrootkit is installed in /usr/sbin/chkrootkit. We need to specify this path in the cron line. To create a cron job for chkrootkit, we can use the following crontab entry:
0 3 * * * /usr/sbin/chkrootkit 2>&1 | mail -s "chkrootkit output of my server" me@mydomain.com
This runs chkrootkit once a day at 3am and sends its output to the specified e-mail address.

06
Interpret output
We also need to understand how chkrootkit typically displays output. Phrases such as "not found" or "not infected" are displayed in most cases. When a rootkit is found, or if the presence of a rootkit is suspected, the output highlights it with "INFECTED", or "The following suspicious files or directories have been found".
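Putting the pieces together, a small wrapper script can run the scan quietly from cron and only send mail when something looks suspicious. This is a sketch: the e-mail address is an example and the grep pattern simply matches the "INFECTED"/"suspicious" wording described above.
#!/bin/bash
# Run chkrootkit quietly and mail the output only if it flags anything
OUTPUT=$(/usr/sbin/chkrootkit -q 2>&1)
if echo "$OUTPUT" | grep -qiE 'INFECTED|suspicious'; then
    echo "$OUTPUT" | mail -s "chkrootkit warning on $(hostname)" me@mydomain.com
fi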
Store passwords securely
Vaultier
Keep your passwords organised and away from prying eyes
Password managers are apps designed both to help you keep your accounts more secure and to make it easier to use unique passwords for every site. A password management tool stores all your passwords in one place, and it is considered a great way for small business owners to manage several different accounts while keeping information secure. Vaultier is a commonly used password manager. It has gained popularity because of features such as easy user management – it's pretty simple to add and remove members. It controls access to all stored information with its permission system, and it offers a simple password generator that lets you create strong passwords very easily. Vaultier is also considered a solid option for large-scale password management.
You can identify the vulnerabilities that could allow an external hacker to access sensitive information
Scan for vulnerabilities
Nessus
Your system is vulnerable to more than viruses, attacks and password leaks
Nessus is a widely used vulnerability scanner that uses a web interface to set up scans and view reports. Nessus offers a robust set of features, has good community support and is free for home and personal use, with commercial editions available for business environments. Nessus uses a modular architecture consisting of centralised servers that conduct scanning, and remote clients that allow for administrator interaction. You can deploy Nessus scanning servers at various points within your network and control them from a single client. With this, you will be able to effectively scan segmented networks from multiple vantage points and conduct scans of large networks that require multiple servers running simultaneously. Nessus gives you a lot of choices when it comes to running the actual vulnerability scan: you can scan individual computers, ranges of IP addresses or complete subnets. There are over 1,200 vulnerability plugins in Nessus and you can use them to test for individual vulnerabilities or sets of them. You can identify the vulnerabilities that could allow an external hacker to access sensitive information, check whether systems in the network have the latest software packages, and set up configuration audits.
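Installation is a package download from Tenable rather than something in your distro's repositories. As a rough sketch on a Debian-based system (the exact package file name will differ with the version you download):
$ sudo dpkg -i Nessus-*-amd64.deb     # install the downloaded package
$ sudo systemctl start nessusd        # start the Nessus scanner service
Then point a browser at https://localhost:8834, create an administrative user and start defining scan policies.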
Search for predefined patterns
The patterns are defined as Snort rules. Snort can report its findings in a number of output formats, including syslog, tcpdump, database, alert full and unified. Unified output is considered the fastest logging method and hence is widely used.
Command line options
Snort comes with a variety of command line options. Some of the options can be specified in the config file instead of the command line. The general rule is that if you are just trying something out, specify the setting at the command line; if you are planning on keeping the setting for a while, set it in the config file.
Detect intruders
Snort
Monitor network traffic in real time
Snort scrutinises each and every packet to see if there are any dangerous payloads. It can also be used to perform protocol analysis, and content searching and matching. It can detect various types of attacks, such as port scans and buffer overflows. Snort is available on Windows, Linux, various UNIX flavours and all major BSD operating systems. It doesn't require you to recompile your kernel or add any software or hardware to your existing distribution, but it does require root privileges.

Snort examines network traffic against a set of rules

Snort is intended to be used in the most classic sense of a network intrusion detection system. All it does is examine network traffic against a set of rules, then alert the system administrator to suspicious network activity so that you can take appropriate action. You can configure Snort in three different modes: sniffer mode, packet logger mode and network intrusion detection mode. In sniffer mode, Snort will read network packets and display them on the console. In packet logger mode, it logs the packets to disk. In network intrusion detection mode, Snort will monitor the network traffic and analyse it against the rules defined by the user. Snort will then be able to take a specific action based on the outcome. It has a built-in real-time alert capability. Network intrusion detection systems are placed at certain points within the network so that you can monitor the traffic to and from all devices on the network. Once an attack or an abnormal situation is identified, an alert can be triggered so that corrective action can be taken as needed.
01
Install Snort
Execute the following command in your Linux terminal:
$ sudo apt-get install snort
Once the installation is complete, you can check for a successful installation by using:
$ snort -V

02
Program dependencies
A few programs are needed when you're going to run Snort. They are: apache2 for the web server, mysql-server for the database, php5 for server-side scripting, php5-mysql, php5-gd for graphics handling, and PEAR (the PHP Extension and Application Repository). Use apt-get to install all of the above programs:
$ sudo apt-get install apache2
$ sudo apt-get install mysql-server
$ sudo apt-get install php5
$ sudo apt-get install php5-mysql
$ sudo apt-get install php5-gd
$ sudo apt-get install php-pear

03
Required directories
We need to create a set of directories required for the successful running of Snort. Use the following commands to create them:
$ sudo mkdir /etc/snort
$ sudo mkdir /etc/snort/rules
$ sudo mkdir /var/log/snort
The snort.conf file controls everything about what Snort watches, how it defends itself from attack, what rules it uses to find malicious traffic, and even how it watches for potentially dangerous traffic that isn't defined by a signature. A very good understanding of what is in this file and how it can be configured is pretty essential to a successful deployment of Snort as an IDS in your environment. By default, this configuration file is located at /etc/snort. If the configuration is located somewhere else, then you need to specify the -c switch along with the file's location.

04
Execute Snort
To execute Snort, use the following command:
$ snort -c /etc/snort/snort.conf -l /var/log/snort/
If you want to print only the TCP/IP packet headers to the screen, you can use:
$ sudo /usr/sbin/snort -v

05
Generated alerts
A file called alert will be created in the /var/log/snort directory. This file contains the alerts generated while Snort was running. Snort alerts are classified according to the type of alert, and a Snort rule specifies a priority for an alert as well. This lets you filter out low-priority alerts to concentrate on the most worrisome. You can also run Snort as a daemon process by using the -D option:
$ snort -D -c /etc/snort/snort.conf -l /var/log/snort/
If you want to be able to restart Snort by sending a SIGHUP signal to the daemon, you will need to use the full path to the Snort binary when you start it.

06
Packet logger mode
In packet logger mode, the main idea is to record the packets to disk. Specify a logging directory and Snort will automatically switch into packet logger mode. A simple command to operate Snort in this mode is:
$ snort -dev -l ./log

07
Network intrusion mode
To enable network intrusion detection mode, use:
$ snort -dev -l ./log -h $HOME_NET -c /etc/snort/snort.conf
Note that snort.conf is the name of your rule file and HOME_NET is a variable whose value is defined in the configuration file. The rule set defined in the snort.conf file will be applied to each packet and a decision will be made as to whether an action needs to be taken. The default snort.conf references several other rules files, so it is a good idea to read through the entire snort.conf before calling it from the command line. If you are going to use Snort over a long period as an IDS, then do not use the -v switch on the command line, for the sake of speed.
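To get a feel for writing signatures of your own, you can append a simple rule to a local rules file and then ask Snort to validate the configuration before running it for real. This is a sketch: it assumes the directory layout created above and that your snort.conf includes a rules/local.rules file.
$ echo 'alert icmp any any -> $HOME_NET any (msg:"ICMP packet seen"; sid:1000001; rev:1;)' | sudo tee -a /etc/snort/rules/local.rules
$ sudo snort -T -c /etc/snort/snort.conf    # -T parses and tests the configuration, then exits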
Scan your ports
nmap
Monitor and secure your network
Nmap is a widely used open source tool that supports more than 15 scanning techniques. It can be used by network administrators as well as advanced users. In addition to securing your network, it can also be used for hacking. Nmap is considered to be the world's leading port scanner, and is a popular part of hosted security tools. Nmap as an online port scanner is able to scan your perimeter network devices and servers from an external perspective; that is, from outside your firewall. It can be used for security scans, to identify what services a host is running, to fingerprint the operating system and applications on a host, to discover the type of firewall a host is using, or to do a quick inventory of a local network. It is, in short, a very good tool to know.

01
Install nmap
It is relatively easy to install nmap on your Linux box:
$ sudo apt-get install nmap

02
Uninstall nmap
Removing or uninstalling nmap is a good idea if you are changing install methods. If you decide not to use nmap any longer and you care about the few megabytes of disk space it consumes, then remove nmap with:
$ sudo apt-get remove nmap

03
Uninstall nmap and dependencies
The following command can be used to remove the nmap package and any other dependent packages that are no longer required:
$ sudo apt-get remove --auto-remove nmap

Nmap is considered to be the world's leading port scanner

04
Scan a host's OS
The basic use of nmap is to scan any host and see what operating system it is running:
$ sudo nmap -O localhost
Nmap requires root privileges to run this type of scan. Also, as can be seen from its output, nmap provides a lot of data; you can go through the output to learn more about the operating system it's looking at.

05
Scan multiple hosts
You can use nmap to scan multiple hosts at a time. If we are using IP addresses, we can specify a range – like 10.x.x.1-6 – or if we have hostnames, we can specify them on the command line separated by spaces:
$ sudo nmap -O host1.com host2.com

06
Check open ports
Another important use of nmap is to discover the open ports on a network. If you run nmap with no options, it will scan for open ports and display the list of open ports as well as the services running on those ports.

07
Verbose option
If you need to capture more information, then the verbose option comes in handy. This can be done using the -v or -vv option. With these options, nmap provides more information about the various steps that are involved in nmap execution:
$ sudo nmap -vv localhost

08
What services are running on a host?
To see what services are running on a host, you can use the -sV option. With this option, nmap will do a more aggressive scan and will find out what versions of services are running on a specific host:
$ sudo nmap -sV localhost

09
Nmap GUI
Zenmap is the nmap GUI. This is a free and open source application that aims to make nmap easy to use for beginners, while also providing advanced features for experienced nmap users. The following commands can be used to install and run Zenmap:
$ sudo apt-get install zenmap
$ sudo zenmap
You can use this GUI if you need to check out any scanning-related activities.
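Putting several of the options above together, a quick inventory of a small home network might look like the following sketch; the address range is an example, so substitute your own subnet:
$ sudo nmap -sn 192.168.1.0/24                   # ping scan only: list which hosts are up
$ sudo nmap -sV -O -p 22,80,443 192.168.1.0/24   # fingerprint OS and service versions on common ports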
Encrypt and monitor network traffic
OpenWrt
Advanced ways to use OpenWrt firmware on your router and network
OpenWrt is a Linux distribution for your router. It offers a built-in package manager, and with its help you can install packages from a software repository. There are a lot of ways that you can use OpenWrt, but the following are some of the most useful for locking down network security. You can use the SSH server for SSH tunnelling – OpenWrt includes an SSH server and with this, you can access your terminal. If this SSH server is exposed to the internet, then you can access it remotely, and SSH tunnelling can be used to forward traffic over an encrypted connection. You can set up a VPN – since SSH tunnelling works in a similar way to a VPN, you will also be able to set up a proper VPN on your OpenWrt router. You can also run server software – as mentioned earlier, OpenWrt's software repositories contain packages that allow it to work as a web server, for example. Most importantly, you can capture and analyse network traffic – use tcpdump to log all the packets travelling through the router, and that traffic can then be analysed using a tool such as Wireshark. If you decide to install OpenWrt, you are essentially replacing your router's built-in firmware with the OpenWrt Linux system.

SSH tunnelling can be used to forward traffic over an encrypted connection
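As a sketch of the capture workflow on the router itself – opkg is OpenWrt's package manager and br-lan is the default LAN bridge on most builds, so check both against your own device:
# opkg update
# opkg install tcpdump
# tcpdump -i br-lan -w /tmp/lan.pcap    # capture LAN traffic to a file
You can then copy /tmp/lan.pcap to your desktop with scp and open it in Wireshark for analysis.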
Troubleshoot data security issues
Wireshark
Inspect data with a packet sniffer
Wireshark is one of the best open source network packet analysers available today. This tool captures packets in real time and displays them in a human-readable format. With the help of Wireshark, you can inspect individual packets and examine the data passing through a network interface (your Ethernet or LAN connection, for example). The flexibility and depth of inspection make this valuable tool well suited to analysing security events and troubleshooting network security device issues. Wireshark can run on a variety of operating systems, such as Ubuntu Linux, CentOS and Windows. The units of data that Wireshark inspects are called frames, and these frames contain packets. Wireshark has the ability to capture all of the packets that are sent and received over your network, and it can decode them for analysis. It is a very useful tool for troubleshooting network or network security issues and for debugging protocol implementations.
Command menus: a standard pull-down menu located at the top of the window
Packet display filter field: a field into which a protocol name or other information can be entered in order to filter the information displayed in the packet-listing window
Packet-listing window: displays a one-line summary of each packet captured
Packet-header details window: provides details about the packet selected in the packet-listing window
Packet contents window: displays the entire contents of the captured frame
01
Install dependencies
Before installing Wireshark on your system, you need to ensure that certain packages are available. They are GTK, libpcap and tcpdump
$ sudo yum install gtk
$ sudo yum install libpcap
$ sudo yum install tcpdump
02
Install Wireshark
After installing all the required packages, you can go ahead and install Wireshark on your Linux system with:
$ sudo yum install wireshark
If you are interested in having the GNOME Wireshark GUI, then you will have to install the wireshark-gnome package as follows:
$ sudo yum install wireshark-gnome
Once you have successfully installed Wireshark, it's time to run it:
$ sudo wireshark
When you run the Wireshark program, the Wireshark graphical user interface will be displayed.
03
Edit interface settings
Wireshark lists all the available interfaces and, as an end user, you have the freedom to edit any of these interface settings. You can then select the desired capture filter specific to an interface.

04
Capture packets
Once you click the interface name, you can see that packets will start to appear in real time. Wireshark captures each packet sent to or from your system. Whenever you want to stop capturing traffic, click the Stop Capture button.
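The same capture engine is also available from the command line as tshark, which is handy on a headless server. This is a sketch; the interface name and filters are examples, and tshark may be packaged separately from the GUI depending on your distro:
$ sudo tshark -i eth0 -f "tcp port 80" -w /tmp/web.pcap   # capture web traffic to a file
$ tshark -r /tmp/web.pcap -Y "http.request"               # read it back, showing only HTTP requests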
Secure remote logins
OpenSSH
Connect to a PC remotely and securely
OpenSSH is a very popular connectivity tool when it comes to remote login. It encrypts all traffic to eliminate eavesdropping, connection hijacking and similar attacks. For remote operations it provides ssh, scp and sftp. Key management is done with ssh-add, ssh-keygen and ssh-keysign. On the service side, it offers sshd and ssh-agent. OpenSSH is a freely available version of the SSH protocol family of tools.
01
Install OpenSSH
Use the following commands to install the OpenSSH server and client:
$ sudo apt install openssh-server
$ sudo apt install openssh-client

02
Configure OpenSSH
By editing the main configuration file (/etc/ssh/sshd_config), you can configure the default behaviour of the OpenSSH server application. The man page for sshd_config details the various configuration directives used in this file. Once you make changes to sshd_config, you need to save the file and restart the sshd server application so that the changes take effect:
$ sudo systemctl restart sshd.service

03
OpenSSH key generation
SSH keys allow authentication between two hosts without providing a password. SSH key authentication uses a pair of keys: a private key and a public key. Use the following command to generate the keys:
$ ssh-keygen -t rsa
By default, the public key is saved in ~/.ssh/id_rsa.pub, while the private key will be in ~/.ssh/id_rsa.

04
Disable password authentication
This can significantly improve your security. When you disable password authentication, it will only be possible to connect from computers you have specifically approved, so it's highly recommended. In order to achieve this, you just need to change the value of the PasswordAuthentication directive to no in your configuration file. Do not forget to save your changes and then restart your SSH server so that your changes take effect.

05
Specify accounts
Allowing or denying SSH access for a specific user or group of users can massively improve your security if users with poor security practices do not need SSH access. In order to allow only the users jane and eric to connect to your computer using SSH, you can simply add the following line to the configuration file:
AllowUsers jane eric

06
Check the daemon
You can check whether the SSH daemon is running or not by using the following command:
$ ps -A | grep sshd
If it is running, you can check whether it is listening for incoming connections using:
$ sudo ss -lnp | grep sshd

07
Log in from your own computer
After verifying that sshd is running on your system, try logging in from your own computer using the following command:
$ ssh -v localhost
The above command prints a lot of debugging information and will try to connect to your SSH server. You will be prompted for your password, and you should get another command line once you type your password in.
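Tying steps 03 and 04 together: before you disable password authentication, copy your public key to the server and confirm that key-based login works. A sketch with an example hostname:
$ ssh-keygen -t rsa
$ ssh-copy-id user@nas.example.com   # appends ~/.ssh/id_rsa.pub to the remote authorized_keys file
$ ssh user@nas.example.com           # should now log in without asking for the account password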
ENJOY MORE OF YOUR FAVOURITE LINUX MAGAZINE FOR LESS WHEN YOU SUBSCRIBE!
SUBSCRIBE* & SAVE UP TO 37%
WIN £5,400 WORTH OF POSTGRESQL TRAINING
www.linuxuser.co.uk
*US subscribers save up to 40% off the single issue price.
See more at: www.greatdigitalmags.com
Every issue packed with…
• Programming guides from Linux experts
• Breaking news from enterprise vendors
• Creative Raspberry Pi projects and advice
• Best Linux distros and hardware reviewed
Why you should subscribe...
• Save up to 37% off the single issue price
• Immediate delivery to your device
• Never miss an issue
• Available across a wide range of digital devices
Subscribe today and take advantage of this great offer!
Download to your device now
Tutorial
Tam Hanna
is bored to death by repetitive, mind-numbing work. The various automation possibilities in the Bash shell are a natural ally to him: a well-written shell script can take many a job off anyone's hands.
Resources
Bash
https://www.gnu.org/software/bash/
Thunderbird
https://www.mozilla.org/en-GB/thunderbird/
Tutorial files available: filesilo.co.uk
Bash masterclass
Back to basics
Everyone who’s ever used a Unix workstation has seen it, yet few know its subtleties: let’s take a look at the Bash shell Even though Linux distributions like Ubuntu do a great job at hiding the often intimidating black terminal window from their users, Unix is and will forever be associated primarily with textual input systems. When used in text or terminal mode, commands are not handled by the operating system itself. Instead, a command line parser called a shell is responsible for handling I/O, parsing and program invocation. As time went by, a multitude of shells were developed. Bash, a free implementation of the Bourne shell, has
managed to establish itself as a bona fide standard. Shells are interesting not only due to their ability to take in user input. Skilled users and administrators can create so-called shell scripts: they encapsulate a series of user inputs, which can then be replicated at will. As this part of shell operations is less obvious than the normal invocation of methods, many users never find out about this truly wonderful feature. Let us be your guide to the world of shell scripting – a fascinating realm of timesaving features awaits!
Figure 1 The echo command emits the parameters passed to it
As learning stuff by rote is not particularly interesting, let us recreate a small but useful snippet of code found in many laboratories around the world. Scientific staff should not be typing prose: a team of transcribers trained in ten-finger typing spend their lives sitting in coordinating offices waiting for tapes. Sending these tapes by hand is annoying – as time went by, a shell script was created which dispatches the contents of an entire folder automagically.
What's in a script
In essence, a shell script is little more than a text file. It differs from its normal brethren by having the executable bit set – the shell will refuse to execute it if it is missing:
tamhan@tamhan-thinkpad:~$ touch test.sh
tamhan@tamhan-thinkpad:~$ ./test.sh
bash: ./test.sh: Permission denied
tamhan@tamhan-thinkpad:~$ chmod +x test.sh
tamhan@tamhan-thinkpad:~$ ./test.sh
The individual commands inside the shell script are run one after another, as if they were typed into the shell interactively. This can be demonstrated via the following snippet, which leads to the results shown in the figure:
#!/bin/bash
echo "Hail Puber!"
echo "Thou art the destroyer of houses!"
As a small explanation: the echo command emits the parameters passed to it to the standard output. It is commonly used to give instructions to users – we will keep using it during the rest of this tutorial. The line starting with #! is called a shebang. It informs the kernel about which shell or command line interpreter is best suited for running the commands found in the file – setting it to the value shown here is recommended, as some users use non-Bash shells and could run into trouble.

Ride the merry-go-round
One of the most powerful aspects of Unix is the ability to redirect one command's output to be another one's input, in a process called piping. By default, each Unix process has three pipes at its disposal: the standard input takes in data, while the standard output is responsible for data intended to be displayed. Stderr is a special channel used for transmitting error and status information.

Code ahoy!
Simply running input stored in a text file might be helpful in many cases; the shell would be much more useful if it could adapt its input dynamically depending on the situation at hand. Our above example would require us to run a command on each file found in a folder. Let us start towards this noble aim by creating a simple shell script that emits the name of each file found in the folder:
#!/bin/bash
for file in `ls`
do
echo "$file"
done
This version of the script illustrates multiple new aspects of shell programming. First of all, the for/do/done structure, which is used to create a loop iterating over a set of elements. Even though Bash can work with static lists, getting them at runtime is much more interesting. The backtick character – be aware that it is `, not ' or " – tells the shell that the string contained within is to be run as a shell command. Its result is then used as input. The dollar operator tells Bash to print the value of the variable in place of itself. Each invocation of echo gets a string containing the current filename, which then gets printed to the command line.
The transcription application used in the previous example is annoying in that it creates one .txt and one .png file to go with each record. These metadata elements are useless for the coordinating office and should be stripped away – in theory, this job could be accomplished by using some string processing. A more elegant way looks like this:
#!/bin/bash
for file in `ls *.mp3`
do
echo "$file"
done
Running the current version of the program in a folder containing some MP3s and some other files leads to a list mentioning only the multimedia files.

Collect some data
Sending emails without a subject is boring. Forcing technical staff to change the script before each invocation works, but becomes tedious as time goes by. A much more interesting approach would involve asking the user at runtime:
#!/bin/bash
echo "Enter a title, please!"
read emailTitle
Giving people jobs without a progress indicator leads to burnout. This problem can be solved by putting a "running tally" into the subject line. For this, information about the number of files to be processed is important – it can be gathered by iterating over the list of files another time before the actual sending process:
fileCount=0
for file in `ls *.mp3`
do
fileCount=$(($fileCount+1))
done
With that, we are ready to create a second version of the script:
fileNow=1
for file in `ls *.mp3`
do
echo "$emailTitle File $fileNow of $fileCount"
echo "$file"
fileNow=$(($fileNow+1))
done
Running it on a folder containing both MP3 and non-MP3 files yields the results shown in the figure – both file name and email subject are ready for deployment.
Power the Thunderbird
Many – if not most – shell scripts work by connecting preexisting Unix applications to one another. We will handle the actual email dispatch via Thunderbird: the various command line parameters are documented on the internet at sites such as http://kb.mozillazine.org/ Command_line_arguments_%28Thunderbird%29 and https://developer.mozilla.org/en-US/docs/Mozilla/ Command_Line_Options. Sending the email can be accomplished by creating an invocation string, which is then run on the command line. Using the information at the above-mentioned web sites tells us that the path to the attachment must be fully qualified; finding out the current working directory can be accomplished with a little trick:
for file in `ls *.mp3`
do
emailsubject="$emailTitle File $fileNow of $fileCount"
fullpath="$(readlink -f $file)"
fileNow=$(($fileNow+1))
done
The actual determination of the path to the file is accomplished via the readlink command. It takes either a symbolic link or a filename, and returns the "root path" to the element at the end of the indirection hierarchy. Some online examples show the use of the local keyword. In theory, it is used to limit the scope of a variable; in practice, using it outside of a function is strongly discouraged for compatibility reasons. The next step involves creating the actual invocation string, which is then passed to Thunderbird:
do
...
fullpath="$(readlink -f $file)"
thunderbird -compose "to='example@example.com',subject='$emailsubject',attachment='$fullpath'" &
fileNow=$(($fileNow+1))
done
By default, the Thunderbird task halts execution of the script until the window containing the newly generated email is closed. Putting an ampersand at the end of the command tells Bash to spawn the process asynchronously and keep running the script. With that, work on the sending machine is complete. Running it leads to a "window tree" – clicking the Send button a few times ensures that the files are on the way to the coordinating office.
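For reference, here is the whole thing stitched together: a minimal sketch of the dispatch script assembled from the fragments above (the recipient address is, of course, an example):
#!/bin/bash
echo "Enter a title, please!"
read emailTitle

# Count the MP3 files first so the subject line can carry a running tally
fileCount=0
for file in `ls *.mp3`
do
  fileCount=$(($fileCount+1))
done

fileNow=1
for file in `ls *.mp3`
do
  emailsubject="$emailTitle File $fileNow of $fileCount"
  fullpath="$(readlink -f $file)"
  # The trailing ampersand keeps the script running while the compose window opens
  thunderbird -compose "to='example@example.com',subject='$emailsubject',attachment='$fullpath'" &
  fileNow=$(($fileNow+1))
done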
Settings for bashheads
Using Bash to run command line scripts is just part of the abilities of the shell. True professionals can and will configure their work environment to their liking – Bash provides a variety of interesting tweaks. Using them effectively requires us to take a little detour into the world of shells. While Linux books list up to ten different shell types, we'll focus on just two: a login and a normal shell. They might look the same, but they play completely different roles.
Figure 2 Our Bash script skilfully eliminates all non-MP3 files
Figure 3 The files are ready to go
The login shell is opened immediately after the user logs into a terminal. Its initialisation is accomplished via a sequence similar to the following – the exact files used vary a bit from distribution to distribution:
/etc/profile
~/.profile
~/.bash_profile
Interactive shells are the ones opened in terminal windows. Their initialisation sequence is a bit shorter:
~/.bashrc
This information is important for us because each of these files can be used to house a variety of settings. They get applied only if the shell reads the relevant file during its
initialisation – if one user has something set in his bashrc file, other users will not be able to access it directly. The simplest configuration involves the settings of variables containing extra information. One common example is the path to the Android SDK, which should be declared in bashrc via an export statement:
export ANDROID_HOME=/home/tamhan/Android/Sdk
This, of course, is but a small feature. Bash's behaviour can be customised via two sets of parameters hidden behind the set and shopt commands – more information on the dozens of parameters can be found in the relevant man pages or on the internet.
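A couple of commonly used tweaks give a flavour of what is possible; this is a sketch of lines you might add to your ~/.bashrc:
shopt -s histappend   # append to the history file instead of overwriting it
shopt -s cdspell      # let cd silently correct small typos in directory names
set -o noclobber      # refuse to overwrite existing files via the > redirection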
Saving time
At the end of this story, our email script is ready to be deployed to the masses. Assuming that it will save just one minute a day, a person working every day of one year will have racked up savings of six hours during 365 days. This, however, is but a small taste of the possibilities provided by Bash scripting. The next issue of Linux User & Developer will have even more amazing content – stay tuned, and see you soon!
Power to the mailers Using Thunderbird for mass emailing is not particularly comfortable as each email needs to be signed off by hand. When sending large amounts of small emails, using a command line utility is more efficient. Unix developers can choose from a large selection of utilities – be careful, as some of them need quite a bit of configuration before they can be deployed.
Tutorial
NAS
Build your own Linux-powered network-attached storage
With its server roots, Linux is the perfect operating system for your own personal NAS
Paul O’Brien
Paul is a professional cross-platform software developer, with extensive experience of deploying and maintaining Linux systems. Android, built on top of Linux, is also one of Paul’s specialist topics.
Resources
FreeNAS
http://www.freenas.org/
Ubuntu
http://www.ubuntu.com/
Right A Linux powered NAS server provides a safe, accessible home for your vital data
In this tutorial, we are going to work through everything from choosing the right hardware for your NAS, through installing your Linux distribution and connecting from your Linux and non-Linux clients. Linux is incredibly versatile – it runs on a wide range of hardware and has a huge number of different ways of achieving what you are trying to do. Although we will talk about attached storage primarily in this tutorial, once you have a box set up running Linux, it can perform a whole host of other tasks. Before we start, have a think about any hardware you have lying around, your budget, how much storage you need and how important resilience is to the data you are storing. This will help you plan how you are going to proceed after reading through our tutorial. As well as the larger distributions such as Ubuntu, we are going to look at dedicated platforms for NAS devices, which have a smaller footprint and are more finely tuned to the task at hand. We’re also going to make sure that you’re not tied to one specific solution by using a widely adopted file system, so you’ll be able to switch between NAS operating systems without losing your precious data.
01
Hardware
The first step in planning your new NAS is to decide what hardware you are going to run it on. Now, this is Linux, which means that it's going to run on pretty much anything, and serving files over the network isn't a particularly intensive task. With that said, there are elements to take into consideration. First of all, you probably want the connection to your network to run at a decent speed. This means a high-speed wireless card (if there's no cabled option) or, ideally, a Gigabit Ethernet connection into your router (or perhaps via a Homeplug, which we've found works particularly well – we have a 1200Mbps setup hooking up our NAS and it's wonderful). After connectivity, the second most important thing is how you are going to accommodate your storage drives. Sure, you might have an ancient laptop kicking around in the cupboard, but can it take all the disks you want to put in? It depends very much on what you want to use your NAS for. Will it be a pure storage box or will it be put to other uses too?

Power usage
One thing to be mindful of when you are setting up a NAS is power usage. This is something to consider when choosing hardware. Unless you specifically need the horsepower, consider a low power/mobile class processor. NAS machines typically run 24/7, but that doesn't have to be the case. You can configure Wake On LAN or use a tool such as rtcwake to schedule timed wake-ups.
02
Dedicated NAS or multi-purpose box?
Even if you are using the oldest piece of computing hardware you've ever seen, one that's been gathering dust under your desk for the past ten years, there's a good chance that, due to the versatility of Linux, it can be more than just a NAS box. If you start off with a basic distribution set up for network storage, you can bolt on other functionality later. We've found a NAS box to be a capable server for home security courtesy of ZoneMinder, a useful web server for testing or serving a local website thanks to a LAMP stack or even, if your hardware power allows, an in-house Minecraft server for all the family. Your ultimate goals can affect your choice of distribution and software later on, as well as your choice of hardware of course.

03
To RAID or not to RAID?
You might be using your NAS just as a dumping ground for random files that you don't have space for on your other machines, or perhaps it's for storing those vital family photos away from your main machines. If you have space in your machine, you should consider implementing RAID. Using RAID 0 creates a stripe using two or more disks, giving you a large, high-performance space, but no fault tolerance. RAID 1 is a good choice for a NAS, creating a mirror – perfect if you have two identical disks. RAID 5, another popular choice, requires at least three disks and is striped with distributed parity information across all the disks. This gives great performance as well as the ability to repair the array if one of the drives fails. High-end servers typically implement RAID in hardware; for a home NAS, software RAID can be implemented, which is just fine for all but the most demanding of applications.

If you have space in your hardware, you should consider implementing RAID
04
Installing a dedicated NAS platform
If you want to dedicate your machine to running a NAS platform rather than a more general Linux distribution, the very popular FreeNAS, based on FreeBSD, is a great choice. FreeNAS is easy to install and runs on virtually any piece of hardware. FreeNAS includes the ZFS file system and is widely used both in the home and in the enterprise. FreeNAS's best features include a full web interface, a wide range of file-sharing protocols, disk 'snapshots' for restoring changes and advanced replication, to mirror the contents of your NAS to a remote server. FreeNAS is distributed as an ISO. Burn it to a CD or copy it to a USB stick using unetbootin and away you go! A good tip we've found for installing new OSs is to have a dry run first using virtualisation software such as VirtualBox or VMware. That way you can easily experiment before moving on to your actual server hardware. A text-based installer takes you through the very quick installation, which culminates in the Web UI being available via your browser to complete the setup.

HP Microserver
A wildly popular machine amongst home NAS builders is the HP Microserver, a baby server setup that wouldn't look out of place in any home office. Microservers have four bays with RAID and an internal USB connection ideal for booting a memory stick. They even feature HP's popular iLO enterprise-class remote management tool. The HP Microserver frequently retails for around £150.

An alternative popular filesystem for NAS drives is BTRFS. Again majoring on fault tolerance and as such ideal for a NAS setup, BTRFS has a raft of features including the ability to handle huge sizes of storage, snapshots, compression, RAID, built-in incremental backup and much more. Typically these advanced filesystems are best limited to the dedicated NAS drives in a setup. The distro itself can be booted from a dedicated hard disk, SSD or even a USB stick running ext4, while the NAS drives run a more advanced filesystem.
05
Setting Ubuntu up for NAS
Aside from the dedicated NAS distributions such as FreeNAS, OpenFiler, OpenMediaVault (which is based on Debian) or similar, a full Linux distribution such as Ubuntu still works well. Although it may not be as 'light' on your hardware, as well as the versatility, a big benefit is the huge amount of peer-to-peer support available for the platform. Depending on how you are going to use it, it may be worth considering installing the server rather than the desktop version of Ubuntu, particularly if you are going to run headless. Bear in mind that if you go for a full distribution, you won't get the benefits of the NAS management GUI that you get with solutions like FreeNAS – you will likely be managing the features of your server from the command line. But that's why we love Linux, right? At its simplest, installing the OpenSSH server gives secure access to your files over SSH/SFTP. You can then install and configure the additional protocols needed by your other machines, such as SMB, AFP/Bonjour and so on.
06
Choosing a filesystem
Choosing the right filesystem for your NAS is important because it impacts both performance and reliability. As mentioned, ZFS is the filesystem of choice for FreeNAS and is also a solid option for full distribution. Of course, ZFS offers RAID redundancy, and one of its core strengths is that it’s been designed to easily allow you to expand your storage. Out of space? Add another drive. Job done. ZFS is now finally native in Ubuntu Xenial, which is an added bonus.
07
Enabling network protocols
Once you have your system up and running and you've formatted your drives, configured your RAID and so on, you need to enable your network protocols so that your clients can get access. Exactly what you need to enable depends on how you plan to get access (and remember, SSH/SFTP access is likely enabled by default), but the FreeNAS configuration screen gives us a good idea of where we need to start. FreeNAS offers AFP for Apple machines to connect to, NFS for Unix machines, and CIFS/SMB for Windows connectivity, together with iSCSI for block access and WebDAV support for a variety of clients. These aren't quite hard and fast rules, but they are a solid starting point! AFP for Ubuntu lives in the Netatalk package (consider installing Avahi also), NFS is in nfs-kernel-server, CIFS/SMB is in Samba and you'll need open-iscsi and Apache for WebDAV. As well as AFP, Macs can also connect via CIFS/SMB, so it's a good choice for an initial cross-platform protocol, although you will want to enable AFP if you want to use your NAS for Apple Time Machine backups.
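On Ubuntu, pulling in the packages mentioned above is a single apt line per protocol. This is a sketch; install only what your clients actually need:
$ sudo apt install netatalk avahi-daemon   # AFP plus Bonjour announcements for Macs
$ sudo apt install nfs-kernel-server       # NFS for Unix/Linux clients
$ sudo apt install samba                   # CIFS/SMB for Windows (and cross-platform) access
$ sudo apt install open-iscsi apache2      # iSCSI block access and Apache for WebDAV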
08
Configuring access
No question, one of the trickiest parts of the process of setting up a NAS is configuring access for your clients. Linux plays nicely with other operating systems, but configuring the users correctly can be a pain. Let’s look at how a very simple Samba share is configured. It can actually be done graphically on Ubuntu, courtesy of
the system-config-samba package, or alternatively by manually editing the /etc/samba/smb.conf file if you prefer. The document is split into sections, but the main one to start with is the 'File Sharing' section. In the [homes] section, which shares your home directories by default, you'll see options to enable 'browseable' and 'writeable'. After enabling this initial share, use sudo smbpasswd -a username to set your Samba passwords and then sudo service smbd reload to restart the server. You should now be able to connect from your other machines (use the \\ notation in Windows or the smb:// notation in Mac Finder to connect).
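Beyond the [homes] share, a dedicated share is only a few lines in /etc/samba/smb.conf. This is a sketch, with example paths, user and group names:
$ sudo tee -a /etc/samba/smb.conf <<'EOF'
[storage]
   path = /srv/storage
   browseable = yes
   writeable = yes
   valid users = @users
EOF
$ sudo smbpasswd -a paul       # give an existing Linux user a Samba password
$ sudo service smbd reload     # pick up the new share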
Left: A cloud backup plan will give you added protection just in case your NAS ever fails
Below: FreeNAS allows machines running Windows, OS X and Linux to connect to the drive
09
Access via the internet
If you have a fast internet connection at home, it’s really useful to enable access to your NAS machine from the outside world, although you want to do this securely of course. The first thing you need to do is configure the port forwarding on your internet router. All routers, even locked down ones supplied via your ISP, will include an option to forward specific ports on the external facing IP to a specific machine within your network. Typically, home internet connections use dynamic IPs, so you’ll need to use a service such as dyndns to redirect a static hostname to your changing home IP. Again, many routers include dynamic DNS support and even if they don’t, a number of Linux based dynamic DNS clients are available, such as DDclient. An effective way of providing external access is via SFTP. It’s secure, can be locked down extremely tightly by authenticating with public keys and it also enables additional functionality such as rsync. You’ll likely want to ssh into your NAS anyway with public key auth, so it is minimal additional effort.
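With SFTP/SSH exposed in this way (ideally on a non-standard port and with key-only authentication), off-site synchronisation becomes a one-liner. The hostname, port and paths here are examples:
$ rsync -avz -e "ssh -p 2222" ~/Documents/ paul@mynas.example.com:/srv/storage/documents/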
10
Cloud backup
Having all your data stored away on a RAID-equipped NAS is all very well, but what if your house goes up in smoke, someone walks off with your server or even, somewhat less dramatically, multiple drives fail at the same time? You should consider a cloud backup service – and a number of options are available for Linux. Our favourite by far is Crashplan. It does exactly what you'd expect, and has some extra neat tricks. Without paying out a single penny to the service, Crashplan can turn your NAS – or indeed any of your machines – into a backup destination. Shell out for a subscription though ($5/month for one computer, $12.50/month for a 2-10 computer family pack) and you can enjoy unlimited backup to the Crashplan cloud, with clients for Windows, Mac and, yes, of course, Linux. You can even restore files from Crashplan's Android and iOS apps. If you don't want to shell out for the subscription, the free option means that if you buddy up with a friend who also has a NAS, this will enable you to use each other's machines as backup destinations. The data is fully encrypted so your privacy is maintained, but you have an effective safety net should the unthinkable happen.
Tutorial
iproute2
Networking and traffic control with iproute2
Discover the tools of the iproute2 Utility Suite and make your life easier!

Mihalis Tsoukalos
is a UNIX administrator, a programmer (UNIX & iOS), a DBA and a mathematician. He has been using Linux since 1993. You can reach him at @mactsouk and http://www.mtsoukalos.eu/

Resources
Iproute2
https://en.wikipedia.org/wiki/Iproute2
http://bit.ly/1wDV59R

General info about iproute2
Since the iproute2 Utility Suite is usually installed by default, you will not need to install it yourself; however, in the unusual case you cannot find it on your Linux system, you should install the iproute2 package using your favourite package manager. Executing the following command on a Debian or Ubuntu system reveals the contents of the iproute2 package:
$ dpkg -L iproute2

Most of the commands of the iproute2 Utility Suite require a good knowledge of networking in both theory and practice. Misusing some of the utilities might harm the connectivity of your Linux system, so always "measure twice and cut once". The best way to learn the presented commands is to use them frequently, which will help you clarify their numerous subtle details. However, apart from ss and ip, which are very helpful for every Linux administrator, you will not have to use most of the other commands on a daily basis unless you are trying to solve a particular issue. If you need to find more information about the utilities of iproute2, you should install the iproute2-doc package:
# apt-get install iproute2-doc
The iproute2 Utility Suite includes the following command line utilities: ss, ip, rtmon, tc, lnstat, nstat, bridge, rtacct, routel and routef. Each utility comes with its own manual page that you can use to learn more about the various options of a command. It is helpful to know that two types of network traffic exist: inbound traffic and outbound traffic. The manipulation of network traffic is called traffic shaping. The iproute2 Utility Suite offers tools that, among other things, let you control network traffic. Additionally, iproute2 includes tools that can help you inspect your current network configuration. Please bear in mind that testing traffic-shaping rules and changing the configuration of network interfaces should never take place on a production machine or network.
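Since ss does not get a numbered step of its own below, here is the kind of one-liner that makes it indispensable on a day-to-day basis – listing listening sockets and established connections:
$ sudo ss -tulpn              # all listening TCP/UDP sockets, with the owning process
$ ss -t state established     # currently established TCP connections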
01
The ip command (1)
The ip command can perform many things and do the job of many existing command line utilities, including netstat, arp, route and ifconfig. Apart from the handy ss command, ip is the most valuable and important command of the iproute2 Utility Suite. Using the ip command, you can see the IP addresses associated with each network interface on a Linux system:
$ ip addr
If you want to inspect a specific interface, you can use the next syntax:
$ ip addr show eth0
02
If you want to get useful statistics about a network interface, you can use the following command:
$ ip -s link show eth0
The errors and dropped fields of the previous command are very important and can be used for identifying a faulty network card, a damaged network card or a broken network switch device. The following command shows how to turn off and on a network device:
$ ip link set eth0 up
$ ip link set eth0 down
The next command assigns an additional IP address to eth0:
# ip addr add 192.168.2.123/24 dev eth0
The ip command can perform many things and do the job of many existing command line utilities, including netstat, arp, route and ifconfig
The following command deletes the previously assigned IP address from eth0:
# ip addr del 192.168.2.123/24 dev eth0
More about the ip command
The ip command can do many more things than the ones presented in this tutorial, including configuring and changing routing rules, assigning multiple IP addresses to a network interface, changing the MAC address of a network interface, viewing the ARP table and dealing with multicast addresses. The ip command works with both IPv4 and IPv6 addresses. Using ip with the -4 option will display IPv4 information only, whereas using it with the -6 option will display IPv6 information.

03
The ip command (2)
The way your Linux machine routes its TCP/IP traffic is very important because a misconfigured routing table can isolate your Linux machine from your local network and the internet. Usually, you will not need to change your routing table. However, inspecting the routing table should be your first move when you find that a Linux machine with multiple network interfaces is having connectivity issues. You can view the routing table of a Linux machine with the following command:
$ ip route show
You can get a more detailed output with the next version:
$ ip route show table local
You can then change the default route of your Linux machine as follows:
# ip route add default via 192.168.x.y
Please use the previous command with extra care because a wrongly defined default route – also known as the default gateway – can limit the connectivity of your Linux system. If an ip command makes changes to your Linux system, you will most likely need root privileges to execute it. Otherwise you will get an error similar to the following:
$ ip route add default via 192.168.1.1
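Beyond the default route, a static route to a specific network is added with the same syntax; this is a sketch and the addresses are purely illustrative:
# ip route add 10.10.0.0/16 via 192.168.1.254 dev eth0   # send 10.10.0.0/16 via a specific gateway
# ip route del 10.10.0.0/16                              # remove the route again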
04
The /sbin/rtmon command
The rtmon utility is used for monitoring the state of network devices and storing them to a binary file, as follows:
$ /sbin/rtmon file /tmp/rtmon.log
$ ls -l /tmp/rtmon.log
-rw-r--r-- 1 mtsouk mtsouk 65840 May 3 10:32 /tmp/rtmon.log
$ file /tmp/rtmon.log
/tmp/rtmon.log: data
As you can see, this saves the output of rtmon to the /tmp/rtmon.log file. You can keep IPv4-only data with the -4 option and IPv6-only information with the -6 option. You can also use the ip(8) command to monitor a device. The difference between rtmon and ip is that rtmon writes its output to a binary log file instead of just displaying the information on screen. The ip monitor command allows you to display the contents of a log file generated by rtmon:
$ ip monitor file /tmp/rtmon.log
About SNMP SNMP stands for simple network management protocol. Nevertheless, if you look into it you will find that SNMP is a very sophisticated way to get statistics from many kinds of devices including routers, switches and computers. The main disadvantage of SNMP is its complexity.
The log entries that both rtmon and ip monitor produce are of the following type:
3: eth0 inet6 2a01:7e00::f03c:91ff:fe69:1381/64 scope global mngtmpaddr dynamic
   valid_lft 2592000sec preferred_lft 604800sec
The output contains fairly low-level network-related information. When a network interface goes down, the produced information will look similar to the following:
2: eth0: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast state DOWN group default
   link/ether 08:00:27:3a:f8:b2 brd ff:ff:ff:ff:ff:ff
05
The /sbin/tc command
The tc command line utility manipulates kernel structures related to IP configuration including structures related to traffic control. You can use tc to see the existing rules for a network interface (eth0):
$ /sbin/tc qdisc show dev eth0
This is the only use of tc that will not make any changes to your Linux system. All other usages of tc make low-level modifications and should be used with great care. For example, the following command adds a rule that slows down all network traffic by 300ms:
# tc qdisc add dev eth0 root netem delay 300ms
The next command deletes all existing rules for a specific interface:
# tc qdisc del dev eth0 root
The following output shows a small example of traffic shaping with the help of the previous rule that slows down all network traffic. First, you ping www.google.com without any rules:
$ ping www.google.com
PING www.google.com (172.217.16.196) 56(84) bytes of data.
64 bytes from fra16s08-in-f4.1e100.net (172.217.16.196): icmp_seq=1 ttl=58 time=13.2 ms
64 bytes from fra16s08-in-f4.1e100.net (172.217.16.196): icmp_seq=2 ttl=58 time=13.2 ms
After applying the delay rule, the output from the ping command would be similar to the following:
$ ping www.google.com
PING www.google.com (172.217.16.196) 56(84) bytes of data.
64 bytes from fra16s08-in-f4.1e100.net (172.217.16.196): icmp_seq=1 ttl=58 time=313 ms
64 bytes from fra16s08-in-f4.1e100.net (172.217.16.196): icmp_seq=2 ttl=58 time=313 ms
As you can see, the second ping command is slower than the first one by approximately 300ms! The following output verifies that the delay rule is active:
# tc qdisc show
qdisc netem 8001: dev eth0 root refcnt 2 limit 1000 delay 300.0ms
Please do not forget to delete the delay rule when you are done with it:
# tc qdisc del dev eth0 root
If your Linux machine acts as the router of a whole network, the routef command can destroy the connectivity of the entire network 42
If you are undecided or not completely sure about the deletion of the rule, you can reboot your Linux machine!
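netem is only one queueing discipline; as another sketch, the token bucket filter (tbf) can cap outbound bandwidth on an interface. The figures here are illustrative:
# tc qdisc add dev eth0 root tbf rate 1mbit burst 32kbit latency 400ms   # limit eth0 to roughly 1Mbit/s
# tc qdisc del dev eth0 root                                             # remove the limit again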
06
The lnstat command
The lnstat utility is a modern replacement for the rtstat utility. You can find out the version of lnstat by running the lnstat --version command. The lnstat command combined with the â&#x20AC;&#x201C;j option generates output in JSON format which can be easily inserted into a NoSQL database such as MongoDB. As the default output of lnstat is very busy and hard to read, getting JSON output is considered a very good idea. You can get the keys that interest you by executing lnstat with the -k option:
$ lnstat -k rt_cache:in_slow_mc,out_hit
rt_cache|rt_cache|
in_slow_| out_hit|
       0|       0|
You can also choose to get all data from a particular section only:
$ lnstat -f rt_cache -j
The next command gets 10 samples of the rt_cache:out_slow_tot key every five seconds in JSON format:
$ lnstat -k rt_cache:out_slow_tot -i 5 -s 1 -c 10 -j
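As a rough sketch of the MongoDB idea mentioned above (the database and collection names here are invented, and the exact JSON that lnstat emits may need massaging before import), you could capture one sample to a file and load it with mongoimport:
$ lnstat -j -c 1 -f rt_cache > rt_cache.json
$ mongoimport --db netstats --collection rt_cache --file rt_cache.json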
07 The nstat and rtacct commands
The nstat and rtacct commands can be used for monitoring kernel SNMP (simple network management protocol) counters and network interface statistics. Once again, using nstat with the -j option generates JSON output.

08 Processing output with the AWK command
The next command returns the default gateway of a Linux machine:
$ /sbin/ip route | awk '/default/ { print $3 }'
The subsequent command counts the various states that were found in a log file created by rtmon:
$ ip monitor file rtmon.log | grep state | awk '{print $9}' | sort | uniq -c
As rtmon.log is a binary file, you will have to replay it using the ip monitor command, which generates text output. You can list all active routing tables as follows:
$ ip rule list | awk '/lookup/ {print $NF}'
The next command uses lnstat to add up all the rt_cache:out_slow_tot key values and prints the result:
$ lnstat -k rt_cache:out_slow_tot -i 5 -s 1 -c 10 -j | awk -F: '/out_slow_tot/ {print substr($2, 1, length($2)-1)}' | awk '{ sum+=$1 } END {print sum}'
As you can see from the examples, you can process any text output with AWK in order to extract the desired information or generate a summary, depending on what you want to achieve.

09 The routel and routef commands
The routef executable is a simple script that flushes the routing table, which means that it will delete all routes – this can harm the connectivity of your Linux machine. More importantly, if your Linux machine acts as the router of a whole network, the routef command can destroy the connectivity of the entire network! Similarly, routel is another script that uses ip to list all routes in a pretty format. You can see the contents of each script as follows:
$ cat /usr/bin/routel
$ cat /usr/bin/routef

10 The bridge command
The bridge command allows you to change and show bridge addresses and devices. A bridge can be used for relaying traffic between Ethernet interfaces. You will need root privileges to execute the bridge command if you want to make any changes.
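The tutorial stops short of a bridge example; as a minimal sketch (the interface and bridge names are illustrative), you could create a bridge, attach an interface to it and then inspect it like this:
# ip link add name br0 type bridge
# ip link set dev eth0 master br0
# ip link set dev br0 up
# bridge link show
# bridge fdb show br br0
The first three commands create and bring up the bridge using ip, while the last two use the bridge utility itself to list attached ports and the forwarding database.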
Tutorial
GNU Make
Run new software on old distros In a world full of pre-compiled binary packages, sometimes you still need to face the intricacies of the GNU Make build-system
Toni Castillo Girona
Toni Castillo Girona holds a bachelor's degree in Software Engineering and works as an ICT research support expert at a public university in Catalonia (Spain). He writes regularly about GNU/Linux in his blog: disbauxes.upc.es.
It is quite difficult to keep up with new software versions. Newer versions need newer libraries, and some GNU/Linux distros cannot keep pace – which is, apparently, why we must upgrade that old distro to a new release. Security concerns aside, it is not actually necessary to do so. If our distro has not reached its end of life yet, there is no need to hurry the upgrade. Open source projects can easily be adapted and built on older distros, as long as we can do the same to their dependencies (i.e. libraries). Sometimes we will not be that lucky: enter proprietary software! Even for these extreme cases there is hope. To ease the burden, projects like Debian Backports can come in handy. But you must face it: sometimes there will not be back-ported packages, or you will be dealing with a proprietary piece of software with no chance at all of grasping its source code. In such desperate cases, learning to back-port and do some other tricks will save you some pain. Not to mention that you will not be rushing the upgrade of a stable system just because some users fancy a newer version of a particular program!
Resources
ELF symbol versioning and the GNU C library
Debian Backports http://backports.debian.org
Debian LTS Project https://wiki.debian.org/ LTS/
Telegram Desktop https://desktop. telegram.org/
GNU libc6 https://ftp.gnu.org/gnu/ libc/
Memcpy VS memmove http://bit.ly/1X7cAwt
Right Running two different versions of evince on a GNU/Linux Jessie distro
One of the most common issues that arises as soon as we try to execute a binary that has been built on a modern GNU/Linux distro concerns the GNU C library. Modern distros tend to ship with newer glibc versions, and whenever a new piece of software is built on such a distro its ABI (application binary interface) requirements can change, rendering that binary unusable on older distros. If you are dealing with open source programs, all you need to do is build them on the old distro and they should work. Sometimes you will be facing a
proprietary piece of software for which source code is not available. For cases like these, our next technique will come in handy. Your first scenario is as follows: you have a Debian GNU/Linux Wheezy box. At the time of writing, the LTS project will be serving security updates until 31 May 2018. Although an upgrade must be planned and executed sometime in the near future, there is no need to rush. Now imagine you want to use Telegram Desktop. As soon as you download the precompiled binary for GNU/Linux and execute it, you get:
~$ ./Telegram
./Telegram: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.15' not found (required by ./Telegram)
./Telegram: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.14' not found (required by ./Telegram)
Knowing that this software is open source, you could follow the building instructions and be done with it. But if we're talking about proprietary software, another approach would be necessary. According to the previous execution errors, we need at least glibc 2.15 to run Telegram Desktop. To know which glibc version your distro ships with, ask glibc itself:
~$ /lib/x86_64-linux-gnu/libc.so.6 | grep release
...stable release version 2.13...
For each symbol (function) Telegram Desktop needs, you can see which ones come from glibc versions newer than 2.13 by running the readelf command this way:
~$ readelf -s Telegram | grep 'GLIBC_2.1[456789]'
322: 0000000000000000 0 FUNC GLOBAL DEFAULT UND memcpy@GLIBC_2.14 (18)
422: 0000000000000000 0 FUNC GLOBAL DEFAULT UND __fdelt_chk@GLIBC_2.15 (22)
So the binary needs memcpy from glibc 2.14 and __fdelt_chk from glibc 2.15. To run this sort of binary, follow these steps:
• Download, build and install a suitable GNU C library in a separate directory.
• Set LD_LIBRARY_PATH according to the directory where the new glibc resides.
• Load the Telegram Desktop binary with the appropriate Linux dynamic linker/loader, ld-linux.so.2.
According to the first step, you have to download, build and install glibc 2.15. These commands will do:
~$ wget https://ftp.gnu.org/gnu/libc/glibc-2.15.tar.bz2
~$ tar xvfj glibc-2.15.tar.bz2
~$ mkdir glibc-obj
~$ cd glibc-obj
~$ ../glibc-2.15/configure --prefix=/usr/local/glibc215 --enable-add-ons
~$ make
# make install
Once the library is installed, set LD_LIBRARY_PATH so that it points to the directory where the new glibc resides. Finally, you need to load the binary with the glibc 2.15 dynamic linker/loader:
~$ LD_LIBRARY_PATH=/usr/local/glibc215/lib:$LD_LIBRARY_PATH /usr/local/glibc215/lib/ld-linux-x86-64.so.2 ./Telegram
If you run the previous command, you will get some errors concerning missing shared libraries. This is because the dynamic linker/loader is now looking for shared libraries only in /usr/local/glibc215/lib. So you need to add additional directories to the shared library search path:
~$ LD_LIBRARY_PATH=/usr/local/glibc215/lib:/usr/lib/x86_64-linux-gnu:/lib/x86_64-linux-gnu:$LD_LIBRARY_PATH /usr/local/glibc215/lib/ld-linux-x86-64.so.2 ./Telegram
This time the Telegram Desktop client runs without issues. You can speed up the process of executing Telegram Desktop by writing a trivial wrapper. Write this down using your preferred ASCII editor:
#!/bin/bash
TELEGRAM=/usr/local/Telegram/Telegram
GLIBC=/usr/local/glibc215/lib
LD=$GLIBC/ld-linux-x86-64.so.2
LD_LIBRARY_PATH=$GLIBC:/usr/lib/x86_64-linux-gnu:/lib/x86_64-linux-gnu:$LD_LIBRARY_PATH $LD $TELEGRAM
Save it as telegram.sh and put it in the /usr/local/bin directory. To conclude, set its executable bit:
# chmod +x /usr/local/bin/telegram.sh
From now on, you can easily execute Telegram Desktop by running:
~$ telegram.sh &

The hacking approach
ELF versioning and our ABI issues concerning glibc can be circumvented by patching the binaries. Of course this is far more complex than installing a new glibc version in a separate directory, but sometimes it will work nicely. The idea is to weaken the dependency on the symbol you do not have and then provide the binary with a substitute of sorts via LD_PRELOAD.

Building software for backwards ABI compatibility
Sometimes it is desirable to build a program on a modern GNU/Linux distro and redistribute it to older distros, making sure the resulting binary will run. In such cases, taking ABI compatibility into account is mandatory. Imagine you have a program that runs perfectly well on a GNU/Linux distro that ships with glibc >=2.14. This program calls the memcpy function at some point. Grab your favourite ASCII editor and write this down:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv) {
    char *p = (char *)malloc(6);
    if (p) {
        memcpy(p, "GLIBC\0", 6);
        printf("%p holds %s\n", p, p);
        free(p);
    }
    return 0;
}
Save it as test.c and compile it on a GNU/Linux distro with glibc >=2.14:
~$ gcc test.c -o test
If you deploy this binary on older distros with a glibc prior to 2.14 and run it, you will get the usual error:
./test: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.14' not found (required by ./test)
Using readelf you quickly find out that the symbol this binary needs from glibc 2.14 is memcpy:
~$ readelf -s test | grep "GLIBC_2.14"
6: 0000000000000000 0 FUNC GLOBAL DEFAULT UND memcpy@GLIBC_2.14 (3)
What about the kernel? We have focused on user-land software (programs and libraries), but of course it is feasible to install a new kernel on your old distro too. If you want to run a kernel version that comes out-of-the-box for your next distro release, you can get its sources and build them on your old distro. Don't forget to keep the previous version as well lest the feared Kernel Panic message show up!
However, versions prior to glibc 2.14 do have memcpy as well, i.e. our binary should run on versions prior to 2.14. So get the GNU C library 2.14 sources and read up on its ChangeLog file. You will find this:
* sysdeps/x86_64/multiarch/memmove.c: Include <shlib-compat.h>.
(memcpy): Provide GLIBC_2_14 memcpy.
(memcpy): Provide GLIBC_2_2_5 memcpy.
This is a well-known issue concerning glibc 2.14. In short, the developers of glibc 2.14 changed memcpy and mapped it to memmove. So whenever memcpy is called on glibc 2.14 or newer, a call to memmove will actually be made instead. The main difference between the two is that memmove copies correctly even when source and destination overlap, whereas memcpy gives no such guarantee. Well, glibc 2.14 still has the old memcpy implementation, only now it is called memcpy@GLIBC_2.2.5. Therefore, you can build the program on the newer GNU/Linux distro while making sure that the call to memcpy executes the old memcpy implementation, by adding an in-line assembly directive to the code. Get back to your ASCII editor, open test.c and add this line:
__asm__(".symver memcpy, memcpy@GLIBC_2.2.5");
After that, build the software the usual way. Your new binary will be calling the old memcpy implementation from before 2.14. Check it by running readelf:
~$ readelf -s test_compat | grep memcpy
1: 0000000000000000 0 FUNC GLOBAL DEFAULT UND memcpy@GLIBC_2.2.5 (2)
So if you deploy this binary to computers running glibc < 2.14, it will work. Bear in mind that this sort of fix may have an impact on the new binaries and the systems where they will be executed. In our example, memcpy@GLIBC_2.2.5 is less safe than memmove. You may as well change every reference to memcpy to memmove and be done with it, which would be better still. Sometimes these changes will not be that obvious, or even feasible. Newer glibc versions not only fix issues and supersede symbols, but add new functionality as well. In such scenarios, use the first technique from this tutorial to circumvent the issue.

Back-porting and beyond
ABI issues aside, newer software versions can be built on older GNU/Linux distros by means of back-porting, i.e. taking a package from a newer distro release, adjusting it and re-compiling it to run on a previous one. The Debian Backports Project is a good place to start, if you have Debian, of course. Other distros have their respective backports too. Therefore, before going any further, have a look at your back-ports repository servers: maybe they will hold an already back-ported package ready to install! For those cases where they will not, follow these steps:
• Make a list of all the package dependencies, i.e. which libraries or additional programs the package needs in order to be built.
• Make sure you will be able to compile the package and its dependencies using the compiler in your distro. Sometimes certain packages need compiler flags or extensions that older compilers do not provide; if you do not have the required compiler, you'll have to install it first.
• Compile and install each dependency in a separate directory: if your package needs certain libraries, don't forget to install them under /usr/local/lib or something similar. This way you will always be safe and the system will not be accidentally damaged.
• Compile and install the package itself in a separate directory too, making sure you alter its Makefile (or whatever means it uses to be built) so that all the necessary dependencies are found, if need be.
In case the next release does not have the version you are expecting to install in its repositories, you can go to the official website for that particular piece of software and get its sources directly from there. This is precisely your next scenario: you need to install gnuplot version 5.0.3, the most recent release (as of writing). The first thing to do is to check which version you can install by means of Debian Backports:
~$ apt-cache show gnuplot | grep Version:
Version: 4.6.6-1~bpo70+1
Version: 4.6.0-8
So you can go as far as 4.6.6-1~bpo70+1. Needless to say, it is still feasible to install the latest gnuplot version. Get it from its website and save it in your Debian Wheezy box, then untar its source tree:
~$ tar xvfz gnuplot-5.0.3.tar.gz
Once this is done, go to the directory and follow the well-known procedure for building software with auto-tools, making sure you are passing the --prefix flag to the configure script in order to install gnuplot in a separate directory:
~$ cd gnuplot-5.0.3
~$ ./configure --prefix=/usr/local/gnuplot503
~$ make
~$ make check
# make install
Once the building process is finished, create a soft link pointing to the new binary under the /usr/local/bin directory, and don't forget to call it something like gnuplot5.0.3:
# ln -s /usr/local/gnuplot503/bin/gnuplot /usr/local/bin/gnuplot5.0.3
Left After compiling and installing glibc 2.15 in a separate directory, our wrapper allows us to run Telegram on Debian Wheezy
By making the previous link and ensuring you have the new gnuplot version in a separate directory, more than one version can coexist peacefully. Choose which one to run by typing:
~$ gnuplot
~$ gnuplot5.0.3

Keeping those old versions we really love
Now let's do the opposite: let's imagine you want to keep an older version of your favourite software package because you have recently upgraded to a new GNU/Linux release that has a newer version you don't feel like using. There is a solution to this issue too:
• Get the source package from the previous distro.
• Install its build dependencies and any additional packages needed in your new distro.
• Adjust the building process of the old package (by editing Makefiles, or whatever it takes).
• Build and install the old version in a separate directory.
For this last scenario, you have upgraded to Debian Jessie from Debian Wheezy, and you don't quite agree with the new evince package. Go to a Debian Wheezy box and get the evince source package:
~$ apt-get source evince
This will create the evince source tree under the evince-3.4.0 directory. Copy the entire source tree to your new Debian Jessie box. Before building the package, you have to install its dependencies. Although you will not be compiling the Debian Jessie package, we can take advantage of the apt-get command to install the build dependencies for evince:
# apt-get build-dep evince
Now you have to edit the debian/rules file inside the evince source tree in order to set a separate directory where all the resulting files will be copied. Get your favourite ASCII editor once again and add the --prefix flag to the DEB_CONFIGURE_EXTRA_FLAGS variable like this:
~$ cd evince-3.4.0
~$ vi debian/rules
...
--enable-introspection \
--prefix=/usr/local/evince340
Now it is safe to build evince. Type:
~$ debian/rules build
At some point the building process will fail due to a couple of missing packages. You can install them by issuing:
# apt-get install libgnome-keyring-dev
# apt-get install gnome-doc-utils
Not long after that, the building process will fail yet again while generating the HTML documentation. If you are happy without it, edit the debian/rules file and set the --enable-gtk-doc flag properly:
--enable-gtk-doc=no
And that's it! You can clean the previous building attempt and start afresh:
~$ debian/rules clean
~$ debian/rules build
After a while, the building process will end with no issues. The resulting binaries, libraries and necessary files will be placed under the debian/build/evince-gtk directory. In that same directory you will find a Makefile with the install target, so in order to install evince in its own directory you type:
~$ cd debian/build/evince-gtk
# make install
Finally, proceed as in our previous scenario and create a soft link that points to the new binary this way:
# ln -s /usr/local/evince340/bin/evince /usr/local/bin/evince3.4.0
So now you can easily choose between running the newer version or the older one:
evince &
evince3.4.0 &
Mihalis Tsoukalos
Mihalis Tsoukalos is a UNIX administrator, a programmer (UNIX & iOS), a DBA and a mathematician. He has been using Linux since 1993. You can reach him at @mactsouk (Twitter) and http://www.mtsoukalos.eu/
Resources A text editor such as Emacs or vi The Go compiler
Go functions
Explore Go Functions Learn how you can develop and use Go functions
Although functions look like a simple concept, Go makes them more powerful and efficient by introducing additional and unique features. Strictly speaking, a function is an autonomous entity in a computer program that can have input, output, both of them or neither of them. Function declarations in Go start with the func keyword. The single most popular Go function is main(), which is declared as follows:
func main() {
    // various Go commands
}
About functions
Tutorial files available: filesilo.co.uk
Functions are an important element of every programming language. They allow you to break a big program into smaller, manageable parts. Functions need to be as independent from each other as possible. This means that a function must do one job and only one job. If you find yourself writing functions that do multiple things, you might consider writing multiple functions instead! Another rule of thumb is that if a function is more than 20-30 lines of Go code, you should consider breaking it into smaller functions. The last unofficial rule about function length is that you must be able to see the entire code of a function on your screen at once. These are practical rules, but you should not follow them to the letter all the time – there are always exceptions. A side effect of having small functions is that they can be optimised more easily, because you can quickly find out where the bottleneck is. The last element of a good function is a descriptive name; a function that implements integer division should not be called multiplyInts()! If you find yourself using the same functions all the time, you might consider putting them into a Go package. This has two advantages: first, you can reuse your functions more easily and, second, in case your functions offer something truly amazing, other people can use them as well, because it is easier to distribute packages.
The easy part
Just like almost every programming language, Go supports functions with arguments and return values. As you can understand from the main() function, a Go function can have no input and no output – though, as you are going to see, this is rarely the case! The following Go code defines a function that takes one integer variable as input and returns one integer variable as output:
func oneArgument(x int) int {
    return x + x
}
Note that the variable type follows the name of the variable in the input arguments. As you can imagine, you do not have to declare a variable name for the output. If you have a function with two or more arguments of the same type, you can write it as follows:
func sameType(x, y int) {
    fmt.Println(x + y)
}
Go has support for functions that return multiple values, which is handy because you do not have to create a separate structure to carry the return values, and it makes your code easier to read:
func multipleReturn(x, y int) (int, int) {
    if x > y {
        return x, y
    } else {
        return y, x
    }
}
What the multipleReturn() function does is sort two
Figure 1 Left This is the source code of simple.go, which introduces you to Go functions using simple examples
integers from the bigger to the smaller one. You can get its output as follows:
x, y := multipleReturn(5, 6) The same approach can be used for swapping two values without a temporary variable. You can find all aforementioned examples in simple.go that is also shown in Figure 1.
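As a quick illustration of the swap idea (this snippet is not part of simple.go), a function with two return values lets you write:
func swap(x, y int) (int, int) {
    return y, x
}
x, y = swap(x, y)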
Naming return values
It is now time to learn the whole truth about the return values of a Go function: you can name the return values of a Go function! The following code presents a version of multipleReturn() that uses named return values:
func minMax(x, y int) (min, max int) {
    if x > y {
        min = y
        max = x
    } else {
        min = x
        max = y
    }
    return min, max
}
The following variation of minMax() will also work:
func namedMinMax(x, y int) (min, max int) {
    // The same commands as in minMax()
    return
}
The reason the previous statement works is that when you have a return statement without any arguments, Go automatically returns the named return values! The whole idea is handy because you just have to put a "return" statement in your code and Go will return the appropriate variables. However, named return statements can introduce nasty bugs and damage the readability of your code, so you should only use them in very short functions. Named return values are returned in the order they were declared in the definition of the function. The use of named return values does not prohibit you from getting each individual return value as you did previously with multipleReturn(). You can find all the aforementioned examples in named.go, which is also shown in Figure 2. Run named.go and try to guess the results!

The Go Profiler
A profiler is a program that allows you to analyse the behaviour of your code and find problems or logical errors. In order to enable profiling, you will need to add some extra code to your Go programs; however, we will not deal with that extra code in this tutorial. The related Go code can be found in profileMe.go. Profiling profileMe.go requires the following steps and generates an additional file that holds the data:
$ go build profileMe.go
$ ls -l profileMe
-rwxr-xr-x 1 mtsouk mtsouk 1829776 May 11 17:48 profileMe
$ ./profileMe -cpuprofile=profileMe.prof
$ ls -l profileMe.prof
-rw-r--r-- 1 mtsouk mtsouk 64 May 11 18:08 profileMe.prof
$ file profileMe.prof
profileMe.prof: data
$ go tool pprof profileMe profileMe.prof
Welcome to pprof! For help, type 'help'.
(pprof)
The last command allows you to enter the environment of the Go profiler. You will learn more about profiling Go applications in a forthcoming tutorial that will talk about testing Go code.

Figure 2 Right Go supports named return values. When you use named return values, Go also supports a return statement without any arguments!

Disadvantages of functions
Sometimes the last place you would search for a bug is inside a function – trust no one! Functions may increase the size of smaller programs. If the developer decides to use pointer variables to functions, they must be extra careful not to introduce nasty bugs. If your program uses many functions, you might find yourself dealing more with the functions than with the actual program, so always stay focused. Apart from these minor disadvantages, your programs will only benefit from the use of functions.

Anonymous functions
Anonymous functions are simply functions without a name. You can use anonymous functions either as parameters to other functions or as the return values of other functions. If you need a way to keep an anonymous function around, you can use a variable. You can declare an anonymous function and attach it to a variable as follows:
double := func(s int) int { return s + s }
Then you can use the double variable to call the anonymous function:
fmt.Println("The double of", y, "is", double(y))
The tricky thing here is that if you change the value of the double variable and point it to a different anonymous function, the double variable will have a different meaning. This can generate nasty bugs and reduce the readability of your program. However, anonymous functions do not always need to be attached to variables. In the following code, an anonymous function is declared and used in line:
for i := 0; i <= 5; i++ {
    myDouble := func(s int) int { return s + s }(i)
    fmt.Println("The double of", i, "is", myDouble)
}
The (i) at the end of the anonymous function is very important, because "i" is the parameter of the anonymous function, which means that myDouble just holds the return value of the anonymous function. The anonymous function inside the for loop cannot be used elsewhere in the program! That is why such anonymous functions are usually small in size and have a local usage. You can see the Go code of anonymous.go in Figure 3. Executing anonymous.go generates the following output:
$ go run anonymous.go
The double of 0 is 0
The double of 1 is 2
The double of 2 is 4
The double of 3 is 6
The double of 4 is 8
The double of 5 is 10
The next section shows more examples of anonymous functions combined with defer.
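One more anonymous-function pattern worth noting before moving on: an anonymous function can also be returned from another function. A minimal sketch (not taken from the tutorial's files; the names here are made up) is a counter generator, where the returned function keeps its own sum variable alive between calls:
func counter() func() int {
    sum := 0
    return func() int {
        sum++
        return sum
    }
}
next := counter()
fmt.Println(next(), next(), next()) // prints 1 2 3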
About defer
The defer keyword defers the execution of a function until the surrounding function returns. A very common use of defer is when performing file I/O:
r, err := os.Open(aFilename)
if err != nil {
    panic(err)
}
defer r.Close()
// Process aFilename using r
In this case, you declare that you want to close the file assigned to the r variable when the surrounding function
ends; in the meantime, you can keep using aFilename. In other words, you cannot accidentally close aFilename and try to use it afterward, which will crash your program. This technique also puts the Close() function close to the Open() function so you cannot accidentally leave aFilename open after the function exits. However, defer can also be used in other cases. You can see the Go code of myDefer.go in Figure 4 – myDefer.go generates the next output, which is quite interesting:
$ go run myDefer.go
3 2 1 0
4 4 4 4
3 2 1 0

Figure 3 Left The code in anonymous.go illustrates how anonymous functions are declared and used in Go
Figure 4 Left The presented Go code illustrates the way defer works, which is relatively tricky sometimes
The first line of output verifies that deferred functions are executed in LIFO order after the surrounding function returns! As each defer in a() captures the value of the "i" variable at the time of the defer statement, it is logical that all numbers are printed in reverse order. The defer call in the b() function is extremely tricky. The deferred anonymous function is not executed until after the for loop has ended, and when the for loop ends the value of the "i" variable is 4. As the anonymous function is called four times because of the for loop, and the value of "i" is 4 by then, the number 4 is printed on your screen four times! In other words, you have four function calls that all use the last value of a variable, because this is what is passed to the function. As far as the c() function call is concerned, its output is the same as the a() function, because the deferred function takes an argument that keeps its value – as expected, the numbers are printed in reverse order because of the LIFO order in which deferred functions are executed. As using defer can be tricky, you should attempt to write your own examples and try to guess their output before executing the code.
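The source of myDefer.go only appears in Figure 4. A minimal sketch of three functions that would behave as described above (assuming the Go toolchain of the time, where the loop variable is shared by closures) might look like this:
func a() {
    for i := 0; i < 4; i++ {
        defer fmt.Print(i, " ") // argument evaluated now, executed LIFO on return
    }
}
func b() {
    for i := 0; i < 4; i++ {
        defer func() { fmt.Print(i, " ") }() // closure reads i after the loop ends
    }
}
func c() {
    for i := 0; i < 4; i++ {
        defer func(n int) { fmt.Print(n, " ") }(i) // i is copied into n now
    }
}
func main() {
    a(); fmt.Println()
    b(); fmt.Println()
    c(); fmt.Println()
}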
Functions and errors
Go has a clever way of dealing with errors with the help of pattern matching. Functions can take advantage of this capability and help you deal with errors that might happen inside a function outside of the function. Imagine the next function declaration:
func division(x, y int) (int, error) {
    if y == 0 {
        return 0, errors.New("Cannot divide by zero!")
    }
    return x / y, nil
}
The division function returns an integer value and a variable of type error. In other words, a Go function can pass an error message to the main program. The main program will decide how to react to the error message based on the existing policy. The errors.New() function is used for converting a string to an error variable. You can see the Go code of errors.go in Figure 5. Executing it produces output similar to the following:
$ go run errors.go
The result is 1
2016/05/12 08:30:13 Cannot divide by zero!
exit status 1
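The calling code is only shown in Figure 5; a hedged sketch of how errors.go might use division() and produce output like the above (using log.Fatal, which prints a timestamped message and exits with status 1) is:
result, err := division(2, 2)
if err != nil {
    log.Fatal(err)
}
fmt.Println("The result is", result)
if _, err := division(5, 0); err != nil {
    log.Fatal(err) // reached, because dividing by zero returns an error
}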
Using pointer variables in functions
Go has support for pointers, which are memory addresses that offer speed in exchange for difficult-to-debug code. The following defines a function that takes a pointer as an argument:
func withPointer(x *int) {
    *x = *x * *x
}
As you can see, you do not need to return the new value – the new value is stored in the argument of the function because it is passed using a pointer!
Figure 5 Right This code demonstrates how error messages are passed from a function to the main program
Figure 6 Across The code in pointers.go shows how a function can get a pointer as input and how a function can return a pointer
The next code defines a function that returns a pointer to a structure named complex:
func newComplex(x, y int) *complex {
    return &complex{x, y}
}
The full Go code, which is named p2Functions.go, can be seen in Figure 6. Executing p2Functions.go creates the following output:
$ go run p2Functions.go
16
{4 5}
Despite their usefulness, pointers are the origin of many problems and bugs, so only use them when necessary!
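The main() of p2Functions.go is only shown in Figure 6; a sketch consistent with the output above (and with the complex struct defined in the Interfaces section below) would be:
func main() {
    i := 4
    withPointer(&i)
    fmt.Println(i) // 16, because i was squared through the pointer
    c := newComplex(4, 5)
    fmt.Println(*c) // {4 5}
}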
Interfaces
Interfaces are an advanced Go feature that will be explained here as gently as possible, though you may want to do some more research in order to fully understand their place here. Still, the basic information that you need to be aware of is that although Go has no direct support for objects and hierarchies, interfaces can be used for mimicking objectoriented programming. An interface is two things: a set of methods and a type. Interfaces are used for defining the behaviour of an object. Suppose that you have the following Go code that defines a new type named complex:
type complex struct {
    X int
    Y int
}
The complex type is a structure with two integer fields. You can now create an interface that you want to be used by the complex type. The most important part of interface.go is the following, because this is where the interface is defined:
type complexParts interface {
    real() int
    imaginary() int
}
It is important to notice that the interface does not state any other specific types apart from the interface itself. On the other hand, the two functions of the interface should state their return values. The definition of the complexParts interface says that if you want to conform to the complexParts interface, you will have to implement two functions named real() and imaginary(). The main advantage of this is that if you have another type that contains a complex number in some form, you can make it conform to the complexParts interface by implementing the real() and imaginary() functions.
Advantages of functions The biggest advantage of functions is that you do not have to write the same code many times in your programs; this improves the maintainability of your code. A side effect of maintainability is that you only have to look at one place for bugs. By contrast, if you optimise the code of a function, all optimisations are automatically propagated to the rest of the program. Additionally, functions improve the readability of a program.
Figure 7 Across The Go code of interface.go illustrates the use of interfaces in Go
Figure 8 Left Go offers a tool that can help you detect unreachable code, which is a sign of a design error or a bug in your code
The “go vet” command Sometimes, your code has logical errors that are hard to find because they are valid Go code. One of these logical errors is the presence of unreachable code, which is source that can never be executed because there is no path to it. Go offers a command (go vet) that can help you detect such errors. The following Go code, included in unreachable.go, contains unreachable code:
if x == 0 {
    fmt.Println("x equals 0!")
    return
} else {
    fmt.Println("x is not equal 0!")
    return
}
fmt.Println("Reach me if you can!")
Running "go vet" against it will produce the following output:
$ go vet unreachable.go
unreachable.go:18: unreachable code
exit status 1
You can see the pretty simplistic Go code of unreachable.go in Figure 8. As you can guess, the "go vet" command is more suitable for larger source files.
The following code illustrates this by treating integer numbers as complex numbers with an imaginary part of 0:
type myInt int

func (s myInt) real() int {
    return int(s)
}

func (s myInt) imaginary() int {
    return 0
}
From now on, you can use the findComplexParts() function for both the complex and myInt types! In Object Oriented
terminology, this means that both complex and myInt types are complexParts objects. However, none of them is just a complexParts object. You can see the source code of interface.go in Figure 7. Executing interface.go will generate:
{1 2}
R: 1
I: 2
R: 10
I: 0
The more you use interfaces the more you will fully understand their usefulness and their true importance.
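findComplexParts() itself only appears in Figure 7. A hedged sketch of a function that accepts anything satisfying the interface, together with the methods the complex struct would need (both are reconstructions, consistent with the output shown above), is:
func (s complex) real() int {
    return s.X
}
func (s complex) imaginary() int {
    return s.Y
}
func findComplexParts(c complexParts) {
    fmt.Println("R:", c.real())
    fmt.Println("I:", c.imaginary())
}
With these in place, findComplexParts(complex{1, 2}) and findComplexParts(myInt(10)) would produce the R: and I: lines shown above.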
THE ESSENTIAL GUIDE FOR CODERS & MAKERS
PRACTICAL
Raspberry Pi
Contents
Terrarium controller
Locations for Raspberry Pis
Visualise music in Minecraft
Build a Ras Pi picture frame
Build an Explorer robot: Part one
Feature
PEN TEST WITH RASPBERRY PI
Your Raspberry Pi isn't just great for DIY projects – it can be employed to test the security of your network too
These distros give you the opportunity to test your network and device security. Use them wisely, consider the results carefully, and make the right security decisions
You will need
Raspberry Pi 2 or 3 While any version of the Pi can be used for pen testing, the more processing power you’ve got, the better. But you can still pen test with an older Pi.
Wi-Fi dongle
thepihut.com £6.00 Even if you’re using a Raspberry Pi 3, you’ll need to use an external USB Wi-Fi dongle, as the on-board wireless is unsuitable for pen testing.
RAVPower rechargeable battery pack
www.amazon.co.uk £21 Various portable power solutions are available for the Raspberry Pi, from DIY battery packs to rechargeable battery solutions with a full day's charge – ideal for evaluating wireless access point security!
Touchscreen display
uk.farnell.com £57 You can interact with your pen-testing Pi over SSH, but for the best results a display is advisable, so you can easily launch testing tools and acquire instant reports.
It makes a great retro gaming machine, and you can use it to drive a robot. The Raspberry Pi has even been into space, so really, it shouldn't surprise anyone that the compact dimensions of our favourite Linux computer also make for a capable penetration-testing device. As with the most effective Pi projects, configuring your device for pen testing is remarkably simple. Once you've chosen a suitable distro (see overleaf for more details), all you need to do is write it to SD card, boot your Raspberry Pi and start evaluating the strength of your home network or desktop computer (or other hardware you have access to). With pen testing software installed on your Raspberry Pi, the options are considerable. You'll have the tools to sniff network traffic, report on DNS information, crack ZIP archives, scan TCP ports, change MAC addresses on network interfaces, and many more besides. Read on for an idea of what you can do with these. Browsing the available tools, you'll spot quite a few that can potentially be used for dishonest or malicious purposes. Do not do this. These distros deliver to you the opportunity to test and shore up your network and device security. Use them wisely, consider the results carefully, and make the right security decisions. And think about ethical hacking, and using these tools for good, not for evil.
What is pen testing? Short for “penetration testing”, pen testing is basically assessing a computer, wireless access point, underlying network, or individual application, to see how it would handle an online attack. If vulnerabilities are present, then this would be detected by the pentesting software employed. When starting out with pen testing, you’ll find that a lot of tools are available to use. Indeed, the breadth of choice is considerable, and while there is some duplication of function, it is important not to be intimidated by the selection available. Some tools are open, naturally, although some are proprietary; the choice of which to use will be yours. Your choice will also be dictated by the package or operating system you employ. All you need for pen testing with a Raspberry Pi is a wireless dongle (the Raspberry Pi 3’s
onboard Wi-Fi isn't up to the task), perhaps a battery pack in order to take your pen-testing tool mobile, and a suitable distro. Kali Linux is a particular favourite, as is Raspberry Pwn. Another distro, PwnPi, comes with 200 network security tools pre-installed, giving plenty of penetration-testing choice. Before proceeding, keep in mind that pen testing networks and computers that do not belong to you is illegal. Unless you're a government agency or department, you don't
have the authority to do this. However, these tools are available for you to employ on your own computer and network. If you have ever been concerned about the resilience of a particular app to intrusion, or about the competence of a particular firewall application, then pen testing is a good way to find the answers you want. Similarly, if you want to test your home network, your wireless network hub or mobile carrier-provided wireless access point, then pen testing is the way to evaluate your security.
Kali Linux
Built as a dedicated penetration-testing distro, Kali Linux is available for ARM devices like the Raspberry Pi, just as it is for standard x86 and x64 desktop and laptop systems. In many cases, it will be the default destination for anyone planning on building a pen-testing system, and the Raspberry Pi version (available from https://www.offensive-security.com/kali-linux-arm-images – ensure you download the correct version for your device) is just as capable. As you will have realised by its nature, Kali Linux comes with a vast array of penetration-testing tools pre-installed. Based on Debian, and also suitable for computer forensics and reverse engineering, wireless penetration testing is a particular strength, and the distro includes over 300 pen testing and security auditing tools. Among these you'll find the Metasploit Framework, Aircrack-ng and many others. Looking beyond pen testing at a more general use for Kali Linux as a security-focused distro, tools are categorised by type, with Vulnerability Analysis, Password Attacks, Stress Testing and Exploitation Tools all useful to anyone
Tool 1
interested in pen testing, along with Wireless Attacks. With all of these options available you’ll easily find the types of pen test tool you’re looking for. Unlike the other distros and packages featured here, tools on Kali Linux can be run in the GUI or in the Terminal – it depends entirely on the utility concerned, but they’re usually command line apps, or capable of working in both dimensions. But, and here’s the important thing, while Kali Linux is known as the de facto distro for security evaluation, you might not feel entirely comfortable using it on your Raspberry Pi. After all, it’s not the only option for the Pi (unlike other devices) and you may feel more comfortable with a distro that is already Pifriendly, or software that runs on Raspbian.
Key features
• Over 300 pen testing tools pre-installed, meaning that you should be able to find exactly what you're looking for.
• Tools can be run in the Terminal or the GUI, making Kali Linux ideal for portable pen-testing Raspberry Pis with touchscreen displays.
• Tools include Metasploit, Aircrack-ng and many other popular names in pen testing.
• The vast number of tools pre-installed means that they're collected into categories, such as Vulnerability Analysis, Password Attacks, Stress Testing, etc.
• Kali Linux has distros available for a wide variety of devices, from standard PCs to Chromebooks and even BeagleBone!
• Doesn't work on older Raspberry Pis.
Raspberry Pwn
If Kali Linux doesn't suit your needs, or you want to go for a more Raspberry Pi-focused experience, then the Raspberry Pwn collection of security and auditing tools is a strong alternative. Rather than being a complete distro, you can install Raspberry Pwn in the Terminal – useful if you don't want to use a dedicated distro (which might get in the way of some other projects you have running). However, Raspberry Pwn is not suitable to be run on Raspbian. Instead, you'll need to flash the full Debian or Ubuntu builds for the Pi, as there is a compatibility issue with the ARM HF architecture. To install, you'll first need to install git, then download the installer:
sudo apt-get install git
Tool 2
sudo git clone https://github.com/pwnieexpress/Raspberry-Pwn.git
Once this is done, change to the Raspberry-Pwn directory and run the installer script.
cd Raspberry-Pwn
sudo ./INSTALL_raspberry_pwn.sh
When the installation is complete, you'll have a wide selection of pen testing and security audit tools at your command, such as SET, Fasttrack, kismet, aircrack-ng, tcptraceroute and many others. Once installed, your Raspberry Pwn-powered Pi will be ready for instant pen testing, giving you access to the wide selection of pre-installed tools via the command line.
Key features
• Downloadable package that you can install on the Pi's Ubuntu image. While the package can be installed on Raspbian, a few of the tools may not work without tweaking. However, if you have a current project underway, this option might be more attractive than having to start from scratch again.
• Installation requires git cloning, so it can be slow depending on GitHub server load. With 200-plus tools to install, you can go for a coffee as it does its thing.
• Among the many tools you'll find SET, Fasttrack, kismet, aircrack-ng and tcptraceroute, all ideal for pen testing!
PwnPi
Tool 3
With over 200 tools pre-installed, PwnPi is a distribution available from http://pwnpi.sourceforge.net. As a full distro, you'll need to download the file and write it to an SD card before booting up your Raspberry Pi. Once you've done this, the pen testing-focused distro will provide access to its vast library of utilities, with tools such as network traffic analyser darkstat, wireless monitoring tool kismet, and of course reaver, which we can use to launch a brute-force attack on a wireless router password or PIN, and much more.
Again, the tools in this collection are mostly designed for the command line, which is where you'll spend most of your time with PwnPi. The login defaults for this distro are username: root, password: toor. Once you're up and running, multiple pen testing tools can be launched, thanks to the Openbox window manager. Happily, the distro also supports a wide selection of USB wireless cards, including common devices such as the popular 802.11n device from The Pi Hut, along with devices from Belkin, D-Link, TP-Link, Ralink and more. While the focus on command line tools might be disconcerting for some, PwnPi is perhaps the most streamlined and Pi-friendly of the tools featured here.
Test your security

01 Test your Wi-Fi router WPA password
One of the first things you can do with your Raspberry Pi pen testing suite setup is to test the password on your wireless router. To do this, we use reaver, a custom utility specifically designed to sniff out passwords on wireless routers and other access points (such as your smartphone's wireless tethering mode). While WPA and WPA2 are rightly considered to be stronger than WEP, routers come with security holes that can be exploited with certain tools; reaver is that tool.

02 Is your WPA password crackable?
Get started by collecting information about your Pi's wireless card with iwconfig. The default interface is wlan0, and we need to switch the card into monitor mode.
airmon-ng start wlan0
This will output the monitor mode interface name, usually mon0. It's now time to find the BSSID of your router, so run
airodump-ng wlan0
If this doesn't work, try airodump-ng mon0. Identify the BSSID (which looks like a MAC address) that you're looking for, and make a note. Now you can instruct reaver to crack the password with this command, changing [BSSID] to include the BSSID you noted earlier.
reaver -i mon0 -b [BSSID] -vv
The strength of your router's password will determine how long this takes.

03 Reaver cracked your router – what's next?
Typically reaver will take five to ten hours to complete, and it may appear that your Pi has locked up during this. The best solution is to leave things as they are, but Ctrl+C will cancel reaver's scan. Reaver uses a vulnerability in Wi-Fi Protected Setup, the PIN access system supported by many routers. Disabling it in the router settings will mitigate this attack vector.

04 Other tools for password sniffing
Pen testing isn't just about cracking passwords; several other tools are included in these packages and distros that provide further assistance in this area. So, take a look at John (aka John the Ripper) and aircrack-ng, among many others.

05 Avoid Wireshark, use tcpdump
With its great graphical interface, you might have seen Wireshark in use elsewhere and, after finding it installed in Kali Linux or one of the other tools featured here, decided to give it a go. The problem is, Wireshark is a configuration nightmare on the Raspberry Pi (alongside, for instance, Metasploit – neither seems correctly configured for reliable performance), so you need a strong alternative. Specifically, you need something that doesn't need a load of configuration beforehand, and such a tool is tcpdump. Like Wireshark, tcpdump will display your network traffic, and after running tcpdump --help to check the available commands, you should have a good idea of what results you can get. Try this to view traffic through the Ethernet port:
sudo tcpdump -i eth0
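tcpdump also accepts filter expressions and can save captures for later analysis; the following are illustrative examples rather than part of the original walkthrough (the interface and filename are your own choice):
sudo tcpdump -i wlan0 -n port 80
sudo tcpdump -i wlan0 -w capture.pcap
The first command shows only HTTP traffic on the wireless interface without resolving names, while the second writes raw packets to a file you can inspect later.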
06 Where is your data headed?
One of the problems with the wide use of firewalls in PCs, routers, servers and remote computers is that while things are made more secure, and you get more control over the data that comes in, it can be difficult to establish where your data is going. To find out more, we can use the tcptraceroute tool (which has various options; you can see them with the -h switch). For instance, if we suspected data was being removed from our computer, and had established a possible IP address for the attacker, tcptraceroute would show where the data was headed. Use
tcptraceroute [HOSTNAME] …where the hostname is a URL or IP address.
07 Does your network smell?
To establish whether or not your online activity is secure over wireless, you need a sniffer tool that is capable of passively picking up any data that might be available to attackers. A good choice here is dsniff, which can be run using the command
sudo dsniff -i wlan0
Again, the -i option is included before you specify the network device, in this case wlan0 for the wireless internet card. Dsniff will run, displaying any results unless you've specified a save file (with -w [FILENAME]). Of course, you don't want any results here, but you can establish whether your passwords and other data can be sniffed by logging in and out of a couple of websites.
08 Map your network
It's a good idea, when pen testing, to have an idea of the devices currently attached to your network. While interrogating the router is one option, admin consoles can be slow, and don't tell you much about the various operating systems in use. So, an alternative is nmap, "The Network Mapper".
09 Scan your network with nmap
The basic command format, or syntax, for nmap is along the lines of
sudo nmap scan type options target
There is flexibility here; for instance, if you wanted to find out which OS a target device is running, you would use something like
sudo nmap -O [TARGETNAME]
…where [TARGETNAME] is replaced with the network device name or IP address. Scans like this can take a while to run, but when complete will feature details about the OS and any embedded systems within the device. For a hacker, this can reveal a vast tranche of security vulnerabilities. There may be a device on your network that presents an easier target than your main PC, phone or tablet. Your network is only as strong as its weakest link, and the aim is to find this and fix it.

10 Scan ports with nmap
The resilience of a computer's network ports can also be established using nmap. All you need to find open ports is a very basic command:
nmap [TARGETNAME]
Open ports will be listed, along with their types (such as TCP) and what service they are running (HTTP, MySQL, perhaps SSH). Nmap will also log the state of the ports, which might be open, closed, filtered, unfiltered, open|filtered or closed|filtered. Fixing any weaknesses here means ensuring that the firewall on the target device is aware of the open ports.

11 Has your network changed?
While nmap shows you what's happening on your network, it only provides a snapshot of the network as it is on the date and time when you run the software. For a more reliable and organic result, EtherApe should be your choice. Rather than a one-time-only report of your network topology, EtherApe is capable of scanning in real time, and reporting back on any changes to the network as they occur. This has various uses for pen testing, not least monitoring how a controlled attack is progressing.

12 Scan with EtherApe
Happily, scanning with EtherApe is relatively simple, but you'll need to boot into the GUI to use it. All you need to do is run the tool to see the results: a diagram of your network and the connected devices. Various views are available, and by adjusting the Preferences settings you can adjust the type of information that is reported, depending on what it is that you want to see. Additionally, clicking on a network device will display a report on its IP address, the network speed, and more besides.
13 Traffic fingerprinting with p0f
More of your network can be assessed and analysed with p0f, a network fingerprinting utility that reports on any computers on your network, their IP addresses, operating systems and web clients. In addition, p0f displays device uptime, and can report on inbound connection attempts in SYN mode. You can run p0f using the tool’s name on its own (which defaults to the primary active network connection on the Pi), or get more specific with something like this:
sudo p0f -i wlan0 -vt
In the example above, the -i switch enables you to specify the network device, while -v indicates verbose mode; t adds a timestamp.
14 Manipulating p0f
Other conditions you can add to the p0f scan include -o to create an output file:
p0f -i wlan0 -vto output.txt Utilising passive fingerprinting, p0f captures traffic information coming from a connecting host, rather than targeting a remote host. Because of this, it is capable of returning results quickly, and without detection. For pen testing, this is very useful, and can also be used to determine the origin of an attacker.
15 Tip of the pen-testing iceberg
With over 200 pen-testing tools at your fingertips, it is unlikely that you won’t be able to find a solution to your particular ethical hacking issue. The problem is the sheer volume of what’s available, but we’ve given you a few suggestions to get you started. But remember, this is all about ethical hacking; these tools can be used for good as well as bad, so make sure you’re wearing your white hat and only testing on your own system. Responsible pen testing is just as important as getting the right results!
Mount the controller
In Tom's project, the terrarium controller is mounted to the rear of the terrarium itself and is secured with adhesive-backed Velcro. Not only does it allow easy access if a problem arises, it's also ideally placed for all subsequent wiring to and from the terrarium

Sensor wire
The main sensor wires run through the top of the terrarium and are coated with heat shrink tubing. Due to the number of wires involved, and the humidity from the terrarium, heat shrink tubing is the best sort of material to keep them together without damaging them

AM2302 sensor
The first component that is wired to the Pi's GPIO is the combined temperature and humidity sensor. While it's considerably cheaper than many terrarium sensors on the market, it's just as easy to automate whenever you see fit

Avoid electrocution
Using relays can be dangerous when controlling mains electricity, but by using Energenie's Pi-mote starter kit, the action can instead be remote controlled. Various heating timers and a routine can then safely be set for the terrarium
Components list
■ AM2302 temperature and humidity sensor
■ Energenie Pi-mote starter kit
■ Male-female jumper wire
■ Solid core prototyping wire
■ Heat shrink tubing
■ Small suction cup
■ Adhesive-backed Velcro
Right: The controller is fixed to the back of the terrarium for both easy access and its convenient location for sensor wire placement
Below: A small suction cup helps fasten the sensor to the side of the terrarium. While industrial tape can be used, the humidity may cause it to unwrap quickly
My Pi project
Terrarium controller Keep plants or reptiles at optimal temperature with this automated terrarium controller project When did you decide you wanted to build a terrarium? I’d built my terrarium to grow tropical plants, but soon realised that it couldn’t maintain a stable temperature. The bulbs provided enough heat, and I had a cooling system for when it got too warm – the problem was, the cooling needed to run almost continuously on hot days but hardly at all on cold days. All the terrarium controllers on the market were either mechanical timers or overkill (huge units designed for greenhouses). I needed a simple controller to monitor the conditions, switch mains-powered devices on or off as required, and plot my data onto the Internet Of Things service ThingSpeak. Could you give us an insight into the development process? It took me several weekends to order all the components, write the Python scripts, and assemble a working prototype. All the required software is available via the Python Package Index or GitHub. Adafruit maintains a library for reading DHT sensors, the GPIO Zero library can be used to control Energenie mains sockets, and the Requests HTTP library can be used to push your data to the cloud. I initially wired everything together using a breadboard, duct-taped it to the terrarium, and only soldered it together after a successful trial run. It was then that I realised the same design might be useful for a snake vivarium, so I decided to write the tutorial and stick my scripts up on GitHub. Did you face any problems? For a couple of weeks the project became a bit of an obsession, and I think my wife worried I was going crazy. I’d ordered lots of the terrarium components from an online hydroponics store, and I think my neighbours suspected I was rigging up a system for growing drugs! Other than that, it was plain sailing. The main technical issue was that the transmitter for the Energenie remote
sockets is designed to sit directly on the Raspberry Pi's GPIO, occupying almost the entire header and leaving no power pins for other components, like the temperature sensor. To avoid this problem, you can use an extra-short ribbon cable of jumper wires to connect the GPIO pins which are actually needed directly to their respective sockets on the transmitter. This way, you've still got room for your sensor and for any other components you want to add in the future.

I chose the Pi 3 because I wanted the flexibility to add new components

What role does the Raspberry Pi play? It's essentially the brains of the whole thing. In terms of the raw computational power that's required, the Ras Pi 3 is hugely overpowered for this project. An Arduino or Raspberry Pi Zero would probably have served just as well. I chose the Pi 3 primarily because I wanted the flexibility to add new components in the future. You could build a small alphanumeric screen to display a read-out of the current conditions; install a soil sensor to monitor your plants' moisture levels; design a web interface or app to control your mains sockets remotely; or wire a spare GPIO pin to an ultra-bright LED mounted in the lid for a night-time moonlight effect.

How was implementing things like temperature and humidity control? The AM2302 temperature and humidity sensor – which is a wired version of the DHT22 – has been great so far. It costs less than £10, provides accurate readings, and uses minimal power. I've heard that prolonged exposure to the high moisture levels common to many terrariums and amphibian enclosures may eventually reduce the accuracy of the humidity reading, but it's been fine. As for controlling temperature and humidity, this can be done using any mains-controlled components. Heating pads designed for snake enclosures and cheap computer fans can be positioned and then activated/deactivated when the temperature or humidity reaches your chosen thresholds. It isn't as difficult as you might think.

Do you have any advice for anyone interested in recreating your project or working on something similar? I'd say it's a great project for anyone who owns a terrarium or reptile enclosure. It should also be a good first serious project for children, especially given the number of ways it can be extended. I certainly learnt a lot while developing it. The basic design could be adapted for any situation where you want to activate or deactivate mains-powered devices depending on ambient conditions. There are also many things you could do with the data being pushed to the cloud – ThingSpeak offers a range of MATLAB visualisations and if-this-then-that style actions. There are countless possibilities for what can be done here.

Have you got any other Raspberry Pi projects that you are currently working on? What do you think of it? I love working with the Raspberry Pi. It's a great device, and it has an incredibly active and helpful community. My next project is a remote-controlled cat feeder. Most of the cat feeders on the market are flimsy – no match for my two cats, who are fat and strong – and based on timers. I want to build a device that dispenses a portion of food based on a signal from my phone. The goal is for cat owners to be able to feed their pets remotely when they're working late or away travelling for a day.
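Tom's actual scripts are on his GitHub page (see Further reading), but the pieces he mentions, the Adafruit DHT library, GPIO Zero's Energenie support and the Requests module, slot together roughly as follows. This is purely our own illustrative sketch, not Tom's code: the GPIO pin, the Energenie socket number, the thresholds and the ThingSpeak API key are all placeholder assumptions.

import time
import Adafruit_DHT             # Adafruit's DHT/AM2302 reading library
import requests                 # pushes readings up to ThingSpeak
from gpiozero import Energenie  # switches the remote mains sockets

SENSOR_PIN = 4                  # assumed data pin for the AM2302
TEMP_ON = 28.0                  # degrees C at which the fan switches on (example)
TEMP_OFF = 25.0                 # degrees C at which it switches off again (example)
API_KEY = 'YOUR_THINGSPEAK_KEY' # placeholder write key

fan = Energenie(1)              # assumed socket number for the cooling fan

while True:
    humidity, temperature = Adafruit_DHT.read_retry(Adafruit_DHT.AM2302, SENSOR_PIN)
    if temperature is not None:
        # Simple thermostat with a little hysteresis so the fan doesn't chatter
        if temperature > TEMP_ON:
            fan.on()
        elif temperature < TEMP_OFF:
            fan.off()
        requests.post('https://api.thingspeak.com/update',
                      data={'api_key': API_KEY,
                            'field1': temperature,
                            'field2': humidity})
    time.sleep(60)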
Tom Bennet by day is an SEO consultant based in London, but in his spare time he loves pushing the boundaries of the Raspberry Pi.
Like it?
Tom's project is fairly unusual, but it's very much possible to build your own Raspberry Pi heating monitor. While the finished product doesn't include the complexity of the terrarium controller, it certainly gets the job done. Head across to http://bit.ly/1g3AhR6 for the full tutorial.
Further reading
Tom has gone to the trouble of putting a detailed tutorial of the project on his blog over at www.bennet.org for anyone interested in building their own controller. You can also find all the necessary code available at his GitHub page: https://github.com/tombennet
Python column
Locations for Raspberry Pis This issue, we’ll look at how to use a GPS to tell your Raspberry Pi where it is
Joey Bernard
Joey Bernard is a true Renaissance man, splitting his time between building furniture, helping researchers with scientific computing problems and writing Android apps
In previous articles, we have looked at how to add various types of sensors to the Raspberry Pi to get it to interact with the world around it. One very useful piece of information that your Raspberry Pi could collect is its location on the Earth. It may be very useful to know where you are in order to decide on what action to take. This issue, we will look at adding the functionality of GPS (Global Positioning System) to your Raspberry Pi and how to use it within your own Python code. For those of you who may not have run into GPS before, it is a constellation of 31 satellites orbiting the Earth. Using extremely accurate time systems and some very complicated maths, these satellites communicate with ground-based receivers and let them know where on the planet's surface they happen to be located. In this article, we will be assuming that you are using either one of the official Raspberry Pi GPS modules or using a standard GPS unit that connects over USB.
These types of GPS connections can be communicated with after you install the correct packages. If you are using Raspbian, you can install the needed packages with the command

sudo apt-get install gpsd gpsd-clients python-gps

The first package installs a daemon program that allows you to make a connection with your GPS over a network port. The second package installs client programs that you can use to verify that your GPS is working correctly and the Raspberry Pi can communicate with it. Verify that everything is working correctly by using one of the supplied utilities like

cgps -s

One thing to keep in mind is that sometimes things will stop working. If this does occur, you may need to restart the GPS daemon, with the commands

sudo killall gpsd
sudo gpsd /dev/ttyUSB0 -F /var/run/gpsd.sock

where the TTY device may be different for your particular situation. You can use the utility 'lsusb' to get a full listing of all of the devices that might be connected to your Raspberry Pi over USB. For the rest of this article, we will just assume that the GPS is available at '/dev/ttyUSB0'. At this point, you could simply write code to read data directly from the GPS by querying the file '/dev/ttyUSB0'. This would be very messy, however. Luckily, the daemon gpsd provides a translation layer that handles the reading of the raw data and formats it into a standard form that can be used within your own applications. The first step is to create a connection to the gpsd daemon so that you can talk to your GPS. The basic code looks like

import gps
gpsd = gps.gps()

By default, this will connect to the gpsd daemon on your local machine (the Raspberry Pi) on port 2947. There are also the parameters 'host' and 'port', which you can change if you have a unique setup, and the parameters 'verbose' and 'mode' that can control how this connection behaves. There are also instance methods for the newly created gps object that allow you to alter this behaviour. For example, you can use

gpsd.stream(gps.WATCH_ENABLE | gps.WATCH_NEWSTYLE)

The most basic thing you can do now is simply to grab the next available data packet with

record = gpsd.next()
You will likely want to check what class this message is in order to decide what to do with it. The first class we will look at is the TPV (Time-Position-Velocity) message. This type of message is what most people think of when they think of GPS. But this is not the only type of information that you can get. There is also a class of message called SKY, which provides a sky view of the satellites visible to your GPS receiver. Part of this message includes a list of satellite objects that contains the coordinates for each of the visible satellites. There are also ATT and GST messages which contain error and status messages, in case you are interested in digging into some of the details available. With the first class of messages, the TPV messages, there is a lot of information available. You can get a really accurate time source by looking at the time attribute of the message contents. You can get this with the code
if record['class'] == 'TPV':
    if hasattr(record, 'time'):
        print(record.time)

From this code, you can see that the returned record is actually a dictionary object. Depending on the exact details of your particular GPS device, it may not return all of the possible attributes for a TPV object. This is why you should use the Python built-in 'hasattr()' to check to see if the return object has the particular attribute that you want to use. You can get the current location of the GPS receiver, in terms of the latitude, longitude and altitude, with the code
latitude = record.lat
longitude = record.lon
altitude = record.alt

Do not, however, do it this way. We have removed any checks to see whether
these attributes exist or not. Do not be as lazy as we're being when writing your own code! The latitude and longitude are given in degrees, where +/- represent North/South and East/West, while the altitude is given in metres. You can check to see whether these are present or not by checking the NMEA (National Marine Electronics Association) mode of the returned data. A mode of 0 means it has not been set yet and a mode of 1 means that there is no fix. Once you have a fix, a mode of 2 means that you have two-dimensional data and a mode of 3 means that you have three-dimensional data. If you are moving, you can get other interesting bits of information. The TPV class object has the attributes of track, speed and climb. The track is the current bearing in degrees from true north, the speed is your current rate of travel in metres per second and the climb gives you the rate of climb (positive) or sink (negative) in metres per second. All of these values also have error estimates included in other attributes. The other major message class, SKY, provides you with information on the satellites visible above your head. You need to verify the message class with
if report['class'] == 'SKY':
    # Then look at the data

One of the attributes within this type of message is a list of satellite objects that contain the details of each satellite visible. The identifier for the satellite is in the 'PRN' attribute, and whether it was used or not in calculating your current position is stored in the attribute 'used' as a Boolean flag. The position of the satellite is given relative to your position. The attributes 'az' and 'el' give the values of the azimuth and elevation from your current location. The last attribute you get is the signal strength (in the attribute 'ss'), given in dB. Now that we have a connection to the GPS receiver and can talk to it, what could you do with it? If you wanted to build a tracker, you could build a loop that writes the current position out to a file. In the code
below, we have assumed that there is a file open, labelled as ‘file1’. Then, you could use
while True:
    record = gpsd.next()
    if record['class'] == 'TPV':
        file1.write('lat: ' + str(record['lat']) + ' lon: ' + str(record['lon']) + '\n')
This code example uses the dictionary form of the returned data. This is essentially the JSON form of the parsed data. You can read up more about this JSON data at the URL http://www.catb.org/gpsd/gpsd_json.html, the website for the gpsd daemon that we are using to communicate with the GPS receiver. If you wanted to use the example project given at the beginning of the article, you could keep checking to see whether you are in a particular location with code like
mylat = 46.498293369
mylon = 7.567411672
while True:
    record = gpsd.next()
    if record['class'] == 'TPV':
        if record['lat'] == mylat and record['lon'] == mylon:
            # Do something amazing

When you are done, you should remember to clean up all of the connections that you made earlier in your code. The gps object has a method available to shut down this connection. It looks like
gpsd.close()

This shuts down the socket connection to the gpsd daemon that is running on your Raspberry Pi. Now, you should be able to add geolocation to your projects without too much trouble. This means you can build really cool projects, such as a locked box that only opens if you happen to be in a particular location or a tracking device that records where your car has been travelling. Be sure to share anything interesting you may discover with the wider world so that we can all build on each other's ideas.
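One caveat about the location-checking loop above: comparing floating-point latitudes and longitudes for exact equality will almost never match, because successive fixes jitter by a few metres. A more forgiving approach, sketched below with the same placeholder coordinates and an example 20-metre radius, is to trigger whenever you are within a small distance of the target.

import math

mylat = 46.498293369
mylon = 7.567411672
RADIUS_M = 20.0   # example threshold: trigger within 20 metres of the target

def rough_distance_m(lat1, lon1, lat2, lon2):
    # Equirectangular approximation; plenty accurate over a few hundred metres
    mean_lat = math.radians((lat1 + lat2) / 2.0)
    dx = math.radians(lon2 - lon1) * math.cos(mean_lat) * 6371000.0
    dy = math.radians(lat2 - lat1) * 6371000.0
    return math.sqrt(dx*dx + dy*dy)

while True:
    record = gpsd.next()
    if record['class'] == 'TPV' and hasattr(record, 'lat') and hasattr(record, 'lon'):
        if rough_distance_m(record['lat'], record['lon'], mylat, mylon) < RADIUS_M:
            # Do something amazing
            pass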
But, where are we?
While the GPS receiver connected to your Raspberry Pi can give you your exact coordinates, most people cannot process latitudes and longitudes in any intuitive way. Luckily, there are several services available on the internet, called geolocation services, that can help translate these coordinates to another form that is more intuitive. You will need to install the relevant Python module on Raspbian, with the command

sudo apt-get install python-geopy

The core part of the geopy module is the concept of a geocoder. There are prebuilt subclasses for services like Google, Yahoo and Bing. You can use whichever is your favourite one. In the following examples, we will use the Nominatim geocoder for the OpenStreetMap geolocation service. To start with, you can create a geocoder with

from geopy.geocoders import Nominatim
geolocator = Nominatim()

Normally, most people use this to find the coordinates of an address. We will actually use it to do a reverse look-up to find out the address, if one exists, for the coordinates that the GPS has given you. You can do this with code like

location = geolocator.reverse((mylat, mylon))

You can give the coordinates as either a list of values, as above, or as a single string with the actual values. The returned location object contains the given latitude and longitude, but also contains an address attribute as a Unicode string. If you want to dig in to the returned details, you also have access to the complete returned JSON object as the attribute named 'raw'. For example, you could grab the country portion of the address with

location.raw['address']['country']

In our case, we get Canada back as our current country. There are also utility functions available to do manipulations with this geographical data. For example, you can find the distance between two locations with code like

from geopy import distance
location2 = geolocator.reverse("41.49008, -71.312796")
mydist = distance.distance(location.point, location2.point)

There are lots of other tools available, too, allowing you to play with this data.
Tutorial
Visualise music in Minecraft with the PianoHAT
Combine code, Minecraft and the PianoHAT to play music and create a visualisation of the melody
Dan Aldred
Dan is a Raspberry Pi Certified Educator and a lead school teacher for CAS. He is passionate about creating projects and uses projects like this to engage the students that he teaches. He led the winning team of the Astro Pi Secondary School Contest and his students’ code is currently being run aboard the ISS. Recently he appeared in the DfE’s ‘inspiring teacher’ TV advert.
Pimoroni has created the PianoHAT, the ultimate mini musical companion for your Raspberry Pi! It is inspired by Zachary Igielman's PiPiano and made with his blessing. The HAT consists of a dinky eight-key piano add-on, with touch-sensitive keys and LEDs. It can be used for many creative and musical purposes, such as playing music in Python, controlling software synths on your Raspberry Pi, taking control of hardware synthesisers, or unlocking your inner Mozart. This tutorial will show you how to set up the hardware, introduce you to the basic features of the software and show you how to combine these together in Minecraft to create musical blocks and a visualisation of your melodies. You can view a demonstration video here: https://www.youtube.com/watch?v=ezJgXp01MPk
01
Getting started
Pimoroni has made it extremely easy to install the software for your PianoHAT. Assuming you have not connected your HAT, simply attach the board and boot up your Raspberry Pi. Load the LX Terminal and update the software; type:
$ sudo apt-get update
$ sudo apt-get upgrade
What you'll need
■ PianoHAT
■ Raspberry Pi
Type the following line to install the PianoHAT libraries:

$ sudo curl -sSL get.pimoroni.com/pianohat | bash

Follow the instructions displayed. This will now download the required libraries and a selection of programs to try.
02
Basic events
The software install comes with a set of four example programs to get you started and demonstrate the features and functions of the PianoHAT. In terms of the code for the Piano, there are four basic events that you can control; these are:
on_note – triggers when a piano key is touched and plays a note.
on_octave_up – triggers when the Octave Up key is touched and raises the notes by one octave.
on_octave_down – triggers when the Octave Down key is touched and decreases the notes by one octave.
on_instrument – triggers when the Instrument key is touched and changes the sound from a piano to drums.
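Before diving into the full listing, it helps to see that event pattern on its own: write a handler function for each event and register it with the pianohat library. The stripped-down sketch below is our own, the handlers simply print what happened, but the pianohat calls mirror the ones used in the full listing on these pages. Run it with sudo, as with the other examples.

import signal
import pianohat

def handle_note(channel, pressed):
    if pressed:
        print('Key {} touched'.format(channel))

def handle_octave_up(channel, pressed):
    if pressed:
        print('Octave up')

def handle_octave_down(channel, pressed):
    if pressed:
        print('Octave down')

def handle_instrument(channel, pressed):
    if pressed:
        print('Instrument key touched')

pianohat.auto_leds(True)   # light the LED under whichever key is pressed
pianohat.on_note(handle_note)
pianohat.on_octave_up(handle_octave_up)
pianohat.on_octave_down(handle_octave_down)
pianohat.on_instrument(handle_instrument)
signal.pause()             # keep the script alive, waiting for touches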
No ID
Not all the notes on the piano correspond to a block ID and therefore some notes will not place a block. This will create a gap in the musical block line.

Full code listing

import pianohat
import pygame
import time
import signal
import glob
import os
import re
from mcpi import minecraft

mc = minecraft.Minecraft.create()
global move
x,y,z = mc.player.getTilePos()
print x,y,z
move = x
03
Simple Piano
To get used to the PianoHAT and its features, load the simple-piano program. This is exactly as the name describes: a simple piano. Navigate to the folder home/pi/Pimoroni/pianohat, and press F4 to start a Terminal session (the HAT requires root access and this method provides it). Next, load the piano program: type sudo python simple-piano.py and press Enter. Wait for the program to run and then play yourself a little tune. Use the Octave buttons to move the note range higher or lower, and press the Instrument button to toggle between drums and piano.
04
Musical silence
05
Teach yourself to play
The program called leds.py is useful; when you run this and press a key or note, the corresponding LED is lit up but the note is not 'sounded'. It demonstrates how you can separately control the PianoHAT LEDs and the sounds. You can turn all of the LEDs on or off, which is useful for creating a visual metronome, prompting a user which key to press next before a sound is played. Assuming you are still in the home/pi/Pimoroni/pianohat folder, type sudo python leds.py to run the program.
This neat little program teaches you to play a well known melody (can you guess what it is?). Run the program and the LED for each required note is lit up, indicating that this is the key to press. Press the key and the note is sounded. Once you have done this the next LED lights up; press this key and the note plays, and so on. Follow the LEDs to learn how to play the melody. You can use this program to experiment and create your own melody / song trainer.
06
Minecraft
The new Raspberry Pi OS image comes with Minecraft and the required Python library pre-installed. If you are using an old OS version, it is worth downloading and updating to the new Jessie Raspbian image, downloadable here: https://www.raspberrypi.org/downloads/ Go to the start menu and load Minecraft from the Programming tab. Be aware that the Minecraft window is a little glitchy when full size and it is recommended to reduce the size so you can view both your Python code and the game at the same time.
Full code listing (cont.)

BANK = './sounds/'
FILETYPES = ['*.wav', '*.ogg']
samples = []
files = []
octave = 0
octaves = 0

pygame.mixer.pre_init(44100, -16, 1, 512)
pygame.mixer.init()
pygame.mixer.set_num_channels(32)

patches = glob.glob(os.path.join(BANK, '*'))
patch_index = 0

if len(patches) == 0:
    exit('You need some patches in {}'.format(BANK))

def natural_sort_key(s, _nsre=re.compile('([0-9]+)')):
    return [int(text) if text.isdigit() else text.lower()
            for text in re.split(_nsre, s)]

def load_samples(patch):
    global samples, files, octaves, octave
    files = []
    print('Loading Samples from: {}'.format(patch))
    for filetype in FILETYPES:
        files.extend(glob.glob(os.path.join(patch, filetype)))
    files.sort(key=natural_sort_key)
    octaves = len(files) / 12
    samples = [pygame.mixer.Sound(sample) for sample in files]
    octave = octaves/2

pianohat.auto_leds(True)

def handle_note(channel, pressed):
    global move
    channel = channel + (12*octave)
    if channel < len(samples) and pressed:
        print('Playing Sound: {}'.format(files[channel]))
        print channel
        ### Saves the channel number / note as a variable to compare to block
        Block_number = channel
        samples[channel].play(loops=0)
        ### Sets block in front of you ###
        mc.setBlock(move, y+3, z+3, Block_number)
        move = move + 1  ### add one to the x pos to move blocks along in a line

def handle_instrument(channel, pressed):
Let's look at some simple Minecraft hacks that will be used in the final Musical Blocks program.
07
Importing the modules
Load up your preferred Python editor and start a new window. You need to import the following module using from mcpi import minecraft and mc = minecraft.Minecraft.create(). These create the program link between Minecraft and Python. The mc variable enables you to type ‘mc’ instead of having to type out the long-winded minecraft.Minecraft.create() each time you want to use an API feature. Next import the time module to add a small delay when the code runs.
from mcpi import minecraft
mc = minecraft.Minecraft.create()
import time
Below: Pimoroni's PianoHAT has a dinky little eight-key keyboard. The octaves can be lowered and raised

08
Finding your location
When playing Minecraft you inhabit a three dimensional environment which is measured by the ‘x’ axis, left and right, the ‘y’ axis up and down and the ‘z’ axis for forward and backwards. As you move along any of these axes, your position is displayed at the top left of the screen as a set of three co-ordinates. These are extremely useful for checking where the player is and can be collected and stored using pos = mc.player.getPos(). This code returns the position of your player and is applied later to the music blocks. Try the simple program below for an example of how the positioning works:
from mcpi import minecraft
mc = minecraft.Minecraft.create()
import time

while True:
    time.sleep(1.0)
    pos = mc.player.getPos()
    print pos.x, pos.y, pos.z
09
Grow some flowers
Each block in Minecraft has its own ID number, for example, flowers have the ID number 38. The code x, y, z = mc.player.getPos() gets the player’s current position in the world and returns it as a set of co-ordinates: x, y, z. Now you know where you are standing in the world, blocks can be placed using mc.setBlock(x, y, z, flower). Use the code below to place flowers as you walk around the world. Try changing the ID number to place a different block.
flower = 38
while True:
    x, y, z = mc.player.getPos()
    mc.setBlock(x, y, z, flower)
    time.sleep(0.1)
10
Creating musical blocks
Now you are au fait with the basics of Minecraft and the PianoHAT, let’s combine them to create a musical block. This uses the ID of each note in the PianoHAT and assigns it to each individual block. For example, the block ID 2 is grass and this corresponds to the note value of C. As you play the piano, the relevant block is displayed in the Minecraft world. Open the LX Terminal and type sudo idle to open Python with root privileges. Click file open and locate the simple-piano program, then open it and save it as a different name. You will use this as a template for the musical block program. Now import the modules and Minecraft API starting on line 11 of the program.
import mcpi.minecraft as minecraft
mc = minecraft.Minecraft.create()
11
Finding your position again
Under the line you just entered and before the line that begins “BANK”, line 19, create a global variable called move; this stores the ‘x’ position of the player. Now find your player’s position, line two, using the code you learnt in step 8. On line three, print the position – this is useful for checking that the position and block are functioning correctly. These values are printed to the Python console window. Now you have the position of your player in the Minecraft world.
global move
x,y,z = mc.player.getTilePos()
print x,y,z
move = x
12
Assign a note to a block
Next scroll down to the handle-note function, this begins on line 52 of the final program. After the function name, on the next line, add the global move variable from the previous step. This is the ‘x’ position of the player. The next line reads channel = channel + (12*octave): ‘channel’ refers to the number of the note. Move to the If under this line and create a new variable called Block_number which will store the channel number, the number of the note to be played.
def handle_note(channel, pressed):
    global move
    channel = channel + (12*octave)
    Block_number = channel
13
Set the block
In step nine you learned how to place a block: use this code to place the block that corresponds to the channel number you stored in the previous step. Within the if statement on line 56, under samples[channel].play(loops=0), add the code to place a block: mc.setBlock(move, y+3, z+3, Block_number). This places the block into the Minecraft world.
if channel < len(samples) and pressed:
    print('Playing Sound: {}'.format(files[channel]))
    print channel
    samples[channel].play(loops=0)
    ### Sets block in front of you ###
    mc.setBlock(move, y+3, z+3, Block_number)
14
The block explained
In the previous step you used the code mc.setBlock(move, y+3, z+3, Block_number) to play a note and place the block. This is achieved by saving the note number, for example, note five, into a variable called Block_number. When the program is run, the code finds your x position and saves this in a variable called move. This is combined with the setBlock code to place the block at your x position. In order for you to view the musical blocks, each block is moved across three and forward three spaces from your original starting position.
15
Moving the block line forward
Once the block is placed, increment the x position by one; this has the effect of moving the next block forward one space. As you play the notes on the Piano, a line of corresponding blocks is built, creating a simple graphical visualisation of the melody
you are playing. You will notice that some of the blocks appear to be missing – one of the causes is that there is no block ID number which matches the note ID number. The second reason for a space is that some of the materials are affected by gravity. For example, Sand, Water and Mushrooms all fall down from the line leaving an empty space. Under the line mc.setBlock(move, y+3, z+3, Block_number), line 64, add the code, move = move + 1.
mc.setBlock(move, y+3, z+3, Block_number)
move = move + 1
16
Posting a message to the MC World
The last step is to post a message to the Minecraft world to tell the player that the Piano and musical blocks are ready. On line 86 add the code mc.postToChat(“Welcome to musical blocks”). When you run your program you will see the message pop up at the bottom of the world. Try changing your message or use the same code-line to add other messages throughout the game. Once the message is displayed the samples have been loaded and your Minecraft Piano is ready.
Make your own sounds The piano samples are located and stored in the Pimoroni/pianohat/ sounds folder. Create your own sounds such as you singing the note or playing it on another instrument and you can create your own personalised piano synth.
mc.postToChat(“Welcome to the music blocks”)
17
Running the music block
Now that you have completed the code save it. Open Minecraft and create a new world. When this has finished loading, press F5 in IDLE to run your program. Press a key on the piano and look out for the block appearing just above your head. Remember that as the player’s position is measured only once at the beginning of the program, the blocks will always be placed from the same starting reference position. Play your melody to create a musical visualisation.
Full code listing (cont.)

    global patch_index
    if pressed:
        patch_index += 1
        patch_index %= len(patches)
        print('Selecting Patch: {}'.format(patches[patch_index]))
        load_samples(patches[patch_index])

def handle_octave_up(channel, pressed):
    global octave
    if pressed and octave < octaves:
        octave += 1
        print('Selected Octave: {}'.format(octave))

def handle_octave_down(channel, pressed):
    global octave
    if pressed and octave > 0:
        octave -= 1
        print('Selected Octave: {}'.format(octave))

mc.postToChat("Welcome to music")
pianohat.on_note(handle_note)
pianohat.on_octave_up(handle_octave_up)
pianohat.on_octave_down(handle_octave_down)
pianohat.on_instrument(handle_instrument)
load_samples(patches[patch_index])
signal.pause()
Tutorial
Make a Ras Pi-powered digital picture frame
Use a Raspberry Pi to create your own fully configurable digital picture frame, complete with touchscreen display

Christian Cawley
Christian is a former IT and software support engineer and since 2010 has provided advice and inspiration to computer and mobile users online and in print. He covers the Ras Pi and Linux at makeuseof.com.
Grab photos from the web You shouldn’t feel restricted by the photos you can display on your Raspberry Pi picture frame. Various options exist for you to pull images down from popular web services, such as Flickr or Facebook. To do this, you’ll need a dedicated Python script, which has happily already been written for you. Head to Samuel Clay’s Github to download them, but don’t add the scripts to rc.local until you’ve confirmed they work.
Digital picture frames that displayed a selection of your favourite photos were quite popular for a time, but are now seemingly available only as free gifts when signing up to magazine subscriptions. These tablet-like devices often made for interesting talking points, but were often let down by low memory, a poor user interface, or both. We don’t have to worry about either of those problems with this project. Here we are going to set up a Raspberry Pi with some photo-displaying software, connect a touchscreen display, place it in a suitable stand, and sit back to enjoy the results. Better still, with this set-up, we’ll be able to pull images from a range of online and offline sources, giving us some great variety.
What you'll need
■ Raspberry Pi 2 or 3 with Raspbian Jessie
■ Wi-Fi dongle (if using Raspberry Pi 2)
■ Display (official 7-inch Touchscreen Display recommended)
■ Phillips screwdriver
■ Frame and/or stand (suitable options for the official touchscreen display are available at Pimoroni and other dedicated Pi hardware and accessory suppliers)
■ Scripts from https://github.com/samuelclay/Raspberry-Pi-PhotoFrame
01
Prepare your Pi

You'll save a lot of time with this project if you ensure that wireless networking is set up, and SSH is enabled. Do both via the Raspbian Jessie desktop – you'll find the new Raspberry Pi Configuration utility in Menu > Preferences, where you can enable SSH in the Interfaces tab.

02
Connect the display

With a towel on your desk to avoid scratches, connect the Raspberry Pi to the back of the 7" Touchscreen Display, making sure that the cables are connected correctly. (Older releases require you to also connect and mount the display board). Secure with screws, and then mount in stand.
Left: The Raspberry Pi is connected to the back of the touchscreen display, ready to power this project
03
Prepare your photos
Naturally, you’ll need a collection of photos to display on the Pi-powered digital picture frame. We have different options here (see boxout) but recommend you start with photos stored on your Pi, ones that have been copied via USB, via a network drive, or downloaded through your browser.
04
Set up your Pi picture frame

To configure the Raspberry Pi as a picture frame, we first need to prevent the screen from switching off. This means editing the lightdm.conf file.

sudo nano /etc/lightdm/lightdm.conf

Add this line under [SeatDefaults]:

xserver-command=X -s 0 -dpms

When done, save and exit with Ctrl+X, then reboot:

sudo reboot

05
Install feh

Image viewing software feh is the best option for building a simple Pi picture frame, so install this.

sudo apt-get install feh

Once installed, instruct feh where to find the images, changing /media/STORAGE/test with your directory path.

DISPLAY=:0.0 XAUTHORITY=/home/pi/.Xauthority /usr/bin/feh -q -p -Z -F -R 60 -Y -D 15.0 /media/STORAGE/test

06
Save the script

The previous script should have prompted the Raspberry Pi picture frame to begin displaying images from the specified folder, for 15 seconds each. To force this to start at boot, we need to add it into a script. Create this with:

sudo nano /home/pi/start-picture-frame.sh

Add the following:

#!/bin/bash
DISPLAY=:0.0 XAUTHORITY=/home/pi/.Xauthority /usr/bin/feh -q -p -Z -F -R 60 -Y -D 15.0 /media/STORAGE/test

Exit and save, then test with:

bash /home/pi/start-picture-frame.sh

07
Make picture frame run at boot

The script can now be set to run at boot. Open

sudo nano /etc/rc.local

…and before exit 0, add:

sleep 10
su - pi -c '/bin/bash /home/pi/start-picture-frame.sh &'

Save and exit.

08
Reboot your Pi and test

Now you're pretty much done! To test this, use the usual sudo reboot command to restart the Raspberry Pi and check that the device boots straight into picture frame mode. Any problems, check your commands have been entered correctly, and in the right places, in lightdm.conf and rc.local.

09
Stop the picture frame

Should you need to stop the picture frame software at any time, this can be done with a simple command:

sudo pkill feh

As long as the images you prepared for your picture frame don't take up too much space on the disk, they should load up without any problems.
Make your picture frame smart

The same collection of scripts also features a file called pir.py, which, when used with a motion detection module, will turn the display on whenever someone passes the picture frame. A suitable infrared motion sensor module will set you back a couple of pounds, and should be connected to the Raspberry Pi's GPIO. Connect the pin labelled VCC to the 5V pin on your Pi, GND to GND, and Out to GPIO 4.
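We haven't reproduced pir.py here, but the idea behind it can be sketched with GPIO Zero's MotionSensor class. The version below is our own illustration, not Samuel Clay's script: it assumes the sensor's Out pin is on GPIO 4, as wired above, and that the picture frame is running on display :0.

import subprocess
from signal import pause
from gpiozero import MotionSensor

pir = MotionSensor(4)   # PIR Out pin wired to GPIO 4

def screen_on():
    subprocess.call(['xset', '-display', ':0', 'dpms', 'force', 'on'])

def screen_off():
    subprocess.call(['xset', '-display', ':0', 'dpms', 'force', 'off'])

pir.when_motion = screen_on       # wake the display when someone walks past
pir.when_no_motion = screen_off   # blank it again once the room is empty

pause()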
Explorer robot
Build an explorer pHAT robot Part one
Make your own autonomous Raspberry Pi-powered robot using a Pi Zero, Zumo chassis, and Pimoroni Explorer pHAT Welcome to part one of a special robotics series. We will equip you with everything you need to know to make your own Raspberry Pi robot. In this first part we break down a robot into all its component parts and give you an insider’s perspective on batteries, motors, input controllers and more.
Alex Ellis
@alexellisuk is a senior software engineer at ADP, a Docker Captain and an enthusiast for all things Linux. He is never far from a Raspberry Pi and regularly posts tutorials on his blog.
What exactly is a ‘robot’?
A robot is considered to be any machine that can perform a task or series of tasks, either autonomously once programmed or with direct instruction. Robots come in many shapes and sizes – they can be used for manufacturing, for search and rescue, and for just having fun and seeing what you can create. The hobbyist creations we saw at Pi Wars recently were wheeled or tracked, disconnected from any wires, and capable of being driven with a gamepad. We’re not going to be too strict about what constitutes a robot; we’d like to leave that up to you.
The chassis
The chassis is at the heart of the robot and its size and shape dictates everything else that you add to it. Choose carefully between big or small, strong or light, heavy or nimble. If you are going autonomous then you will need a base with plenty of space for expansion. At Pi Wars we saw RC cars repurposed with brand new electronics; this can be a cheap way
of getting a great finish. High-end kits such as the Dagu Wild Thumper are made from steel and have enough power to pull a chair across a room. DIY platforms are the most flexible and rewarding, though; we saw laser-cut wood, acrylic and even 3D-printed parts from the Hitchin Hackspace team.
PARTS LIST
We sourced most of these components from Pimoroni for around £35. We assume you already have your Raspberry Pi Zero, microSD card and a few AA batteries in the back of a drawer.

• £15.00 Pololu Zumo chassis
• £10.00 Explorer pHAT
• £10.00 2x Metal-gear motors
• £2.00 Battery UBEC (eBay)

The motor controller
The type of motor and its peak current draw will dictate your choice of motor controller. For small motors, a cheap chip like the L298N (£2) may well suit your needs. PiBorg produces a much more efficient board which can drive large currents, but the cost will increase up to around £30. Motor controllers are driven with a PWM (Pulse Width Modulation) signal generated by the Pi or an add-on board. It's a good idea to buy at least one or two spare motors for each project – and to be kind to them because they can break when misused. The Zumo chassis uses micro metal-geared motors from Pimoroni.

Power source
It is best to run your motors and control circuits from two separate power sources. This helps isolate your Pi, motor controllers and sensors from noise and potential surges. For the experienced user, LiPo batteries are recommended, but we must warn you that all safety notices must be read and followed thoroughly before charging or handling them. Safer alternatives exist in USB power banks, AA batteries and NiCd 9V batteries.

Get your motor running
Brushless motors for RC cars typically run at high speeds and can make your robot great at travelling in a straight line, but hard to control over obstacle courses and for autonomous navigation. Brushed motors can be driven with a motor driver, but you must refer to the stall current of the motor when choosing the board.
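Wiring the Zumo's motors to the Explorer pHAT is covered next issue, but Pimoroni's explorerhat Python library already gives a feel for how they will eventually be driven. Treat the following as a speculative sketch rather than final project code: it assumes the two gear motors end up on the board's two motor channels.

import time
import explorerhat

# forwards() and backwards() take a speed from 0 to 100 (per cent duty cycle)
explorerhat.motor.one.forwards(75)
explorerhat.motor.two.forwards(75)
time.sleep(2)

# Spin on the spot by running the two tracks in opposite directions
explorerhat.motor.one.forwards(50)
explorerhat.motor.two.backwards(50)
time.sleep(1)

explorerhat.motor.stop()   # stop both channels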
Understanding gear ratios
Gears are expressed as the ratio of input speed to output speed. For instance, 35:1 means the motor shaft (driver) turns 35 times as fast as the driven shaft (where the wheel is mounted). The higher the number, the slower the driven shaft will turn, but with a higher torque (twisting force). With the Dagu Wild Thumper 4WD robot (available at robosavvy.com) a ratio of 34:1 gives a top speed of 4.5mph with a stall torque of 5kg.cm; in contrast, a ratio of 75:1 has a slower top speed of 2mph but has a much higher torque at 11kg.cm. For more detailed info on gears, check out this great slide deck from Bowles Physics: http://bowlesphysics.com/images/Robotics_-_Gears_and_Gear_Ratios.pdf
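To put those figures into perspective, here is a quick back-of-the-envelope calculation. The motor speed and wheel size below are our own assumptions (roughly 11,000rpm unloaded and 120mm wheels), chosen purely to illustrate the maths; with them, the two Wild Thumper ratios come out close to the quoted top speeds.

import math

MOTOR_RPM = 11000.0          # assumed unloaded motor speed
WHEEL_DIAMETER_M = 0.120     # assumed wheel diameter in metres
circumference = math.pi * WHEEL_DIAMETER_M   # metres travelled per wheel turn

for ratio in (34.0, 75.0):
    wheel_rpm = MOTOR_RPM / ratio
    mph = wheel_rpm * circumference * 60 / 1609.34
    print('{:.0f}:1 gearbox -> {:.0f} wheel rpm, roughly {:.1f} mph'.format(
        ratio, wheel_rpm, mph))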
Controller/input options
Most robots will need manual input through a joystick or gamepad. We’ve had good success with genuine Wiimotes and PS3 controllers which use Bluetooth. The Xbox 360 controller also works well but adds a large dongle to the Pi and draws more current than a Bluetooth adapter. Separate your code into different classes so that you can swap controllers in the future.
LED outputs
When starting and stopping your robot, or just turning the motor speed up or down, it's important to have visual feedback. The simplest way to achieve this is through LEDs plugged directly into the GPIO headers of your Pi with an appropriately-rated resistor. For our first robot, a flashing sequence was performed when the Bluetooth service had started. We added separate LEDs for when the movement speed was turned up and down. If you are creative, or just have a lot of space on the robot's top level, you can attach an LCD display at the same time as everything else for communicating data from the sensors.
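GPIO Zero makes this kind of status feedback a few lines of Python. The sketch below is our own example rather than code from the robot itself; the pin number and flash patterns are arbitrary, so match them to wherever you wire your LED and resistor.

from gpiozero import LED
from time import sleep

status_led = LED(17)   # example pin; use whichever GPIO your LED is wired to

def startup_flash():
    # Quick triple flash to show a service (for example Bluetooth) has started
    for _ in range(3):
        status_led.on()
        sleep(0.15)
        status_led.off()
        sleep(0.15)

def speed_changed():
    # Single longer blink when the movement speed is turned up or down
    status_led.blink(on_time=0.5, off_time=0.5, n=1, background=False)

startup_flash()
speed_changed()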
Starting out with sensors
There are many types of sensors available and each has its own protocol or wiring scheme. We would suggest keeping to well-known sensors that are easy to connect. The Parallax ultra-sonic Ping sensor gives the distance to the nearest object in centimetres, helping you avoid crashing into walls. A pair of infrared line sensors can be used to follow a dark line – you could use black insulation tape on the kitchen floor.
Cameras, GPS, compasses and gyros
A camera can be added to the Raspberry Pi Zero v1.3 for around £4 for the cable and £10-22 for a camera. This means that as you are navigating around the kitchen or the garden you can take pictures or stream live video back to your control station (laptop). For something more advanced, a Raspberry Pi robot can be made to follow a series of GPS waypoints with a USB or serial GPS module and a digital compass. You can even build two-wheeled balancing robots by using a tiny gyroscope board such as the 9-DOF from Adafruit.
Autonomous control
There are two main types of autonomous control programs for a robot: open and closed loop. In an open loop, a robot will power its motors at a constant speed regardless of the environment, meaning a small turn on carpet will translate to a 360-degree manoeuvre on tile flooring. Generally, an open loop is easier to start with because it has absolutely no feedback involved. More advanced robots need to use feedback from sensors and inputs to adjust their speed or movements. An example is a robot on the flat versus one going uphill; a closed loop involves a wheel encoder so that instead of travelling at 50 per cent voltage we can task the robot with moving at a minimum of 100rpm.
NEXT Join us next issue where we will guide you as we construct a robot using the Zumo chassis. We will put the tracks and wheels onto the chassis and solder wires to the motors and battery contacts. In parts three and four we will explore manual control of the robot and autonomous navigation through sensing the real world.
GROUP TEST
VPN (Virtual Private Networks) Virtual Private Networks are great for keeping your anonymity online, but which one should Linux users be downloading to their desktop?
PrivateTunnel
While it isn't a big name compared to some of the programs covered in this group test, PrivateTunnel boasts a VPN service based around security. Keeping your anonymity is one thing, but PrivateTunnel looks to equip its users with an array of added security options. www.privatetunnel.com/home

IPVanish
IPVanish looks to turn the traditional VPN formula on its head, with a combination of zero trace logs and IP masking proving to be valuable assets. On paper, IPVanish certainly looks like a winner, but can it compete with the others in this test? www.ipvanish.com

TunnelBear
Simplicity is key when it comes to TunnelBear's VPN service and you'll be hard pushed to find a more user-friendly VPN on the market that helps you anonymously browse the internet. That said, its simplicity does mean many of its rivals outshine it in terms of features. www.tunnelbear.com

CyberGhost
With over 8 million downloads, CyberGhost has crafted quite a reputation. It's another VPN service that prides itself on providing a safe browsing experience for its users to enjoy, but includes a ridiculous amount of options for users to try out and use on their connection. http://www.cyberghostvpn.com/
PrivateTunnel
Fast, secure and uncomplicated – is PrivateTunnel the complete package?

■ Users are able to keep track of their remaining data allowance once connected and purchase more

Setting up a VPN
PrivateTunnel can boast one of the best user interfaces around, with all options easily accessible in one place. Extra options can be toggled on, but are easy enough to bypass, if necessary. For an instant VPN on basic settings, it takes just a click of your mouse to get connected, but we'd recommend taking your time to familiarise yourself with what's available here.

Connection options
It's particularly handy that PrivateTunnel will automatically look to connect you to the fastest server possible, and for the most part, its default choices work just fine. Advanced users do have the choice to manually select a server, but while the list is vast, it can take a little while to refresh. Either way, connecting to a VPN is quick and requires little manual input.

Browsing speeds
We tried an array of servers through PrivateTunnel's service and found the best were achieved through its default offerings. Low latency throughout was a particular highlight, and not something that many of its competitors can boast. It's also one of the better services when it comes to dealing with both video and Flash-based content.

Security features
There's everything from a built-in ad blocker to the option to prevent online tracking. While we recommend having all of its security features enabled, there's scope to completely customise how these features work with each other. It's the sort of control you wouldn't expect from a VPN service, and PrivateTunnel can certainly claim to stand out.

Overall
We're finding it hard to find fault in PrivateTunnel. When it comes to providing an all-in-one VPN experience, there's very few services that can match it. Despite a few glitches, this is a fantastic VPN. 9

IPVanish
A relatively simple solution for complete internet privacy in seconds

■ Servers are constantly changing through IPVanish, so expect updates to be applied regularly

Setting up a VPN
This is another VPN service that prides itself on its ease of use, but it's a little simplistic. Server details don't carry the same level of information as we wanted to see, especially when compared to the others in the test. But if you're new to using a VPN, IPVanish offers a credible help section to get you up to speed.

Connection options
Server choices are fairly large, but as we alluded to earlier, elements of key information are missing in the listings. That aside, getting connected can be as easy, or as complicated, as you see fit, with a number of customisable connection settings on hand if you know your way around a VPN service. Just make sure you read up on them first.

Browsing speeds
Perhaps IPVanish's strongest asset is the browsing speeds available through its servers. It does a great job at bypassing speed-throttling set by your ISP, and even masks your usage so they can't trace it. Some of the best servers are heavily populated, so it's worth exploring everything that's available to find optimal speeds for your desktop.

Security features
IPVanish's built-in 'encryption tunnel' works like a charm when it comes to using public Wi-Fi, helping mask mobile banking and other personal data you may have entered. It doesn't, however, go to the same lengths of alerting you to potentially harmful sites, a common feature in others in this group.

Overall
While most of IPVanish's core features work well, it's let down by a few small issues. A lack of server details is a hefty omission, but decent connection options and browsing speeds make it worth considering. 7
TunnelBear
CyberGhost
The browser-friendly VPN service offers more than meets the eye
Make yourself invisible online with the power of CyberGhost
n CyberGhost includes an array of simulated countries, which include both free and premium servers for you to connect to
n Using the toggle option provided, users can connect to a default VPN in a matter of seconds
Setting up a VPN
It’s as simple as turning on a switch, no really. TunnelBear’s simplistic UI includes little more than a switch, which contains an automatic VPN connection using default settings. Additional customisations to improve your VPN are possible, but are minimal when compared to both CyberGhost and PrivateTunnel.
Connection options
Setting up a VPN
Opening up CyberGhost for the first time shows all of its core features in one place, which can be a little daunting. It does, however, showcase everything you could possibly need for setting up your VPN, with each tool and option having a handy explanation to explain what it does. Of course, users can connect to a VPN with default settings in a few seconds.
Connection options
Choices are relatively small in this department and the server list is too barebones for our liking. Finding a workable server is entirely possible, but there’s little you can do if a server goes down. We like TunnelBear’s lightweight approach, but there’s a lot missing throughout and advanced users won’t find everything they want here.
Each part of the VPN process can be controlled from top to bottom, allowing complete control of how you want your VPN to be set up. Some of the nittygritty elements are a little bit superfluous, but will undoubtedly be of use to more advanced users, and it’s the level of choice that some will be expecting. Choice of servers is decent, but lacking behind PrivateTunnel.
Browsing speeds
Browsing speeds
If you manage to find a server that works well for you, you’ll be in for a treat. TunnelBear’s servers are up there with the best in this group in terms of keeping optimal speeds, and are especially useful when it comes to streaming online video. Similarly to IPVanish, it does a great job at bypassing any sort of throttling from your ISP.
Security features
We found speeds to be a bit hit and miss, with some noticeable latency at times, despite there being a great number of servers in our vicinity. However, when we found the ideal server, we can’t fault the browsing experience we had, we only wish it was a little easier to find a good server. With plenty of servers available, it’s worth experimenting to see which works out best for you.
Security features
All the core security features you’d expect from a VPN service are included here and we couldn’t really find any fault with any of them. Its AES-256 encryption will keep you safe online and as well as blocking outside tracking, TunnelBear don’t keep logs on your network activity either. We only wish we could have more control over securing the VPN.
Detail, detail, detail. Similarly to most other areas of CyberGhost, users have plenty of options at hand to make sure their connection remains as secure and stable as possible. Beginners will find the ad-blocking feature a particular benefit to have built-in, but the range of choices for advanced users is particularly pleasing.
Overall
Overall
Impressive security features and browsing speeds make TunnelBear one of the best for those wanting something simple, but it’s too basic for those who really like to take control of their VPN.
7
There are a couple of small omissions that need to be looked at, but for sheer depth of choice, very few VPN services can match CyberGhost’s feature set.
8
Review
Virtual Private Networks
In brief: compare and contrast our verdicts
PrivateTunnel
Setting up a VPN
Plenty of options are readily on hand, but a default VPN is one click away
Connection options
PrivateTunnel will automatically search out the fastest servers, which is a big help
Browsing speeds
Low latency was a particular highlight across all the servers that we tested
Security features
There’s enough scope here to provide you with everything you need to stay safe online
Overall
An easy-to-use interface plus great connectivity and security features
IPVanish
8
A well-designed UI can’t mask the lack of choice when it comes to customisation
9
Connections can be tailored to some degree, but not enough for advanced users
9
No issues here, with the servers we tried all working at optimal speeds
8
IP masking is a highlight and there’s a good range of other features on-board
9
Speedy and offers good security options, but not customisable enough for us
TunnelBear
6
For a default VPN, all it takes is a flick of a switch. Perfect for beginner users.
7
Very little of your connection can be customised, which isn’t great for users
9
Speeds are generally good, but expect some congestion on the sought-after servers
8
Everything included works well but, again, there are only just enough useful elements here
7
Good for VPN newcomers, but server congestion can be an issue
CyberGhost
8
Every part of your VPN can be altered from top to bottom, depending on what you need
9
5
Lots to take advantage of, but we did find one or two features to be pointless additions
8
7
Arguably CyberGhost’s weakest area is its hit-and-miss servers, which can drop on occasion
6
7
For advanced users, there’s an abundance of options here to keep you secure
9
7
A plethora of customisation options, but let down by server performance
8
AND THE WINNER IS… PrivateTunnel
When it comes to keeping your anonymity online, users have a lot of VPN services to choose from. Both PrivateTunnel and CyberGhost proved to be the best of the group we tested, but it’s the former that we’ve chosen as our winner here. At its core, PrivateTunnel offers one of the best VPN-based browsing experiences out there, while still packing in an array of options for advanced users to get the maximum use from it. And that’s not forgetting those who may want to use a VPN for the first time, as PrivateTunnel still makes it relatively simple to get a secure VPN running in a matter of seconds. While it shines in its browsing experience, we also take our hats off to its suite of security options. Of course, it provides complete anonymity, but it also uses an advanced encryption process to make sure anything and everything you do over public Wi-Fi cannot be traced. You’ll take advantage of this feature more times than you may think. Obviously such a superb feature set isn’t free, but PrivateTunnel offers a fair pricing package, depending on your usage. Packages are sold on a per-gigabyte basis, with a 50GB allowance costing just shy of £10/$12. But if you don’t want to dive straight into a paid subscription, take advantage of the free trial offered on the PrivateTunnel website. We would argue that advanced users should certainly check out CyberGhost, due to the sheer scale of options it offers, but as a package, PrivateTunnel has to be the winner here. It doesn’t over-complicate things and manages to turn what’s traditionally a tricky subject into something a bit more manageable. Oliver Hill
Switch security protocols to alter how the VPN ultimately keeps you safe online
Classified Advertising 01202 586442
HAPPY BIRTHDAY
Celebrating twenty years of hosting. Come celebrate with us and scan the QR code to grab your birthday treat!
0800 808 5450
Domains : Hosting - Cloud - Servers
Made in the UK
Pi-DAC+
IQaudIO Audiophile accessories for the Raspberry Pi
• Raspberry Pi HAT, no soldering required • Full-HD Audio (up to 24bit/192kHz) • Texas Instruments PCM5122 • Variable output to 2.1v RMS • Headphone Amplifier / 3.5mm socket • Out-of-the-box Raspbian support • Integrated hardware volume control • Access to Raspberry Pi GPIO • Connect to your own Hi-Fi's line-in/aux • Industry standard Phono (RCA) sockets • Supports the Pi-AMP+
Pi-AMP+
• Pi-DAC+ accessory, no soldering required • Full-HD Audio (up to 24bit/192kHz) • Texas Instruments TPA3118 • Up to 2x35w of stereo amplification • Provides power to the Raspberry Pi • Software mute on GPIO22 • Auto-Mute when using Pi-DAC+ headphones • Input voltage 12-19v • Supports speakers from 4-8ohm
Pi-DigiAMP+
• Raspberry Pi HAT, no soldering required • Full-HD Audio (up to 24bit/192kHz) • Texas Instruments TAS5756M • Up to 2x35w of stereo amplification • Out-of-the-box Raspbian support • Integrated hardware volume control • Provides power to the Raspberry Pi • Software mute on GPIO22 • I/O (i2c, 3v, 5v, 0v, GPIO22/23/24/25) • Just add speakers for a complete Hi-Fi • Input voltage 12-19v • Supports speakers from 4-8ohm
PiMusicBox
Twitter: @IQ_audio Email: info@iqaudio.com
WWW.IQAUDIO.COM
IQaudio Limited, Swindon, Wiltshire. Company No.: 9461908
Review
Black HAT Hack3r
MINI PC
Pimoroni Black HAT Hack3r and Mini Black HAT Hack3r Size (Full)
65mm x 105mm x 11.2mm
Weight (Full) 31g (excluding cable and mounting hardware)
Size (Mini) 65mm x 59mm x 11.2mm
Weight (Mini) 20g (excluding cable and mounting hardware)
Price (Full) £10
Price (Mini) £4 PCB only, £8 kit, £10 fully-assembled
Available From pimoroni.com
If you need to diagnose problems with a Raspberry Pi HAT, you’ll find these boards invaluable
The ‘Hardware Attached on Top’ (HAT) standard for the Raspberry Pi made it easier than ever to make use of the board’s general-purpose input/output (GPIO) header. It has also, sadly, been the leading cause of pin wastage: HAT boards, by their very nature, take up the entire 40-pin header of the Pi to which they are attached, even if they need only one or two pins to function, and few offer breakout connectivity for the unused pins. That’s where the Black HAT Hack3r family from Sheffield-based Pimoroni comes in. Available in traditional full-size and new Mini variants, the Black HAT Hack3r boards are as simple as they come: there are no active electronics present at all. Instead, what you get is an easy way to mirror the GPIO header to two breakout pin sections – roughly equivalent to the port expanders of the 8-bit computer era, minus
the ability to switch between multiple inputs. Using either Black HAT Hack3r board is simple: connect the included 40-pin ribbon cable, the same type used to connect old-fashioned IDE hard drives to their controllers, to the Pi’s GPIO header, then to the first row of pins on the top of the Black HAT Hack3r. You can then attach your existing HAT device to one of the two mirrored headers – the lower header for the full-size model, while the Mini variant supports the Pi Zero-style mini-HATs on its middle header – and still have the full complement of 40 pins accessible for additional devices. The caveat, of course, is that with no active hardware on board not all the pins will be available. This is both a blessing and a curse: if you have two devices which need to use the same GPIO pins at the same time, the Black HAT Hack3r boards won’t help in the least; having the pins mirrored, though, makes it possible to listen in on what a given board is doing by connecting jumper wires to the mirrored pins – a valuable tool for diagnosing problems and even reverse-engineering HAT designs. The boards have greater purpose than simply allowing for advanced debugging, though. When building a more complex project, a Black HAT Hack3r board allows the user to make use of pins unused by a HAT without having to modify the HAT board directly. At its simplest, the boards can be used to simply detach a HAT from the Raspberry Pi itself and route it elsewhere in an enclosure using the bundled cable or a longer equivalent. The Mini Black HAT Hack3r is the most beginner-friendly, as it’s available in bare board, solder-it-yourself kit, and fully assembled formats; the full-size variant, meanwhile, is exclusively available as a solder-it-yourself kit.
Pros Gain access to otherwise wasted GPIO pins, move HAT hardware elsewhere in a project, debug communications
If you’re dealing with full-size HATs, the full-size board provides mounting hardware to ensure they don’t come loose; the Mini variant does the same for the smaller Pi Zero HAT devices known as pHATs. While it’s possible to use full-size HATs on the Mini and vice-versa, it’s not recommended: pHATs installed on the full-size Black HAT Hack3r wobble, while HATs on the Mini Black HAT Hack3r must be installed on the lower of the three GPIO headers and thus cover the otherwise useful pin-out reference labelling. There’s really little to complain about with the Black Hat Hack3r boards: the PCBs themselves are attractive, well-made, and clearly labelled, as with all Pimoroni products, while the bundles – bar the PCB-only Mini variant – even include adhesive feet. Perhaps the only issue we encountered was with the push-pin mounting hardware, which proved tricky to remove once installed in a HAT. Gareth Halfacree
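If you want to try driving one of those spare pins yourself, the sysfs GPIO interface that ships with Raspbian is the quickest way to do it from a terminal. This is only an illustrative sketch: BCM pin 17 is an arbitrary choice, so swap in a pin your particular HAT genuinely leaves unused.
# Export an example pin (BCM 17 – pick one your HAT doesn't claim)
echo 17 | sudo tee /sys/class/gpio/export
# Make it an output and toggle it, e.g. to flash an LED wired to the breakout header
echo out | sudo tee /sys/class/gpio/gpio17/direction
echo 1 | sudo tee /sys/class/gpio/gpio17/value
echo 0 | sudo tee /sys/class/gpio/gpio17/value
# Unexport the pin when you've finished
echo 17 | sudo tee /sys/class/gpio/unexport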
Cons Push-pin mounting hardware difficult to remove; cable trails over the top of the Pi, blocking access to the CSI and DSI ports
Summary Not everyone with a Pi will benefit from a Black HAT Hack3r. If you use only one HAT board, or don’t use a HAT at all, it will serve little purpose. For anyone looking to expand the capabilities of a HAT, build their own, or debug problems with GPIO communications, though, the Black HAT Hack3r family is a must-have.
8
Review
Xubuntu 16.04
DISTRO
Xubuntu 16.04
Xubuntu has a great track record, but this latest update misses the mark
Specs
Based on Ubuntu
RAM 512MB
Storage 6GB
PAE-compatible processor
Very few distros can claim the longevity that Xubuntu has managed to achieve in its years of pleasing users all around the world. While it has never really been touted as one of the premier distros out there, it can certainly boast a fairly hefty user base. That said, the past few updates haven’t necessarily been kind to the distro, with buggy software, bare desktops and problematic installations being relatively common problems brought up in many forums.
The recent 16.04 looks to have addressed a lot of these problems on paper, but in use there’s still some work to be done. Installation is one of the highlights of Xubuntu in the latest update, with everything from Secure Boot to creating partitions not taking overly long. Installation as a whole took just under 20 minutes, which is relatively quick for distros nowadays. However, booting up Xubuntu 16.04 for the first time soon highlights some of the issues that we noticed in previous updates.
The desktop is very much on the bland side, and while it’s easy enough to navigate, we would’ve liked to see a bit of flair to help it stand out from the crowd. The same can also be said of Xubuntu’s choice of default applications, which, compared to the choice we had back in the 15.10 update, has been largely stripped back. You’ll find all the usual suspects: Firefox, LibreOffice and Parole, to name just a few. But for others, like VLC and GIMP, you’ll need to go ahead and install them yourself.
One of the more pleasing aspects of our time with Xubuntu 16.04 was its relatively low resource usage. Despite it not necessarily being marketed as a lightweight distribution, at our peak usage we found only 400MB of memory being used at once. There’s also minimal strain on the CPU, which is a nice touch. What this means for users is that Xubuntu is built for speed, and moving around the desktop is lightning quick. As long as you’ve the hardware to match, it also helps with media playback, and we had no trouble watching our favourite movies.
But with every positive we always seem to find another issue. In this instance it was the package management front-end. While many of us are accustomed to the Ubuntu Software Centre, Xubuntu has instead implemented GNOME Software. It’s surprisingly difficult to find half the packages you may actually need to install, and we found there was little
notice if anything required an update. The whole idea of ‘If it isn’t broken, don’t fix it’ certainly comes to mind here. Perhaps a lot of the issues aren’t critical, but they’re fairly common throughout Xubuntu 16.04. For one, we had repeated issues with our Wi-Fi network consistently being dropped, and in another instance we found our mouse pointer disappearing sporadically. But the most annoying of the bunch was its hit-and-miss support for hardware. Smartphones were easily detected, but we seemed to have problems when connecting speakers and headphones. It required a lot of disconnecting and connecting to get everything working exactly how it should be. It’s these small issues that really do count against this latest update, especially when there’s so much strong competition coming from both new and established distributions. Of course, bugs like this will be removed in later updates, but should they be there in the first place? It’s especially disappointing, considering that this is a public release. We can’t help but feel a little disappointed by what we experienced with Xubuntu 16.04. By no means is it a complete wreck, but there are plenty of small, annoying issues that need to be ironed out. It’s worth checking out if your hardware requires a low resource distribution, or if you want something you can install in minutes, but most should hold off for now. Oliver Hill
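If, like us, you miss VLC and GIMP from the default selection, they’re one command away in a terminal – handy if GNOME Software proves awkward. The package names below are the standard Ubuntu ones; this is a sketch rather than anything Xubuntu-specific.
# Refresh the package lists, then pull in the missing applications
sudo apt update
sudo apt install vlc gimp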
Pros An intuitive installation system is a highlight, but we especially love its minimal impact on our RAM and CPU
Cons Lots of niggling issues that hinder the user experience, and poor package management options
Summary We’ve no doubt that Xubuntu will be quick off the mark to eradicate many of the bugs that users are facing with this update. Regardless, we’re still questioning some of the choices that have been made here, and in its current state, you’re better off looking elsewhere.
5
Review
Free software
WEB SERVER
Hiawatha 10.2
A lightweight web server with security in mind
Since Apache stopped being the automatic first choice for web servers – it’s still the best in all sorts of situations, but leaner rivals can be a good choice when speed is needed – we’ve had to get used to Nginx and Lighttpd; but there’s another option. Hiawatha has been around for some years, and its combination of security, speed and simplicity has been winning us over. Hiawatha “can stop SQL injections, XSS and CSRF attacks and exploit attempts”, with features such as maximum CGI run time and client banning – features often bolted onto apps at considerable cost to system resource use. It is a good fit for Drupal, WordPress, and other PHP applications for which – whether through an extensive plug-in ecosystem or otherwise – vulnerabilities are not unknown. When Hugo Leisink started the project nearly 15 years ago, it needed to run on old hardware, so it zips along quite happily today on the Raspberry Pi. Installation is trivial, and configuration straightforward. The HOWTOs on the website walk you through a number of use cases, but for the simplest just reading through the config files and man pages will get you up and running.
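To give a flavour of that configuration, here’s a minimal virtual-host sketch of the sort you might drop into /etc/hiawatha/hiawatha.conf. The hostname and paths are our own examples, and the directive names are quoted from memory of the default configuration, so check man hiawatha for the exact options and values your version accepts.
# Listen on port 80
Binding {
    Port = 80
}
# A single website, with Hiawatha's request filters switched on
VirtualHost {
    Hostname = www.example.org
    WebsiteRoot = /var/www/example
    StartFile = index.html
    # Newer releases may expect values other than 'yes' here – see the man page
    PreventSQLi = yes
    PreventXSS = yes
    PreventCSRF = yes
}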
Above Easy to install, simple to configure, and secure in use – Hiawatha is a great little web server
Pros Secure, simple, and fast. Uses PolarSSL; new support for Let’s Encrypt; plus a security-audited codebase.
Cons If you’re already managing hundreds of Apache and Nginx boxes, another server to learn is bad news.
Great for…
Securely running PHP (and Python and Perl) Web apps hiawatha-webserver.org
DOWNTIME MONITOR
downtimed 1.0
How long was your server or VPS down? Ask downtimed If there’s one thing we love, it’s little utilities – especially command line ones – that provide missing pieces of functionality for your system, and do it well. downtimed keeps a record of all system downtime, providing all or the most recent to you with the commands downtimes and downtime – a counterpart to the uptime command. You’ve probably already got downtimed 0.6.x in your distro’s repository. This 1.0 release changes little, rather it is a reflection of the app’s stability – and the fact it won’t need much changing, save when systems change. downtimed runs on GNU/Linux and GNU/ Hurd, various BSDs, and most flavours of Solaris, as well as MacOS X. These flavours of Unix all have
slightly different ways of recording start-up (with downtimed recording its own daemon start time when it cannot understand the OS kernel’s method), and within GNU/Linux distributions start-up systems for daemons vary. Nevertheless, downtimed works across systems, and on your GNU/Linux distro will keep a good record of interruptions, recording whether the system was shut down cleanly or just died, and giving a close estimate of shutdown time in the latter case, from its constant taking of timestamps. It’s always been easy to see system uptime on *nix, but given the kind of hosting many of us use, looking into downtime is just as important. It’s simple, but that’s why we love it.
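Getting it running is as simple as the review suggests. The commands below are a sketch for a Debian-style system: the package and service names are our assumptions, so substitute your distro’s equivalents if they differ.
# Install the daemon from the repositories
sudo apt install downtimed
# Start it now and at every boot (assuming a systemd unit is shipped)
sudo systemctl enable --now downtimed
# Later: list every recorded downtime period, or just the most recent
downtimes
downtime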
Pros Simply does what it says; keeping a record of downtime periods to query from command line.
Cons Not as accurate on unplanned shutdowns; affected by changing system time during start-up.
Great for…
Analysis of why your VPS keeps going offline dist.epipe.com/downtimed
KEYBOARD SCREENCASTER
Screenkey 0.9
Better screencasts, through displaying your keys as they’re typed If you’ve ever tried a basic screencast of any command line tools, or of programming in your favourite editor, you may have gotten frustrated after enlarging the font for readability, then running out of space for your displayed code. Screenkey lets you keep your editor or terminal at a more natural size, and pops up a magnified bar displaying everything you type: this is great not just for screencasts, but in the class or lecture room, or for informal demonstrations at meetings and code-ups. Written in Python 2.7 (Python 3 support is on the way), and running with or without system installation, Screenkey’s dependencies are not too onerous. Most can be installed through the package manager –
including the Font Awesome fonts for display of multimedia key symbols – except slop (SeLect OPeration – used for interactive positioning), which may take you a couple of configure and make runs to figure out its dependencies in turn, but even this is soon sorted, and you’re ready to type ./screenkey and try it out. If you’ve a taskbar, screenkey will sit there waiting to be activated. It will work on tiling window managers, using command-line flags to alter the settings. The default font is fine for some uses, but you’ll want to change to a monospace font for coding. Other settings include placement, persistence (which defaults to 2.5 seconds after you stop typing), and the effect of backspace and modifier keys.
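Trying it really is that quick – run it straight from the unpacked source and lean on the built-in help for the exact flags, since these vary between releases.
# Launch without installing, from the directory you unpacked it into
./screenkey
# List the current release's options for font, position, timeout and so on
./screenkey --help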
Pros Flexible display of what you’re typing, or of what’s composed by your key combo.
Cons Even when you’ve sorted out which mode to use, not all shortcuts are translated.
Great for…
Screencasting or demonstrating anything keyboard-driven http://bit.ly/1tkydxB
TURN-BASED STRATEGY
Battle For Wesnoth 1.12.6
Still a jewel amongst free software games
Some useful improvements and bug fixes in an update to the stable branch remind us it’s been some time since we had a good go at the turn-based war game, Battle For Wesnoth. It’s been around for more than a decade, and has always been there as a standout example of the quality of some free and open source software, but many of us will not have played for a while, and will have missed the accumulation of incremental improvements to campaigns, artwork, music, etc. Installation of the latest stable or developmental releases is usually possible from binary packages, as they make their way through the repositories particularly quickly. We used the Debian Unstable package. If you’re new to Wesnoth, the tutorial will soon get you immersed in this world of fighting factions and balanced gameplay. From its filmic soundtrack, through good quality art, to well-realised gameplay and campaigns, Wesnoth remains an amazing piece of work from such a small team of developers (relative to the numbers that work on proprietary games), and deserves not to be taken for granted. Update to the latest version and have a play through, then maybe head to the website and see if you can help out anywhere.
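If you’d rather not build from source, the stable release is a single command away on a Debian-based system – the package name here is Debian’s, so adjust for your own distro.
# Install the game and its data from the repositories
sudo apt install wesnoth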
Above Work your way through the many campaigns, then you’ll still have unofficial add-ons to look forward to
Pros Possibly the best balance in gameplay between relatively simple and fully immersive; music that doesn’t jar.
Cons Not for the casual gamer dipping in for a spare five minutes, yet still a little too basic for hardcore gamers.
Great for…
Strategic gaming that won’t take over your life! wesnoth.org
OpenSource
Get your listing in our directory To advertise here, contact Luke
luke.biddiscombe@imagine-publishing.co.uk | +44 (0)1202586431
RECOMMENDED
Hosting listings Featured host: www.cyberhostpro.com 0845 527 9345
About us
Cyber Host Pro are committed to providing the best cloud server hosting in the UK; we are obsessed with automation and have been since our doors opened 15 years ago! We’ve grown year on year and love our solid, growing customer base, who trust us to keep their business’s cloud online!
What we offer
• Cloud VPS Servers – scalable cloud servers with optional Cpanel or Plesk control panel.
• Reseller Hosting – sell web and email hosting to your clients; both Windows and Linux hosting available.
• Dedicated Servers – having your own dedicated server will give you maximum performance; our UK servers typically include same-day activation.
• Website Hosting – all of our web hosting plans host on 2015/16 SSD Dell servers giving you the fastest hosting available!
If you’re looking for a hosting provider who will provide you with the quality you need to help your business grow then contact us to see how we can help you and your business! We’ve got a vast range of hosting solutions including reseller hosting and server products for all business sizes.
Testimonials
5 Tips from the pros
01
Optimise your website images When uploading your website to the internet, make sure all of your images are optimised for websites! Try using jpegmini.com software, or if using WordPress install the EWWW Image Optimizer plugin.
02
Host your website in the UK Make sure your website is hosted in the UK, not just for legal reasons! If your server is overseas you may be missing out on search engine rankings on google.co.uk – you can check where your site is on www.check-host.net.
03
Do you make regular backups? How would it affect your business if you lost your website today? It is important to always make your own backups; even if your host offers you a backup solution
it’s important to take responsibility for your own data.
04
Trying to rank on Google? Google made some changes in 2015. If you’re struggling to rank on Google, make sure that your website is mobile-responsive! Plus, Google now prefers secure (https) websites! Contact your host to set up and force https on your website.
05
Avoid cheap hosting We’re sure you’ve seen those TV adverts for domain and hosting for £1! Think about the logic... for £1, how many clients will be jam-packed onto that server? Surely they would use cheap £20 drives rather than £1k+ enterprise SSDs! Try to remember that you do get what you pay for!
Chris Michael “I’ve been using Cyber Host Pro to host various servers for the last 12 years. The customer support is excellent, they are very reliable and great value for money! I highly recommend them.” Glen Wheeler “I am a website developer, I signed up with Cyber Host Pro 12 years ago as a small reseller, 12 years later I have multiple dedicated and cloud servers with Cyber Host Pro, their technical support is excellent and I typically get 99.9-100% uptime each month” Paul Cunningham “Me and my business partner have previously had a reseller account with Cyber Host Pro for 5 years, we’ve now outgrown our reseller plan, Cyber Host Pro migrated us to our own cloud server without any downtime to our clients! The support provided to us is excellent, a typical ticket is replied to within 5-10 minutes! ”
Supreme hosting
SSD Web hosting
www.cwcs.co.uk 0800 1 777 000
www.bargainhost.co.uk 0843 289 2681
CWCS Managed Hosting is the UK’s leading hosting specialist. They offer a fully comprehensive range of hosting products, services and support. Their highly trained staff are not only hosting experts, they’re also committed to delivering a great customer experience and passionate about what they do.
Since 2001 Bargain Host have campaigned to offer the lowest possible priced hosting in the UK. They have achieved this goal successfully and built up a large client database which includes many repeat customers. They have also won several awards for providing an outstanding hosting service.
• Colocation hosting • VPS • 100% Network uptime
Value hosting elastichosts.co.uk 02071 838250 ElasticHosts offers simple, flexible and cost-effective cloud services with high performance, availability and scalability for businesses worldwide. Their team of engineers provide excellent support around the clock over the phone, email and ticketing system.
Enterprise hosting: www.netcetera.co.uk | 0800 808 5450 Formed in 1996, Netcetera is one of Europe’s leading web hosting service providers, with customers in over 75 countries worldwide. As the premier provider of data centre colocation, cloud hosting, dedicated servers and managed web hosting services in the UK, Netcetera offers an array of
services to effectively manage IT infrastructures. A state-of-the-art data centre enables Netcetera to offer your business enterpriselevel solutions. • Managed and cloud hosting • Data centre colocation • Dedicated servers
• Cloud servers on any OS • Linux OS containers • World-class 24/7 support
Small business host www.hostpapa.co.uk 0800 051 7126 HostPapa is an award-winning web hosting service and a leader in green hosting. They offer one of the most fully featured hosting packages on the market, along with 24/7 customer support, learning resources, as well as outstanding reliability. • Website builder • Budget prices • Unlimited databases
• Shared hosting • Cloud servers • Domain names
Value Linux hosting patchman-hosting.co.uk 01642 424 237 Linux hosting is a great solution for home users, business users and web designers looking for cost-effective and powerful hosting. Whether you are building a single-page portfolio, or you are running a database-driven ecommerce website, there is a Linux hosting solution for you. • Student hosting deals • Site designer • Domain names
Quality VPS Hosting:
Fast, reliable hosting
www.BHost.net | sales@BHost.net BHost specialises in one thing and doing it right – Linux Virtual Private Servers (VPS). We don’t sell extras like domain names or SSL certificates, we are simply dedicated to providing you with the highest quality VPS service with exceptional uptime and competitive pricing. Our customer focus means our team always goes above and beyond to help.
Our platform successfully hosts a whole variety of services including: game servers, office file storage servers, email, and DNS servers plus many thousands of websites. • Linux VPS hosting • Unlimited Bandwidth • Run your favourite Linux distro • 100% satisfaction or a full refund!
www.bytemark.co.uk 01904 890 890 Founded in 2002, Bytemark are “the UK experts in cloud & dedicated hosting”. Their manifesto includes in-house expertise, transparent pricing, free software support, keeping promises made by support staff and top-quality hosting hardware at fair prices. • Managed hosting • UK cloud hosting • Linux hosting
OpenSource
Your source of Linux news & views
Contact us…
linuxuser@imagine-publishing.co.uk
COMMENT
Your letters
Questions and opinions about the mag, Linux, and open source
Slacking off
Dear LU&D, At work my team and I use Slack to co-ordinate and send each other information. I find it really useful when I need to contact colleagues who work in other parts of the company and it’s particularly good for sharing notes when we’re debugging. In my spare time I’ve started contributing to an open source project and, thinking of how useful me and my team had found it, I suggested to the other contributors that maybe we should use it – we’re spread out all over the world in different countries and timezones, and we currently use a mailing list and an IRC channel, so it’s easy for essential info to slip through the gaps. However, one of the other contributors pointed out that Slack is a closed-source tool, which goes against everything our open source project stands for, and refused outright to even consider it. Are they right, and is there any open-source alternative to Slack I could suggest instead? Sterling Maddison
The question about whether your fellow contributor is ‘right’ or not depends very much on your personal views about open source – some people feel more strongly about it than others and endeavour to ensure that all of the programs they use are completely open source; others are quite happy combining their love of Linux with using an iPhone. Whatever you decide on, though, you’ll be pleased to note that there is an open-source alternative to Slack. In fact there are a couple, but our favourite is Rocket.Chat (https://rocket.chat). With an API that will happily integrate with GitHub, drag-and-drop filesharing and video conferencing, it’s a great developer tool, and you can even use its helpdesk facility to connect with end-users of your open source project. The built-in link preview allows you to view content direct from a link, so if you need to unwind by spamming around a few cat gifs, it’s got you covered. Just maybe don’t tell that particular fellow contributor that it’s also available for Windows, Android, Macs and iDevices as well as Linux…
Windows woes
Dear team, A friend put Ubuntu on our family computer when my daughter started studying GCSE Electronics and her teacher introduced her to the Raspberry Pi. It’s set up so that when the computer turns on she can hit a button and choose Ubuntu rather than Windows. The other week, Windows decided to update itself to Windows 10. Now Ubuntu and all the work she’s done with her Raspberry Pi in it seems to have vanished. Gary Casey Don’t worry Gary – it hasn’t gone; Windows has just hidden the bootloader that allowed you to choose between the two operating systems. This is easy enough to fix using the GRUB bootloader (which is what Windows has hidden from you). The first thing you need to do is get your hands on a live-booting flash drive or disc of Ubuntu. Issue 165 of Linux User & Developer came with that very thing on
the cover, but you can also make your own by downloading the Ubuntu ISO file from the website (http://www.ubuntu.com/download/desktop) and following the instructions on how to make a live-booting disc or flash drive (http://www.ubuntu.com/download/desktop/try-ubuntu-before-you-install). Load the disc or flash drive up, reboot your computer and choose to try Ubuntu. Ubuntu has used GRUB 2 for years now, so the old grub shell commands you may see online (find /boot/grub/stage1, root, setup and so on) no longer apply. Instead, once the live session has booted, open a terminal and find the partition your installed Ubuntu lives on – the device names below are examples, so yours may differ:
sudo fdisk -l
With Windows on the first partition, Ubuntu is most likely on the second, so look for something like /dev/sda2 listed as a Linux partition. Mount it, then reinstall GRUB to the drive’s boot record, pointing it at the mounted system:
sudo mount /dev/sda2 /mnt
sudo grub-install --boot-directory=/mnt/boot /dev/sda
Reboot and the GRUB menu should be back. If Windows 10 doesn’t appear as an option, boot into Ubuntu and run sudo update-grub so it can re-detect it. And this should fix the problem.
Above Rocket.Chat is an open source alternative to Slack that allows teams to communicate effectively and has a host of built-in features
Twitter: @linuxusermag
Disc delight
Hi guys, Living in Canada makes finding current Linux materials rather difficult. Yes, we have bookstores like Chapters/Indigo who sell outdated tomes on Linux (they still have a Red Hat 9 book gathering dust on the shelf!). However, if it were not for the fact that Linux User & Developer is (sometimes) sold in my local Chapters for $25 per issue I’d be hopelessly out of the loop. For those of us who do not always have reliable access to the internet, your magazine discs are extremely helpful and the live distros you supply have saved a glitchy Windows computer on more than one occasion. As long as the computing world and operating systems continue to evolve, supplying a disc with the magazine is a definite godsend for
anyone wanting to tear themselves away from Microsoft, Google, and Apple. Shawn Peever Shawn, we’re so glad you like our disc, and you’ll be pleased to hear that we’ve now brought it back for good! While it’s easy to say that everyone can just pick up programs from the internet these days, you make the very good point that not everyone has the kind of reliable internet service and speeds that make it simple to download a distro ready to live boot. Patchy internet service can result in a corrupted download; burning that to disc and trying to live boot from it is the most common cause of a failed installation, so we’re happy to save you that. Plus, bringing back our disc gives us the chance to curate some of our favourite distros and FOSS into themed packages, like the Ultimate FOSS Toolkit in issue 166, which contained our pick of the most creative free and open source software. We don’t want to give away too many secrets about what we’ll be including in future issues, but we’re keen to find out what readers’ favourite distros and FOSS are, so we’d love everyone to follow your example and drop us a line at linuxuser@imagine-publishing.co.uk.
Above Live boot Ubuntu from our issue 165 coverdisc or make your own live boot disc or flash drive and you can recover your partition and its data easily
Facebook: Linux User & Developer
Above Our cover disc is back for good and packed with distros and FOSS for you to explore
FileSilo
YOUR FREE RESOURCES LOG IN TO WWW.FILESILO.CO.UK/LINUXUSER AND DOWNLOAD THE LATEST DISTROS AND FREE SOFTWARE TODAY
LATEST DISTROS
YOUR BONUS RESOURCES ON FILESILO
THIS ISSUE, FREE FOR LINUX USER & DEVELOPER READERS, YOU’LL FIND THESE GREAT RESOURCES…
» Ultimate security distro Kali Linux will help you hack-proof your PC
» FOSS packages, including packet sniffers, vulnerability scanners and intrusion detectors
» Watch The Linux Foundation, Red Hat and Raspberry Pi video guides, tutorials and webinars
» Code and assets for this issue’s tutorials, including scripts for visualising music in Minecraft with the Raspberry Pi
LENGTH OF VIDEO TUTORIALS: 20 HOURS
TOP LINUX FOSS
TUTORIAL CODE & ASSETS
www.filesilo.co.uk/linuxuser
FILESILO – THE HOME OF PRO RESOURCES
DISCOVER YOUR FREE ONLINE ASSETS
A rapidly growing library
Updated continually with cool resources
Lets you keep your downloads organised
Browse and access your content from anywhere
No more torn disc pages to ruin your magazines
No more broken discs
Print subscribers get all the content
Digital magazine owners get all the content too!
Each issue’s content is free with your magazine
Secure online access to your free resources
This is the FileSilo site that replaces your disc. You’ll find it by visiting the link on the following page. The first time you use FileSilo you’ll need to register. After that, you can use the email address and password you provided to log in.
The most popular downloads are shown in this carousel, so see what your fellow readers are enjoying!
If you’re looking for a particular type of content like distros or Python files, use these filters to refine your search.
Green open padlocks show the issues you have accessed. Red closed padlocks show the ones you need to buy or unlock. Top Downloads are listed here, so you can get an instant look at the most popular downloaded content. Check out the Highest Rated list to see the resources that other readers have voted for as the best!
Find out more about our online stores, and useful FAQs like our cookie and privacy policies and contact details.
Discover our amazing sister magazines and the wealth of content and information that they provide.
FileSilo
HOW TO USE
EVERYTHING YOU NEED TO KNOW ABOUT ACCESSING YOUR NEW DIGITAL REPOSITORY
To access FileSilo, please visit www.filesilo.co.uk/linuxuser
01
Follow the instructions on-screen to create an account with our secure FileSilo system, then log in and unlock the issue by answering a simple question about the magazine. You can access the content for free with your issue.
02
If you’re a print subscriber, you can easily unlock all the content by entering your unique Web ID. Your Web ID is the eight-digit alphanumeric code printed above your address details on the mailing label of your subscription copies. It can also be found on your renewal letters.
03
You can access FileSilo on any desktop, tablet or smartphone device using any popular browser (such as Firefox, Chrome or Safari). However, we recommend that you use a desktop to download content, as you may not be able to download files to your phone or tablet.
04
If you have any problems with accessing content on FileSilo, or with the registration process, take a look at the FAQs online or email filesilohelp@imagine-publishing.co.uk
MORE TUTORIALS AND INSPIRATION
Finished reading this issue? There’s plenty more free and open source goodness waiting for you on the Linux User & Developer website. Features, tutorials, reviews, opinion pieces and the best open source news are uploaded on a daily basis, covering Linux kernel development, the hottest new distros and FOSS, Raspberry Pi projects and interviews, programming guides and more. Join our burgeoning community of Linux users and developers and discover new Linux tools today.
www.linuxuser.co.uk
Issue 168 of Linux User & Developer is on sale 28 July 2016 from GreatDigitalMags.com
Linux Server Hosting from UK Specialists
24/7 UK Support • ISO 27001 Certified • Free Migrations
Managed Hosting • Cloud Hosting • Dedicated Servers
Supreme Hosting. Supreme Support.
www.CWCS.co.uk