

REVIEWED: ZOTAC NEN STEAM MACHINE

62 pages of tutorials and features

Build an Ubuntu tablet
Stream it with a Pi Zero
Get into the Linux Terminal
Full Raspberry Pi 3 details with exclusive team interviews

Get into Linux today!

POWER UP! Embed Linux and make all your devices fly! Build streaming media sticks Get ultra-fast boot times Explore self-driving cars Control flying drones

Going Rusty

Their attitude is that it’s the programmer’s responsibility to avoid undefined behaviour. Jim Blandy on everything wrong with C++

Roundup

Octave 4.0

When the Tux hits the fan, reach for the best rescue tools

Dump costly Matlab for the open source alternative

Rescue distros

Big Data solved

BBC Micro:bit

Help your children get coding with the all-new BBC gadget that’s in schools now!



Welcome Get into Linux today!

What we do

We support the open source community by providing a resource of information, and a forum for debate. We help all readers get more from Linux with our tutorials section – we’ve something for everyone! We license all the source code we print in our tutorials section under the GNU GPL v3. We give you the most accurate, unbiased and up-to-date information on all things Linux.

Who we are

This issue we asked our experts: What’s the best use of embedded Linux you’ve come across?

Jonni Bidwell The US Navy’s new Zumwalt class destroyers feature 16 blade servers and will run a proprietary, Linux-based OS called LynxOS. One day they will control railguns and lasers. And in other reassuring news, the UK’s entire nuclear deterrent is still running on Windows XP.

Neil Bothwick Linux is everywhere now so it’s hard to pick a single use case. But the porting of Linux to the Cyberdyne range of CPUs and it being embedded in the T-800 has to have been/will be a significant step. Although attempts to enforce the GPL are awaiting the court’s decision on judgement day.

Nick Peers For me, it has to be Pi MusicBox. It’s so simple to set up, designed to run headless, and it made it incredibly easy for me to set up a Pi Zero as a cheap streaming music stick. It’s great value when you consider that something like a Sonos system would set you back hundreds of pounds.

Les Pounder I remember back in 2004 getting rather excited about the Sharp Zaurus range of devices. No Wi-Fi or Bluetooth and only 4GB of storage, but this device was revolutionary. By today’s standards it is a pocket calculator, but at the time it was a brilliant use of Linux.

Mayank Sharma That honour goes to the Motorola A780. Although it strangely had regular GNU utilities, like glibc and fileutils, this was the first time I could get a root shell on a phone. I still have fond memories of telnetting into the clamshell and following Harald Welte’s exploits with the phone.

Linux inside Quad-copters! Let’s distract management with the shiny thing, so us grown-ups can sit down and talk about embedded Linux. In this issue there’s not only an entire cover feature on the subject, but also a host of tutorials on running Linux in embedded systems for speed, fun and entertainment. From tiny Linux systems that can boot in under a second to Linux running on streamlined tablet hardware, Linux is a versatile beast of a penguin, and this opens up whole new areas for us to play with and have fun with.

Once again we’ve unleashed Jonni Bidwell, our tamed in-house hacker, to turn Linux to a host of interesting tasks: from creating optimised builds of Debian 8, so he can build custom routers, to piloting quadcopters from a Linux ground control. We also delve into how to get Ubuntu running smoothly on one of the more popular low-cost, Intel Atom-based x86 tablets that are becoming widely available. As well as this, we cover the long-running OpenELEC, the embedded version of Kodi, for solid entertainment streaming on almost any hardware you like. We also play with the tiny Pi Zero to make a streaming stick.

But it can’t be all fun. We’re still staying somewhat serious with an excellent look at the updated Octave 4.0 so you can chow down on some Big Data. We continue our look at why Apple loves open source Swift so much with a cool project to try, and finish our look at MongoDB by creating a website.

Finally, for the Pi lover – the computer is four years old! Using any excuse for birthday cake, we look back at how the Raspberry Pi sprang into life and managed to capture not just the UK’s but the entire world’s imagination. It’s become the biggest-selling UK computer of all time, with no sign of slowing and, of course, it’s an amazing ambassador for Linux too.

Neil Mohr Editor neil.mohr@futurenet.com

Subscribe & save!

On digital and print, see p30 www.techradar.com/pro

April 2016 LXF209 3


Contents

Full details Raspberry Pi 3 on p6

“If Microsoft ever does applications for Linux it means I’ve won.” – Linus Torvalds

Reviews Toshiba Chromebook 2 ... 17 Toshiba was one of the first big names to ship a quality Chromebook and now it’s taking that winning formula and upping the hardware specs for 2016.

Want an Intel Core i3 processor and a 1080p screen? You got it.

Zotac NEN Steam PC .......18

POWER UP! It’s time to embed Linux in everything from quadcopters to cars, discover how on page 32.

Roundup: Rescue distros p24

We were promised Steam Machines and finally they are arriving. We take a cool box from Zotac for a gaming spin and see what Steam OS can do for you.

Rosa KDE R7......................19 A distro aimed at enthusiasts and experienced users. It’s perfect for those who are fascinated with KDE’s tweakability and are sufficiently proficient with Linux.

exGENT ............................. 20

Shashank Sharma tries out another Gentoo-based distribution, determined not to let his love for its ancestry cloud his judgement, but he readies the big ten score card…

Designed for newbies wanting to learn Gentoo – but does it succeed?

Nelum OS...........................21 What do you get when an Ubuntu derivative, a minimalist distro and a Debian-based rolling distro walk into a bar? This!

XCOM 2 ............................. 22 We told you the alien menace was real. Run for the hills! Again!

Interview It’s a sword dance on an ice skating rink. Rust exists to bridge that chasm. On the instability of traditional system languages p40


www.linuxformat.com


On your FREE DVD OpenELEC for PC & Pi, Debian 8.3 LXF 64-bit, Rescatux 32- & 64-bit Only the best distros every month PLUS: Ubuntu for tablets & more!

p96

Raspberry Pi User

Subscribe & save! p30

In-depth...

The BBC Micro:bit ............... 44

Raspberry Pi is 4!................. 58

The next big thing in schools will help transform your children’s coding skills. It’s from the BBC, it works with the Pi and it’s free*! (*for schools :o)

We look back over four years of the Raspberry Pi to see how it became everyone’s best-selling micro-PC board and ruled the world!

Kano screen kit .................... 63 In his quest to create the ultimate Raspberry Pi portable hack station, Les Pounder finds a portable screen that will fit in his rucksack.

Lists in Scratch..................... 64 Les Pounder shows you how programming concepts can be taught using Scratch and then applied in Python – this time for lists.

It’s time to do more with your Micro:bit.

Coding Academy

Swift projects ....................... 84
Paul Hudson is back and he wants to teach you everything he has learnt while he’s been away, and that’s Swift and lots of it!

MongoDB website................ 88
Mihalis Tsoukalos finishes his time with MongoDB by implementing a website using this excellent NoSQL database.

Tutorials

Terminal basics
Hello terminal ................... 68
Nick Peers kicks off a new series on how you can use the terminal to do amazing things and save time too.

Embedded hardware
Ubuntu tablets .................. 70
Nick Peers has a low-cost x86 tablet and wonders if Ubuntu will run on it. Of course it will, and he’s here to show you how.

Regulars at a glance

News............................. 6
We try out the amazing new Raspberry Pi 3, Vulkan promises faster Linux 3D, Kodi goes ape and the Linux Foundation gets flak for doing its job.

Mailserver................... 11
No mention of 32-bit distros this month, we’re sad, but plenty on the Linux desktop and GPL software.

User groups................15
Les Pounder decides to read the dictionary, beginning at ‘c’.

Roundup ....................24
Mayank Sharma is your hero with a sack full of rescue distros.

Subscriptions ...........30
Subscribe and save. Let the postie deliver it straight to your door and never set foot in a newsagent again. Our subscription team is waiting for your call.

Sysadmin...................48
Mr. Brown hasn’t been replaced by AI quite yet and gets on with herding more Rancher containers while celebrating AMD’s greater openness.

HotPicks .................... 52
Alexander Tolstoy isn’t flying bombing runs over Syria, he’s far too busy buzzing over FLOSS silos looking for gems like: MyPaint, Moksha, Knemo, WebcamStudio, KeePassX, FET, Opus, Widelands, Duckhunt-JS, Retext, ABCDE.

Back issues ...............66
Don’t let the hackers win, brush up on their secrets in LXF208.

Next month ...............98
With KDE 5 Plasma desktop coming along nicely, we take a look at getting the best Linux desktop possible. From low-end to the high.

Gentoo Portage in-depth .............. 74
Neil Bothwick fends off accusations we only write Ubuntu tutorials with Gentoo.

Maths! Octave 4.0 ......................... 76
Afnan Rehman explains how the latest release of Octave really takes on Matlab.

OpenELEC Pi Zero streaming stick .. 80
Nick Peers has a dream to build the best media centre ever, has he succeeded?

Get Ubuntu on a Windows tablet.


THIS ISSUE: Raspberry Pi 3

Vulkan 1.0

Linux Foundation

Kodi pirates

PI NEWS

Raspberry Pi 3 arrives And it’s billed as Internet of Things ready.

Four years to the day since the original Pi was released, Raspberry Pi Trading, in conjunction with element14, has announced a new and exciting addition to the Pi family. The new model sports a powerful 64-bit ARM Cortex-A53 quad-core CPU clocked at 1.2GHz, making it the fastest Pi yet. But this isn’t the only new feature – the Pi 3 is billing itself as an IoT-ready device, sporting onboard 802.11b/g/n wireless and Bluetooth 4.1. IoT notwithstanding, potential users will be pleased that wireless connectivity no longer requires sacrificing a USB port.

It’s not all change though: the credit-card-sized form factor has been retained, and the board layout is largely unchanged from the Pi 2, save for the LEDs, which have been relocated south of the SD card to accommodate the antenna. The microSD slot no longer uses a spring-loaded retention mechanism, instead relying on friction, so that’s one less thing to break or knock out. The Pi 3 uses the same (now open) Broadcom VideoCore IV graphics as its predecessor, but this is now clocked at

400MHz, with the 3D core itself running at 300MHz, both up from 250MHz. There’s still 1GB of RAM onboard and, perhaps most importantly, the pricing remains unchanged at £30. A cheaper Model A Pi 3, lacking the Ethernet port, is expected later in the year, as is a new Compute Module.

Although the CPU is 64-bit, for the time being the Raspbian userland will remain 32-bit, maintaining compatibility across models. This will be re-evaluated in future: there will be applications which will benefit from AArch64, but for the most part the A53 is the spiritual successor to the A7, even if only in 32-bit mode. Besides the 33% increase in clock speed, improvements within the BCM2837 mean that some workloads will see further speed boosts. The Pi 2 was touted as being about six times faster than its predecessor (the B+) and this time around the Foundation tells us the new board is ten times faster than the B+, hence about 66% faster than the Pi 2. A cursory test using the Whetstone benchmark came close to this, showing a 65% speed boost over the 900MHz Pi 2.

The new CPU is capable of frequency scaling, so will downclock itself to 600MHz when the system is idle. At this speed, power consumption drops to a meagre 2.5W, which will be a boon to those using the Pi in power-constrained circumstances. Wireless and Bluetooth are provided through the BCM43438 combo-connectivity device, which supports both Bluetooth Classic and Bluetooth Low Energy modes. Drivers for these will have been added to Raspbian by the time you read this, so everything should work out of the box.

Since its launch, eight million Pis have been shipped, making it the UK’s best-selling computer. At this rate the number threatens the Commodore 64 – the highest-selling machine of all time – which sold upwards of 17 million (nobody really kept track) units.

The credit-card-sized form factor remains, but the Pi 3 now has 802.11b/g/n wireless and Bluetooth 4.1.
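The speed figures quoted in this story are mutually consistent, as a quick sanity check shows (the variable names below are ours, not the Foundation’s):

```python
# Figures as quoted: Pi 2 ~ 6x the original B+, Pi 3 ~ 10x the B+.
pi2_vs_bplus = 6.0
pi3_vs_bplus = 10.0

# Relative speed-up of the Pi 3 over the Pi 2 implied by those figures.
pi3_vs_pi2 = pi3_vs_bplus / pi2_vs_bplus - 1
print(f"Pi 3 vs Pi 2: {pi3_vs_pi2:.0%} faster")  # roughly two-thirds faster
```

That works out at about 67%, in line with both the “about 66%” claim and the 65% Whetstone result above.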


Newsdesk

GRAPHICS

Vulkan 1.0 paves way for next generation
Finally, a graphics API to get excited about.

It looks highly likely that Ubuntu 16.04 will come with support for the Vulkan graphics API, a low-overhead API that can allow even relatively underpowered devices to run impressive-looking games. Vulkan is also multiplatform, so unlike other APIs, such as DirectX or Metal (which are locked to Microsoft and Apple platforms respectively), or Mantle (which only works with AMD hardware), Vulkan can be used no matter what OS or hardware you have. But perhaps best of all, it’s royalty-free and open source. With the release of the Vulkan 1.0 specification by Khronos, the consortium behind the API, we’ve seen examples of what the technology can achieve – and the results are already impressive. Gabe Newell, co-founder and managing director of Valve, is already singing Vulkan’s praises: “We are extremely pleased at the industry’s rapid execution on the Vulkan API initiative. Due to Vulkan’s cross-platform availability, high performance and healthy open source ecosystem, we expect to see rapid uptake by software developers, far exceeding the adoption of similar APIs which are limited to specific operating systems”. Representatives from both AMD and Nvidia – two rivals that rarely agree on

anything – have also come out in support of Vulkan. If you want to find out more about Vulkan – as well as try out demos and samples of the API – head over to www.khronos.org/vulkan.

As the logo states, the industry has got behind the Vulkan API, which could mean very exciting things for the future of games.

COMMUNITY

The Linux Foundation drops community representation
And conspiracies abound…

The Linux Foundation (LF) does admirable work in promoting, protecting and standardising Linux and open source software, and until recently it was possible for individual members to elect two board members to represent the community. However, the LF by-laws have been changed so this no longer happens. The effect of this change has caused many people to worry that the community’s influence on the LF is waning. Some community figures have gone even further, such as Matthew Garrett, a security developer and member of the Free Software Foundation board of directors, who in a blog post (https://mjg59.dreamwidth.org/39546.html) links the timing of the by-law change to the announcement by Karen Sandler – the executive director of the Software Freedom Conservancy (SFC) – that she planned to stand for election to the Linux Foundation board. The SFC is involved in GPL enforcement, which has seen it funding a lawsuit against one of the Foundation’s members for GPL violation. However, other prominent community figures aren’t quite on board with the conspiracy. Greg Kroah-Hartman, a member of the Linux Foundation, took to Reddit (http://bit.ly/GregKHOnByLawChange) to defend the changes. He points out, among other things, that the change just brings the Foundation’s charter in line with other non-profit structures. We have a feeling this could rumble on for a while…

The makers of Kodi, the popular open-source home theatre software, have hit out at piracy box sellers who they say are deliberately modifying and selling versions of Kodi that are purposefully broken, allowing users to run dodgy add-ons from untrustworthy repos. While these add-ons allow Kodi users to access pirated material, they also frequently break Kodi, as well as open it up to serious security exploits. In a passionate blog post titled The Piracy Box Sellers and YouTube Promoters Are Killing Kodi (http://bit.ly/KodiPiracyBoxSellers), it’s revealed that the situation has become so bad that core Kodi developers have threatened to quit. So what are they doing to help prevent this? The Kodi team now owns the trademark, and is threatening legal action against people selling Kodi boxes made for piracy.

Kodi’s developers are not happy with pirates. Not happy at all.

Newsbytes

Wikipedia has received a $250,000 grant to develop a Knowledge Engine, a system for “discovering reliable and trustworthy public information on the Internet”, which to many people sounds a lot like a search engine. However, Wikipedia says that “we’re not building Google. We are improving the existing CirrusSearch infrastructure with better relevance, multi-language, multi-projects search and incorporating new data sources for our projects…” Focusing on just searching Wikipedia’s huge collection of articles, rather than the entire internet, might be a wise idea after Wikipedia’s last attempt at challenging Google, Wikia, shut down after a year.

While we’ve been hearing positive stories of governments and public services embracing free and open source software, the French Ministry of Education has taken a step backwards by signing a software licence agreement with Microsoft France. The terms of the agreement are causing controversy, with some free software advocacy groups stating that competitive bidding from other companies was ignored, and that Microsoft is willing to sell at a loss just to keep a monopolistic position. The Conseil National du Logiciel Libre is considering what steps it can take to challenge or appeal this decision.



Comment

This month was FOSDEM: the whirlwind conference that takes place in Brussels every year, packed with the right kind of people to get things done in free software. The LibreOffice team had a hack-fest beforehand to get into the mood, spent a little while re-learning the (obvious) necessity of profiling before optimising, spending a chunk of effort trying to save memory for our online version without winning much, before profiling and quickly halving our memory use with just a few lines. It was great too to announce another Collabora Online partnership with the popular Kolab groupware suite. That will bring rich online document editing into Kolab alongside its mail, calendar, address book, file storage and other pluggable richness. It was good to catch up with the team there.

There were some great innovations on show, from 3D printing to the latest in compiler algorithms. One hack I loved was using a modern fuzz tester which follows the code execution and builds new inputs apt to improve code coverage in a new way; this was hooked up to drive the LibreOffice user interface. The fuzzing heuristics bring even more benefit than a million monkeys typing randomly, rapidly finding several interesting corner-cases to fix. It was encouraging too to meet several members of the Eclipse team and swap stories of how it is managing to renovate and improve both its community and its code-base, using similar techniques and tactics to LibreOffice. It’s good to find kindred spirits working on another huge and complex code-base while trying to battle the mess that is the Linux desktop stack. After some days of late nights, there was the first board meeting of the new Document Foundation board, which got everyone aligned for the next two-year term. It’s exciting times to be involved with free software!
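Coverage-guided fuzz testing of this sort keeps any mutated input that exercises a new execution path, growing a corpus that probes ever more of the program. A toy sketch of the idea (every name here, including the stand-in target, is invented for illustration and is not any real fuzzer):

```python
import random

def coverage_guided_fuzz(program, seed, iterations=2000):
    """Keep any mutated input that exercises a new execution path."""
    corpus = [seed]
    seen_paths = {frozenset(program(seed))}
    for _ in range(iterations):
        parent = random.choice(corpus)
        child = mutate(parent)
        path = frozenset(program(child))
        if path not in seen_paths:   # new coverage -> keep this input
            seen_paths.add(path)
            corpus.append(child)
    return corpus

def mutate(data: bytes) -> bytes:
    """Flip one random byte of the input."""
    if not data:
        return bytes([random.randrange(256)])
    i = random.randrange(len(data))
    return data[:i] + bytes([random.randrange(256)]) + data[i + 1:]

def toy_program(data: bytes):
    """Stand-in target: reports which branches it took."""
    branches = ["start"]
    if len(data) > 4:
        branches.append("long")
    if data[:1] == b"K":
        branches.append("magic")
    return branches

corpus = coverage_guided_fuzz(toy_program, b"hi")
```

Real tools instrument the target’s compiled code to observe branches rather than having it report them, but the feedback loop is the same.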


TINY CORE LINUX
By the time you read this, Tiny Core Linux 7.0 will be ready to download. In this release the Linux kernel has been updated to 4.2.9, along with the latest stable patches and updates to glibc and gcc (to 5.2.0), Xorg-7.7 and more. The minimalist Linux distro, which runs entirely in memory and gives its users modular access to only run the tools and services they need, has been given a short but concise release note to match.

REACTOS
After nearly ten years since the release of 0.3.0, we’ve now got a new major update of the operating system that’s based on the design of Windows. Good things are said to come to those who wait, and 0.4.0 certainly brings pretty good things, with a number of significant improvements including new wireless networking support, along with support for VirtualBox and VirtualPC as well. For the comprehensive list of updates in this new version, head over to the ReactOS site.



THE PERFECT PACKAGE FOR ANY PI OUT NOW! WITH FREE DIGITAL EDITION

ORDER YOUR COPY TODAY! Order online at http://bit.ly/raspberrypihandbook Also available in all good newsagents


Write to us at Linux Format, Future Publishing, Quay House, The Ambury, Bath BA1 1UA or lxf.letters@futurenet.com.

A long time ago...
A long time ago, I bought the very first issue of Linux Format and after a lot of struggling and frustration managed to get the system to work, but it was so much trouble I finally gave up and went back to Windows. From time to time I have tried again with the same outcome, and today I bought LXF205 as the cover said that Ubuntu was easy to install… But not if you want to put it on an existing partition of your hard drive. I was able to identify the partition only by the amount of used and unused space. I pressed ‘Install’ and was told I needed a filesystem as the one I had chosen was not correct, so I picked another. I was then told there was still something else needed, then I was told I had no swap space but given no instructions on how to add this. So I will drop the magazine in the bin and go back to Windows as at least – and despite its many faults – it actually works. Perhaps one day Linux will evolve enough for ordinary people to understand it, but until then it will remain as I see it, as the domain of geeks. Peter Hollings, via email

Windows 10 is perfect, said no one ever.

Neil says: Linux isn’t Windows. It’s a common mistake, but for a level playing field have you tried installing Windows alongside another OS? Probably not. We doubt that you would have a smooth time with that and indeed you can have as many problems installing Windows as Linux, if that’s how you want to rate an OS. Enjoy Windows!

A spot of sysadmin

This is my first letter after reading Linux Format for about two years. I really like your articles, especially those related to server admin. I’d recommend having more articles related to making scalable services with Linux, such as web servers or databases using Apache, Nginx, MySQL or MongoDB, as well as performance tuning, DevOps and Continuous Integration, which are all handy for sysadmins. To me, scalable means that when one Linux server is full you can add more boxes and continue serving the requests to the service. I think it’s very interesting that we can build a scalable and high-availability service all from free software. I don’t know which Linux distribution you prefer to use, but I think the concepts are the same no matter what.

Letter of the month

Keep on singing

Some time ago, you published an article on how to make karaoke files that were CDG and OGG or MP3. I’ve been playing around with a Raspberry Pi and found that there were people out there explaining how to turn it into a streaming centre using SFTP. This got me thinking about how to mount the remote drive from the Raspberry Pi, which holds an extensive collection of karaoke files that I have made myself. Using just a Raspberry Pi with my drive set up as a NAS drive, with PyKaraoke and SSHFS installed on my client (Mint Debian, though this should work on any distro), I’ve created a solution that works, mounting the drive into my home folder using the following commands from the terminal in my home folder:

mkdir karaoke
sshfs pi@192.168.0.101:/ /home/john/karaoke

Then tell PyKaraoke to look in /home/john/karaoke/path/to/karaoke/folder for all the files. Click ‘Scan Now’ and go to the pub. When you come back, click ‘Save’. This is something that a karaoke bar could use when they have a karaoke jockey (meaning that the licence cost falls on the bar as they are storing the files; the files don’t get saved onto the jockey’s machine). This also has the benefit of being able to connect to my drive without having to use SFTP each time, and allows me to store my photos and other documents and save them to my remote drive really easily. John Dwyer, via email
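If you want the share to come back by itself after a reboot, the same sshfs mount can be declared in /etc/fstab rather than typed by hand. This is a sketch using the addresses and paths from the letter; the key location is an assumption, so adjust it to suit your setup:

```
# /etc/fstab entry (sketch): mount the Pi over SSH at boot
# assumes passwordless key auth for user 'pi'; the key path below is hypothetical
pi@192.168.0.101:/  /home/john/karaoke  fuse.sshfs  _netdev,x-systemd.automount,IdentityFile=/home/john/.ssh/id_rsa,allow_other  0  0
```

Note that the allow_other option only works if user_allow_other is enabled in /etc/fuse.conf.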


Who thought the Raspberry Pi could turn you into a rock star!

Neil says: What a brilliant idea and such a simple solution, really. This is why we love the amazing Raspberry Pi so much. This brilliant little computer enables Linux to be deployed as a solution for thousands of problems that otherwise would be impossible or far harder and far more expensive to solve.



Mailserver

Virtualised systems, such as Docker, make it easier to quickly scale internet applications.

Another area is best practice within those services, in a real world situation. I believe this will help most of your readers another step of the way along. Sorawong D, Thailand Neil says: We’re glad you’re enjoying the magazine. We certainly try and balance the tutorials between people trying to pick up Linux and those wanting to improve their sysadmin skills. Thankfully our in-house sysadmin expert Jolyon has been doing an expert job of covering Docker containers and ELK stacks and the like. We’ll pass on your suggestions!

Charting choices

G’Day, It feels sacrilegious to mention the name of a Microsoft product in a message to my favourite magazine, but my search for an alternative to Visio hasn’t been very successful so far. How about running a Roundup on Linux software packages that provide the same functionality as Microsoft’s Visio drawing software? I haven’t found something that compares. Dia Diagram Editor, yEd and Pencil are all alternatives. Please don’t suggest LibreDraw, I’m in need of something a little more robust. I want my team at LXF to do what they do best: comb the catacombs (OK, stick to the rectory so the software is still being supported) and provide some selections, but do some comparisons and provide me the insight I need so I can wean myself off Visio. I know it is a little late for Christmas, but can Santa’s Elves at LXF take on the task of rounding up some drawing programs and making a go at providing some alternatives? I can then add to my articles, Made in Linux. Sean D Conway, Canada

Hopefully LibreDraw will be updated enough to challenge Microsoft Visio for charting.

Neil says: You could always run Visio through Wine? Still a terrible thing to do. There’s nothing that really duplicates all the features of Visio, but it sounds like you just want the diagramming parts. Honestly, LibreDraw 5.1 isn’t that bad! There’s Dia Diagram Editor (http://dia-installer.de) and StarUML (http://staruml.io), but neither looks that great. You could try www.calligra.org/flow. Or else, if you’re happy to draw things from scratch, InkScape is excellent. We did run a vector drawing Roundup a while back [p26, LXF193], which ranked the best as Inkscape, Xara Xtreme, LibreOffice Draw, sK1 and Karbon.

ASCII wars

Many thanks for this beautiful magazine! A few hours ago I was playing around with some old ASCII tools, and searched the net for examples and more tools. My eyes came across a very interesting URL containing a telnet session to an ASCII version of Star Wars: Episode IV! Using your favourite terminal emulator try this server: telnet towel.blinkenlights.nl – and prepare to be amazed! Cristian Schilder, via email

ASCII Star Wars, the only reason the ASCII standard was invented.

Neil says: Thanks for the kind words, Effy’s a design miracle. We’re well aware of the Star Wars ASCII art, it’s something to behold, especially with Episode VII now out, it’s an easy way to catch up. Linux Format is helping again!

Windows again

I’ve just finished reading Kevin’s comments on Windows 10 and his loss of faith in Ubuntu. I’m a strong long-term user of Linux, passing through many forms over the years, and currently with Linux Mint 17. Its attraction mostly stems from its feel for user ease of use, security and free will, with all the programming abilities still available when needed. I maintain a copy of Windows for tax time, etc. Yes, I have written to the Oz ATO asking for Linux apps; they are moving to online apps now, however. I noted when I updated Windows 10 on my dual-boot laptop that you really need to clean up the corporate info grab, which is substantial and intrusive. Not to mention a hog on hardware and router speeds. So even with Windows 10’s improvements, I still feel more secure and more my own man on board the Linux cruise than strong-armed into the Windows corporate heist. It’s there when I have no other choice but to join the social queue. Steve O’Donnell, NSW Australia

Neil says: It turns out that our Escape Windows 10 issue [LXF207] was one of the most popular in recent times. There’s always a bit of guesswork with sales figures, but it truly could be that Microsoft has pushed things too far with its enforced updates and its ever-watching status over Windows 10 users. In my short experience of Windows 10, I found it to be a slick OS once installed, but there’s a long install process. Its diagnostic self-repair is next to useless and I found that the forced updates seem to break things on a regular basis.

Stream it

As a long-time reader (here in the USA), I found the writer’s statements in your main Stream It! feature [p32, LXF204] to be not as informative or as complete as they could be. The writing seems to be a targeted piece to deliver information that works with firms you work with, and these firms all seem to be EU-only rather than American or worldwide enterprises. I speak from being a system integrator for more than twenty (20) years, and I’ve been following Linux and media servers for almost 10 years. I find the writer’s statement: “At time of writing, TVheadend.org is the only major Linux server that’s supported, so follow the main text to install and configure it alongside Emby” to be somewhat inaccurate. In fact, prior to this article I hadn’t heard of this product/project, and wouldn’t have if it had not been for Linux Format. I know that there are other alternatives to Emby which are even older in their development than this project, for instance MythTV. You can find several YouTube videos and blog/article postings showing how to use MythTV’s server function along with Kodi, let alone the Linux TV tuners that were working with MythTV before Kodi existed. In fact you will also find this type of information on Kodi’s site, too. I would like to ask if this main article was approved knowing the slant it presents. If I can pick up on how this article seems limited in its view, and that view not a really worldly one, then I would anticipate that there are more readers like myself. Harvey Rothenberg, USA

Neil says: I’m not sure what you mean by advertisers, as we have very few of those and none of them are connected with streaming anything. That aside, the answer is yes, the article was deliberately aimed at just Emby. In past issues we’ve looked at Plex and MythTV, so we wanted to cover a new option. Frankly, MythTV has always been something of a trial to install, and in this age of on-demand TV services its usefulness is questionable. But I concede your point that when it works, MythTV works wonderfully with TV tuners. Perhaps we should revisit it.

Digital pricing

I have a lot of subscriptions to digital magazines through Zinio.

[We looked at MythTV back in LXF159 and it almost killed the reviews editor at the time – Ed]

I’ve wanted to take out a subscription to Linux Format magazine for a long time, but the price of the digital edition is higher than any other PC magazine I’ve seen. I’ve also waited a long time for a discount or sale, but I haven’t seen one yet. I really think you should reconsider your digital price, as your magazine is $59 while other PC magazines are always on sale at between $8 and $40. I am sure that price is pushing away many potential customers, and I thought you should know that. Dotan Porat, via email

Neil says: Thanks for your mail and your concerns. Let me try and explain the issues with pricing. Linux Format started life as a print-only magazine, and an expensive one too – it was £6.49 back in 2003 (and 13 years on we’re still £6.49). Inflation? Pah! For a print title you buy from a shop the high price seems reasonable, right? You’re paying for dead trees, oil-based ink,

transport [fuel for our squadron of Lancaster Bombers is expensive] and the nice person to stock and sell you the copy. All on top of paying for the team, keeping LXF Tower’s lights on and paying the local Sheriff-of-Bath’s taxes. So surely for digital we avoid the first half of all those costs, right? No: Zinio, Google, Apple et al all demand around a 30% cut of the sale price. We sell LXF online for £4.99; take off the digital distributor’s pound of flesh and we’re back where we started, in terms of cost. As for deals, the best offers are always going to be at www.myfavouritemagazines.co.uk as we run that subscription store and have more control over the pricing. We had stonking deals over Christmas 2015 and through the January 2016 sales. You also retain access to the free PDF archive that goes all the way back to LXF66, even if you take out the digital-only subscription. Keep an eye on the Facebook and Twitter feeds for offers. LXF
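A quick back-of-envelope check of the arithmetic in that reply (a sketch added here for illustration; the 30% figure is the approximate store cut quoted above):

```shell
# A ~30% distributor cut on a £4.99 digital sale leaves roughly £3.49 per copy,
# which is the reply's "back where we started" point.
awk 'BEGIN { printf "net per copy: £%.2f\n", 4.99 * (1 - 0.30) }'
```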

Write to us Do you have a burning Linux-related issue you want to discuss? Want to let us know about any more interesting ASCII art projects that you stumbled upon or just want to suggest future content? Write to us at Linux Format, Future Publishing, Quay House, The Ambury, Bath, BA1 1UA or lxf.letters@futurenet.com.

www.techradar.com/pro

April 2016 LXF209 13




Linux user groups

United Linux!

The intrepid Les Pounder brings you the latest community and LUG news.

Blackpool Makerspace Blackpool Makerspace, 64 Tyldesley Road, 10am every Saturday. https://blackpoolmakerspace.wordpress.com Bristol and Bath LUG On the fourth Saturday of each month at the Knights Templar at 12.30pm. www.bristol.lug.org.uk

Coding Evening Help teachers learn computing and have a beer. Locations across the UK. www.codingevening.org Egham Raspberry Jam Meet at Gartner UK HQ every quarter. http://winkleink.blogspot.co.uk

Lincoln LUG Meet on the third Wednesday of the month at 7.00pm, Lincoln Bowl, Washingborough Road, Lincoln, LN4 1EF. www.lincoln.lug.org.uk Liverpool LUG Meet on the first Wednesday of the month from 7pm onwards at DoES Liverpool, Gostins Building, Hanover Street, Liverpool. http://liv.lug.org.uk/wiki Manchester Hackspace Open night every Wednesday at 42 Edge St, in the Northern Quarter. http://hacman.org.uk Surrey & Hampshire Hackspace Meet weekly each Thursday from 6.30pm at Games Galaxy in Farnborough. www.sh-hackspace.org.uk Tyneside LUG From 12pm, first Saturday of the month at the Discovery Museum, Newcastle. www.tyneside.lug.org.uk

It’s good to share
Community – the best that FLOSS can get.

Look up ‘Community’ in the dictionary and you’ll find that it’s defined as “the condition of sharing or having certain attitudes and interests in common.” These words explain what community is, but rather coldly. Community is the greatest achievement in FLOSS, where communities across the globe work to further their particular interests.

Linux User Groups (LUGs) serve as great ambassadors for FLOSS. At a LUG you can learn new skills and share skills with others. Do you remember your first time at a LUG meeting? I bet that you were a little apprehensive. Be the smiling face that welcomes newcomers and listens to their problems. Yes, we know that you can probably fix that issue in a flash via the command line, but does the newcomer realise this and can you show them how to learn that skill?

Makerspaces are similar to LUGs but have a broader range of interests. The Blackpool Makerspace, for instance, incorporates its LUG activity into its schedule. Community plays a big part in makerspaces: artists, musicians, hackers and makers all rub shoulders to work on projects and share skills. Is there an artist in need of some tech? Or could you be the one to help them with a Pi or Arduino demo? Helping others in the community is a great way to make friends, share knowledge and keep your skills sharp. Helping a community member with a MySQL/PHP issue might just lead to a new job or provide insight into a problem that you may face one day. Pass on your skills to younger members and help them to shine in the new Computing curriculum. Above all, be the best community member that you can be and others will take notice of us all. LXF

Community is an important part of sharing the knowledge and ideals of open source software.

Community events news

LinuxCon Europe Berlin, Germany plays host to LinuxCon Europe between October 4-6 2016. If you’re a sysadmin, architect or a passionate Linux developer then this event will be right up your street. Over three days you’ll see panel discussions, workshops and deep dives (in-depth technical discussions) on all aspects of the Linux ecosystem. Ticket prices start at around $225 for students and reach the dizzy heights of $950, but this can easily serve as part of a yearly training budget for those in the industry. LinuxCon Europe is a great event to rub shoulders with like-minded people and learn new skills. http://bit.ly/LinuxConEurope

Liverpool Makefest 2016 Last year’s event had a diverse range of maker activities, from robots and drones to knitting – there was even a guest appearance by a Dalek and Doctor Who makeup effects! For 2016 the team are keen to invite more makers to run stalls, workshops and talks about their specialist areas. This free event takes place on June 25 in Liverpool Central Library, just over the road from Liverpool Lime Street station. http://lpoolmakefest.org


Electromagnetic Field What do you get when you mix makers, tents and real ale? You get Electromagnetic Field (EMF), a weekend of camping, hacking and socialising. EMF takes place in Loseley Park, Guildford on 5-7 August and it’s not just electronics and computers: there’s also blacksmithing, biometrics and security. If you are a maker or hacker then this event is a must-attend, just don’t forget your tent and your wellies! www.emfcamp.org


Image credit: Les Pounder

Find and join a LUG




All the latest software and hardware reviewed and rated by our experts

Toshiba CB 2 Enjoying Full HD on a Chromebook just got better thinks Kevin Lee. Specs OS: Chrome OS CPU: 2.1GHz Intel Core i3-5015U (dual-core, 3MB cache) Graphics: Intel HD Graphics 5500 RAM: 4GB DDR3L (1,600MHz) Screen: 13.3-inch, 1,920x1,080 Storage: 16GB eMMC Ports: 1x USB 3.0, 1x USB 2.0, HDMI, SD card slot, headphone and mic jack Comms: Intel Dual-Band Wireless-AC 7260, Bluetooth 4.0 Weight: 1.35kg Size: 32 x 21.3 x 1.93cm

The Toshiba Chromebook 2 was one of the first models to really shake up the Chromebook format with a vibrant 1080p display, and now it’s back with an added Broadwell Core i3 processor and a new backlit keyboard. Unfortunately, the new premium components come with an inflated price tag too, putting it in the crosshairs of premium Chrome OS notebooks like the Google Chromebook Pixel and Dell Chromebook 13.

The refreshed Toshiba is largely the same as its Celeron-powered predecessor from 2014. In fact, it comes sporting nearly the same chassis with matching dimensions. While it looks the same from the outside, going with a Core i3 processor means Toshiba had to put in a cooling fan. The keyboard is more than serviceable, with a traditional layout but backlit keys – a rare feature for Chrome OS devices. Just below the keyboard, the trackpad is sizable and offers precision clicking.

We had little to no performance issues with the Chromebook 2. The laptop ran swimmingly, with dozens of tabs open while we had Google Music and a YouTube video we forgot to close playing in the background. The new Core i3-5015U processor returned results of 21,554 in Octane and 1,535ms in Mozilla Kraken. Contrast these with the previous model running its Intel Celeron 3215U, which scored 16,921 in Octane and 1,976ms in Mozilla Kraken.

Two years ago, the Chromebook 2 was the first Chrome OS laptop to come with a vibrant Full HD screen. Sure, 1080p screens have become more commonplace since then, but Toshiba still makes some of the best-looking displays on any notebook. The Toshiba’s display not only has more pixels but also a higher quality

Toshiba chose a more powerful Core i3 processor and cool Full HD display.

screen in general. Rather than using a TN panel as most Chromebooks have, Toshiba has opted for a TFT screen, which renders vibrant colours and produces deep blacks.
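For context, the Octane and Kraken figures quoted in this review work out to the following rough uplifts for the Core i3 over the old Celeron (a quick sketch, not part of the original benchmarking; Octane is higher-is-better, Kraken lower-is-better):

```shell
# Percentage uplift of the Core i3-5015U over the Celeron 3215U,
# computed from the review's own scores.
awk 'BEGIN {
  printf "Octane: +%.0f%%\n", (21554 / 16921 - 1) * 100   # higher is better
  printf "Kraken: +%.0f%%\n", (1976 / 1535 - 1) * 100     # lower is better, so invert
}'
```

In other words, a 25-30% improvement in both browser benchmarks.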

Affordable upgrade

Of course, the downside of pushing so many more pixels and having a more powerful processor is shorter battery life. The Toshiba only lasted six hours and two minutes while switching between a dozen Chrome tabs, streaming Google Music, slipping into an hour of YouTube and performing some word processing work. This test was done with the display just a tick under 50% screen brightness and the speakers set at roughly 20%. Compared to the original Toshiba Chromebook, the extra power draw of the Core i3 processor shortened battery life by 24 minutes. In a standard movie playback battery test the Chromebook 2 again called it quits after 6 hours and 2 minutes. The Celeron-powered model, meanwhile, had a battery life of 6 hours and 26 minutes. By comparison, the


longest we were able to stretch the Pixel was 8 hours and 22 minutes, and the Dell Chromebook 13 is rated for up to 12 hours of use. The Toshiba Chromebook 2 is largely an improvement over the 2014 model in every way, with more power under the hood and a backlit keyboard. You’ll be paying a bit more for it, but it’s the most affordable option compared to the Pixel, aimed at developers, and the business-minded Dell Chromebook 13. LXF

Verdict Toshiba Chromebook 2 Developer: Toshiba Web: www.toshiba.co.uk Price: £370 (Core i3, 4GB)

Features 8/10
Performance 8/10
Ease of use 9/10
Value 8/10

This updated, thin and light 13-inch laptop is the best, fully-loaded Chromebook for the average user.

Rating 8/10



Reviews Steam Machine

Zotac NEN SN970 Zotac’s latest Steam Machine is a good workman, but as Dave James discovers the tiny gaming box is let down by its tools. Specs OS: Steam OS CPU: Intel Core i5-6400T @2.2GHz (2.8GHz Turbo) RAM: 8GB single-channel DDR3 @1,600MHz HDD: 1TB HDD GPU: Nvidia GTX 960 Comms: Wi-Fi 802.11ac/b/g/n, Bluetooth 4.0, 2x Gigabit Ethernet Ports: 2x USB 2.0, 2x USB 3.0, 1x USB-C, 4x HDMI 2.0 Size: 210mm x 203mm x 62.2mm Extras: Steam Controller

Remember when we thought Steam Machines were a pretty neat idea? Remember when Valve was going to save us from the PC gaming hegemony, nay dictatorship, of the evil Microsoft? Yeah. Things haven’t really fallen out that way, have they? In fact the more time we spend gaming with dedicated Steam Machines, and their Linux-lite OS, the more glorious the homecoming when switching back to a full-fat Linux-based rig.

It’s the same situation with Zotac’s latest Steam Machine, the bizarrely-titled NEN. Which is a damned shame, because Zotac has created one of the most impressive mini-rigs we’ve ever tested, cramming full-size gaming hardware into a pint-sized package. For a start the black and white, Stormtrooper-inspired design is seriously pleasing. But within that diminutive exterior is a Skylake quad-core CPU, the 35W Core i5-6400T. Backing this impressively solid processor up is 8GB of DDR3 memory running at 1,600MHz, though the NEN’s memory system is made up of a single SODIMM, meaning it’s only running in single-channel mode. Still, it offers a little upgrade potential in the future. Though there are far more immediate upgrades you’re going to need to engage in to get the most out of Zotac’s NEN… but we’ll come to that.

The big feature of the NEN is its graphics power. Trapped inside the black and white vented plastic trim is a full desktop GTX 960 with enough

Features at a glance

Controller: We loved the Steam Controller, you might not (but you’d be wrong).

Quiet running: The Zotac Steam Machine runs whisper quiet, ideal for the living room.


A fantastic mini-gaming rig that’s waiting for the games and drivers to catch up.

power management and passive cooling to maintain operation in a small chassis without needing a whining turbine of a fan keeping things chilled. Even at full tilt in-game, the NEN remains remarkably softly spoken. That’s vital for a small gaming PC which has its sights set on the big-screen TV in your living room rather than the desktop of your home office. You don’t want a lounge-based machine roaring away in the corner every time you boot up a game of XCOM 2 to have your arse handed back to you by a ravening alien horde. Also, it’s just plain distracting.

PuffingOS

But if you want to get the absolute most from the impressive performance hardware then we’re going to have to have a quiet word about Steam OS. The pre-installed Debian-based distro had so much going for it. It can essentially turn this mini gaming rig into an easily accessible living room console by booting directly into Steam’s Big Picture Mode. With the bundled Steam Controller you don’t even need a mouse and keyboard to interact any more. Sure, the controller is a bit plasticky, but it’ll do in a pinch. Then from Big Picture Mode you should have access to all your Steam games, with the ability to play each and every one from the comfort of your sofa; a luxury hitherto only afforded our console-class brethren.

www.linuxformat.com

Except you kinda don’t. There is still a dearth of native Steam OS games and, though the number is growing, there’s no getting around the frustration of searching through your library, having to skip over the vast majority that won’t run on your new £800 gaming rig. The big problem, though, is that you still get far greater gaming performance with Windows even if you do find a game that’s Steam OS compatible. In extreme cases the framerate halves, but across the board in our benchmarks the NEN dramatically slowed when running the Linux drivers. This is down to comparing unoptimised Linux games and drivers with highly optimised Windows builds and drivers. But until Valve solves that, Steam OS will always seem like the slower option. LXF

Verdict NEN Steam Machine SN970 Developer: Zotac Web: www.zotac.com Price: £800

Features 9/10
Performance 7/10
Ease of use 9/10
Value 7/10

The promise of Steam OS is alive and this box is a perfect build to run it on, it just needs games to be optimised.

Rating 8/10


Linux distribution Reviews

Rosa Fresh KDE R7 Having failed terribly as a 21st century alchemist, Shashank Sharma appreciates this attempt to fuse KDE with cutting edge software. In brief... Rosa Desktop Fresh is aimed at enthusiasts and experienced users. It’s perfect for those who are fascinated with KDE’s flexibility and are proficient with Linux, and want the latest software and technologies to play with. See also: Mageia, OpenSUSE.

Since its first Linux distribution (distro) release, Rosa Labs has strived to provide an excellent KDE experience to users. Apart from server and desktop editions aimed at enterprises, it also produces the Desktop Fresh line of distros. Targeted at hobbyists as well as advanced users, it offers cutting-edge software. Developed with significant help from the community, the distro undergoes strenuous testing to ensure that none of its software results in an unusable system. With a yearly release cycle, the Desktop Fresh line produces distros with different desktop environments, such as KDE, Gnome and LXDE.

Based on the LTS ROSA 2014.1 platform, supported until autumn 2016, the distro features a highly customised KDE, the hallmark of every Rosa release. Available for 32- and 64-bit machines, the ISO image is just shy of 2GB and packed full of everyday apps, games and other bits to please home users and enthusiasts. The latest edition offers widely improved support for AMD and Intel graphics, which is essential for its Steam support. Also included is support for Microsoft Hyper-V, making it possible for users to deploy Rosa in the public Azure Cloud. As a distro that strives to improve itself, the latest release also features a new rasteriser from Adobe and Google which greatly improves font smoothing and results in a more pleasant desktop experience.

Features at a glance

TimeFrame: A custom visualisation tool that can be used to work with images and various office document files.

StackFolder: A handy app for directly opening files in associated apps and previewing image and video files.

Still fresh, still cutting edge and the benchmark for new KDE distros to beat.

Since its first release, Rosa has focused on delivering a custom build of KDE. Apart from its many tweaks, Rosa also offers home-grown software such as TimeFrame and StackFolder, which have since found their way upstream into KDE.

Freshness guaranteed

With StackFolder, Rosa has developed a way for users to create quick access to frequently used folders and files. To begin, users only have to drag and drop files or folders from the Dolphin file manager onto a panel called the RocketBar at the bottom of the desktop, and select the StackFolder option. Once you create a stack folder, you can quickly browse the subdirectories or even preview thumbnails of photos and video files. The desktop also features the SimpleWelcome launcher, which makes finding the vast array of included applications a breeze. All the applications are grouped according to their functionality, such as Internet, Office and Graphics. The launcher is also more responsive in this version, after dropping Plasma resulted in decreased memory usage. Along with SimpleWelcome, the TimeFrame content visualisation tool has been rewritten from scratch in QML. This relies on the metadata collating capabilities of Nepomuk and makes it possible to easily monitor


activity at specific dates. It also features support for social networks and makes it possible to access Facebook without launching a browser. Along with these fine new additions, the distro also boasts several important bugfixes, such as installing the distro using USB3 devices, and improved support for Thinkpad USB keyboards. Thanks to its ample software repositories, the distro can easily be tailored to serve just about any function, be it an entertainment station for home users or a workstation for developers, and while the distro is intended for experienced users, there’s no reason why it can’t serve new Linux users too. In fact, with its sizeable community it’s the perfect environment for new users to get their feet wet. LXF

Verdict Rosa Desktop Fresh KDE R7 Developer: Rosa Labs Web: www.rosalab.com Licence: GPL and others

Features 9/10
Performance 9/10
Ease of use 9/10
Documentation 7/10

Sets the bar very high for KDE distros with an almost perfect release that is rock solid yet highly customisable.

Rating 9/10



Reviews Linux distribution

exGENT Shashank Sharma tries out another Gentoo-based distribution, determined not to let his love for the distro’s ancestry cloud his judgement. In brief... A Gentoo-based live installable distribution with a lower bar of experience required from users, the distro hopes to attract into the fold users who have been too afraid to try Gentoo’s installation. The distro is full of applications and features the lightweight Xfce desktop. See also: Gentoo, Sabayon.

All Gentoo-based live distributions (distros) strive to bridge the divide between a novice user’s familiarity with Linux and the requisite experience that would make it possible for them to play with Gentoo. What makes exGENT special is that it’s designed to attract new users to Gentoo without taking away the learning curve entirely. Available solely for 64-bit machines, the distro features a vast array of applications and software to satisfy even the most particular users. You can run the distro from a live disc or USB drive, and the simple installation process makes it possible for even novice users to get a working Gentoo system in a matter of minutes. However, on our dual-core test machine with 2GB RAM, booting into the live environment took several minutes. This is despite the developer claiming that the distro is faster than most of its peers. Once logged in, the distro undergoes a transformation and becomes blazingly fast. The Xfce desktop may appear bland to users who have witnessed the recent releases of Gnome or KDE, but the speed benefit of a lightweight desktop becomes obvious within the first few minutes of running the distro.

Gentoo is one of the best documented distros in the Linux ecosystem, but that’s not reflected by exGENT, which only provides limited instructions on its website. These include directions for installing the

Features at a glance

Effortless installation: The installation script and the hardware detection simplify the most difficult elements of a Gentoo install.

Rolling release: Takes away the worry of having to constantly install a fresh system with each major release.


As a stable and incredibly fast distro, exGENT features all the characteristics that make Gentoo amazing, albeit with less documentation than we’d like to see.

distro onto the hard disk or a USB drive. There’s also some detail about the supported NVIDIA cards. As with its parent, exGENT requires you to manually create and format the root and swap partitions. To ease the process, however, the distro ships with GParted, to help you carve out space for the distro and format partitions.

Gentoo for the masses

The installation itself is run via a shell script, which takes care of all the rest. You don’t need to worry about package selection, configuring the timezone or your hardware. The distro also boasts excellent auto detection and configuration of hardware. All graphics cards, wireless cards and webcams, and other assorted hardware were flawlessly detected by exGENT and configured for use without any issues. It’s a shame that the script doesn’t provide a progress bar or any output on the screen to suggest that the distro is being installed, though. The final step in the installation is to configure the bootloader. For this, you need to refer to the installation instructions if you already have a bootloader installed or plan to install the default legacy Grub. What makes Gentoo such a pleasure to work with is its incredibly powerful package management system, Portage.


In its attempt to further ease users into Gentoo, the distro ships with a graphical front-end for this called Porthole. Although similar in appearance and functionality to many of its peers, such as Synaptic, Porthole is incredibly fast and can help you easily install additional software if needed. New users will also appreciate the Settings Manager utility, which provides a single point of access to tweakable elements. When compared to other distros, the only area where exGENT is lacking is in documentation, but otherwise it’s recommended for anyone who has been waiting on the sidelines for a perfect rolling-release distro. LXF

Verdict exGENT Developer: Arne Exton Web: http://exgent.exton.net Licence: GPL and others

Features 10/10
Performance 10/10
Ease of use 8/10
Documentation 9/10

With its speed and configurability, this distro doesn’t compromise on anything that makes Gentoo so great.

Rating 9/10


Linux distribution Reviews

Nelum OS When an Ubuntu derivative, a minimalist distro and a Debian-based rolling distro walk into a bar, Shashank Sharma thinks it’s the start of a bad joke. In brief... Although based on the still under development Ubuntu 16.04, Nelum OS is lightweight and fast. With its set of default apps and the ability to play multimedia out of the box, the distro strives to provide a usable system for novice and inexperienced Linux users. See also: Lubuntu, Puppy Linux.

For a fresh entrant to the Linux ecosystem, choosing Ubuntu as its base is not a difficult decision. The Nelum project, however, in an incredible display of ambition and grit, has produced three distributions (distros). The distro we’ve reviewed, Nelum OS, is based on Ubuntu 16.04 and is targeted at new Linux users. There’s also a version based on Debian Sid called Nelum Openbox, which is a rolling release featuring experimental software and thus aimed at more experienced users. The third variant, NelumBang, is a self-proclaimed minimalist distro. Instead of a fully featured desktop environment, all three Nelum variants feature an Openbox window manager complete with a pre-configured Conky system monitor.

Nelum OS is based on the development branch of Ubuntu 16.04. While this would suggest that the distro would be riddled with bugs, the final product is quite stable, with just the occasional non-critical quirk, eg when running the live system, Nelum fails to correctly identify your timezone, a feature most other distros have no difficulty with. What’s more, there’s no way for you to correct this error using the GUI. You must launch the terminal and run sudo date -s xx:xx to set the correct time. Thankfully this issue goes away once you install Nelum to disk. While the distro easily detected our graphics and wireless cards on our test machines, it failed to properly render

Features at a glance

Lightning fast: Features resource-sensitive software, such as App Grid and Catfish, in addition to the Openbox window manager.

Highly configurable: Includes Openbox and Conky, which provide the opportunity to greatly customise the distro.

The Nelum OS desktop environment features a dock at the bottom featuring everyday applications as well as a panel at the top.

the fonts on certain applications, such as the Terminal. Also, VLC Media Player worked flawlessly with multimedia keyboards, but the Nelum desktop itself didn’t respond to the special control buttons.
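The live-session clock fix mentioned earlier boils down to a one-liner at the terminal. A sketch of the sequence; “14:30” is a placeholder for your actual local time (the review itself only gives the generic “xx:xx” form):

```shell
# Check what the live system currently thinks the time is...
date +%H:%M
# ...then set it by hand. Setting the clock needs root, so it is
# shown commented out here:
# sudo date -s "14:30"
```

As the review notes, this should no longer be necessary once Nelum is installed to disk.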

Work in progress

Apart from these issues, Nelum OS suffers from some bothersome nuisances, eg the first installation screen points users towards the Release Notes, but clicking on the link takes you to a domain hosting/purchase site. Additionally, while the distro can play various video and MP3 files out of the box, the sound vanishes for no apparent reason. To cap off our gripes, our attempts at launching any of the LibreOffice applications produced an error, and there’s no bug reporting mechanism. The distro also doesn’t have a mailing list, forums, a wiki or much in the way of documentation. However, the Openbox menu does provide links to Debian and Arch documentation in addition to guides for Conky and Openbox. In a departure from most Ubuntu derivatives, Nelum doesn’t use Ubuntu Software Center. Instead it ships with Synaptic and App Grid. The latter is written from scratch as a Center


alternative and provides ratings, reviews and screenshots for software, and is incredibly fast and lightweight. For users unhappy with having to navigate application menus, the distro has a launcher with a search box to help you locate applications. Also included is the Catfish file search tool, which can locate any file on the system in seconds. Currently, Nelum OS doesn’t offer any compelling reasons for choosing it over other lightweight distros, such as Puppy Linux. Although it’s generally well put together, the distro has a long road ahead before it can even aspire to build a community of users around itself. LXF

Verdict Nelum OS Developer: Nelum Project Web: http://bit.ly/NelumOS Licence: GPL and others

Features 6/10
Performance 7/10
Ease of use 8/10
Documentation 5/10

The project has spread itself too thin with three releases, and it has a long road of hard work ahead of it.

Rating 6/10



Reviews Linux game

XCOM 2 After last year, tackling illegal aliens in a strategy game turned out to be a slightly different job to the one Tom Senior thought it was. Specs Recommended OS: Steam OS, Ubuntu 14.04 64-bit CPU: Intel Core i7 RAM: 8GB GPU: Nvidia 960, 2GB VRAM (Intel and AMD are not supported) HDD: 45GB

Every squad member is valuable, and painful to lose.


Humanity seems doomed from the outset in XCOM 2. The game assumes you failed in your attempts to repel the alien invasion in Enemy Unknown. Now Earth’s citizens live a coddled life under the totalitarian control of the aliens and their co-opted soldiery, Advent. The resistance lives on only in the form of a few determined soldiers, scientists and engineers who have managed to repurpose a huge alien ship, the Avenger. This is your home. A detailed cross-section of the vessel lets you zoom into rooms to initiate research and building projects. A central cluster of rooms can be cleared out to build new facilities, and on the bridge you access the Geoscape, a map of the world that lets you choose where you want to park your spacecraft.

To fight back, you must expand your reach from your lone starting territory by contacting nearby resistance groups. Time is frozen on the Geoscape, but once you park over an objective—make resistance contact; acquire resources; contact the black market—you activate a timer and spend days to claim it. This is nerve-racking. At any moment your scans can be interrupted by an alien attack, or a mission that will let you attack the aliens. You can choose to ignore some of these, but it’s not wise. Missions net you important resources, give your soldiers a chance to gain experience, and counter ‘Dark Events'—varied alien initiatives that, among many options, can halve your income for a month, or send an interceptor out to hunt the Avenger.

The game cleverly uses scarcity of opportunity to force you into difficult dilemmas. You need to recruit new

An ill-judged move or sheer misfortune can doom a soldier.

rookies; you need an engineer to build a comms facility that will enable you to contact more territories; you need alien alloys to upgrade your weapons. You can’t have all of these. You can probably only have one.

Doomed

Brilliantly, you even have to scan to collect your monthly cache of supplies, hidden in the landscape to escape alien detection. We left supplies on the ground for a week because we needed to recruit an engineer. We needed to hit an alien base to reduce the Avatar Project count—a doom clock that’s very bad news if it maxes out. We needed Advent corpses to get a vital armour upgrade. We needed a cup of tea because it was all getting a bit too much. This narrow series of opportunities fits the fantasy perfectly. You take whatever you can get. You’re scraping food and fuel out of the dirt to keep The Avenger in the air. The moment the timer freezes during a scan, you stop breathing. There’s a notification screen you have to click through to find out what is about to try to kill you. If you’re lucky, it’s the council getting in touch to give you a thumbs-up and tell you they’ve dropped some sandwiches for you in South America. If you’re unlucky you’ll be faced with XCOM 2’s equivalent of Enemy Unknown’s Terror missions. Combat is turn-based, and takes place on procedural battlefields that are uncannily well generated. The snowy forests, slums, city centres and alien

www.linuxformat.com

bases are varied both in decorative assets, such as sleek futuristic cars and fluffy trees, and in the vertical variation provided by cliffs and multi-storey buildings. They blow up nicely, too. Once concealment is broken, life becomes much more difficult. Successful shots are dictated by chance rolls, and you secure favourable odds by staying in good cover and flanking. A poor move or bad luck can wipe out a soldier, or take them out of action for days.

Time-limited objectives to hack a terminal or rescue/assassinate a VIP in a certain number of turns force you to be reckless. We won’t ruin the surprise and horror of the more advanced alien troops, but a couple left us in despair after a massacre. Thanks to your varying starting position, procedural missions and tactical depth, XCOM 2 can and should be played repeatedly. LXF

Verdict

XCOM 2
Developer: Firaxis
Web: https://xcom.com
Price: £35

Features 9/10
Performance 8/10
Ease of use 8/10
Value 8/10

Exceptionally tough, rewarding strategy game and a masterful reworking of the XCOM formula.

Rating 9/10




Roundup

Every month we compare tons of stuff so you don’t have to!

Repair and rescue Don’t let a computer malfunction get the best of you. Mayank Sharma tests specialised distributions that help you restore order.

How we tested...

We’ll begin by assessing the distros for the depth of their tools collection – are they truly one-stop shops for all kinds of repair and rescue jobs? We’ll also compare how effective they are in helping you prepare for an upcoming hardware calamity, in addition to their life-saving abilities when disaster strikes. While effectiveness is a complex metric that depends on various factors, we’ll focus on how useful a distro is in helping prepare the user for the job. We’ll also try to determine whether each distro has a strategy in mind for which tools are bundled. Since repair and rescue is a specialised system admin task and most of the tools could wreak havoc in the hands of a careless, uninformed user, we’ll also highlight projects that help users with good documentation, either on a website or inside the distro itself.

Our selection: Finnix, Rescatux, SystemRescueCd, Trinity Rescue Kit, Ultimate Boot CD

24 LXF209 April 2016

Linux is often – and correctly – touted as a stable and resilient operating system. But a distribution (distro) runs on hardware, which fails more often than we’d like to believe, and is operated by individuals who are prone to making silly mistakes, such as forgetting a password or inadvertently corrupting the boot loader. Whether you’re beating yourself up for accidentally deleting important files or cursing the hard disk for corrupting system files and rendering the system non-bootable, there are specialised tools that can help you rectify the issue.

“Distros designed to put all the applications you need for repair and restoration in one place.”

While you can find many of these tools in the repositories (repos) of most Linux distros, zeroing in on the right tool for the job can be a cumbersome experience given the myriad choices. This is where a repair and rescue distro comes into play. These specialised distros are designed


to put all the applications you need for computer repair and restoration in one place. The popular ones go one step further and even have customised UIs that present the tools and their features in an organised manner. In this Roundup we’ll look at some of the best ones for putting you at ease when your computer starts acting up.


Repair and rescue distributions Roundup

Tools repository What are they packing?

Finnix is one of the smallest distros in the Roundup and despite weighing in at just over 400MB, its tool list remains pretty comprehensive. The distro has some larger utilities, such as Partimage and the Smart Boot Manager, but it’s also chock-full of smaller tools. There are tools for accessing and maintaining all of the major filesystems, as well as those for restoring, backing up and repairing damaged volumes. There are fairly straightforward tools to enable normal network access and set up Wi-Fi, and it’s possible to set Finnix up as a file server using FTP, Samba or NFS. There are also more esoteric tools for network diagnostics and monitoring. However, the smallest distro in the Roundup is the 150MB Trinity Rescue Kit (TRK), which ships with no graphical desktop environment. The distro doesn’t subscribe to the ‘one tool for one job’ philosophy of some of its competition and, for instance, ships with five different virus scanners; as well as the usual gamut of repair and recovery

tools, TRK can also clone computers over the network using multicast. Ultimate Boot CD (UBCD) is another distro that ships with no graphical desktop and, just like TRK, doesn’t subscribe to the concept of bundling a single tool for a single task. The goal of the 600MB distro is to pack as many diagnostic tools as possible into one bootable CD. To that end, UBCD includes tools to restore BIOS settings and stress test the hardware, as well as various boot managers, disk management tools and a host of other useful utilities. The other 600MB distro, Rescatux, ships with a minimal graphical desktop and bundles all the important and useful tools to fix several issues with non-booting Linux and Windows installs, including TestDisk, PhotoRec and GParted. You can use Rescatux to restore bootloaders, repair filesystems, fix partition tables and reset passwords on both Linux and Windows installs. Last but not least, the Gentoo-based SystemRescueCd ships with a minimal

You can use the Ultimate Boot CD to query and stress test hardware and peripherals connected to a system.

Xfce desktop and a handful of graphical applications, such as the Midori web browser and the ePDF viewer. Yet the 450MB distro packs in several repair and rescue tools and utilities, and comes with support for all the important filesystems, including ext4, XFS, Btrfs, NTFS and ReiserFS, as well as network filesystems such as Samba and NFS. Furthermore, the distro also includes disc images for several tools, such as FreeDOS, GAG (a graphical boot manager) and the Ranish Partition Manager.
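Many of these rescue jobs ultimately come down to careful block-level copying. As a rough, coreutils-only sketch – using a scratch file in place of a real failing device such as /dev/sdb – this is the classic dd incantation for cloning a volume while skipping unreadable sectors:

```shell
# Stand-in for a failing disk; on real hardware this would be /dev/sdb
dd if=/dev/urandom of=sick.img bs=1M count=1 2>/dev/null

# conv=noerror carries on past read errors; conv=sync pads any
# unreadable blocks with zeros so the output image stays aligned
dd if=sick.img of=rescued.img bs=4096 conv=noerror,sync 2>/dev/null

cmp sick.img rescued.img && echo "clone matches"
```

For genuinely dying disks, distros such as SystemRescueCd also ship GNU ddrescue, which retries bad regions and keeps a log so an interrupted rescue can be resumed.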

Verdict Finnix

+++++ Rescatux

+++++

SysRescCd

+++++ TRK

+++++ UBCD

+++++ Stick to Rescatux and SysRescCd with their GUIs.

Customisability Can you make it your own?

The Debian-based Finnix includes several options to customise the distro, including overlays and a remastering script. The website has detailed instructions on editing the layout of the ISO and using the custom-built scripts to repackage the distro.

Despite the instructions, this procedure is only meant for advanced users. Rescatux is also Debian-based and includes the Synaptic package manager for installing additional software, but there’s no mechanism to save the changes to the distro.

Gentoo users will be able to spin customised versions of the SystemRescueCd with little effort if they follow the comprehensive guide supplied.

In response to feedback, SysRescCd’s website hosts a detailed step-by-step guide that lists the procedure for adding custom packages and building a new ISO. The distro comes with four kernels and there’s also a guide on how to build SysRescCd with your own kernel. Unlike the Finnix guide, SysRescCd’s is more detailed and can be used by relatively inexperienced users as well. Like Finnix and SysRescCd, the UBCD website has a guide that details the procedure for decompiling the ISO image, customising the environment – by adding your own disc images and FreeDOS-based apps – and then compiling a new ISO image. TRK ships with a utility that turns the current running environment into an ISO image. There’s also a guide on the website that lists both the process and the areas you need to modify to create a customised build.

www.techradar.com/pro

Verdict Finnix

+++++ Rescatux

+++++

SysRescCd

+++++ TRK

+++++ UBCD

+++++ You can customise most distros with ease except Rescatux.




User experience How does it all tie together?

Your distro’s software repos are replete with tools that help fix errors and rescue Linux and Windows installs, and the data inside them, from disaster. The distros we’ve covered in this Roundup package the best and most useful of these tools in one

convenient package. However, having all the tools in one place would only be of use to expert users with some prior experience with the tools. The average user will need some handholding: first to get to the tools and then to use them. That’s because many of them

lack a graphical interface and, if used improperly, can make a bad situation worse. For this reason, we’ll pay special attention to distros that have spent time designing interfaces to aid the user, and give more emphasis to built-in documentation.

Finnix ++

The distro’s boot menu gives several boot options. It defaults to 64-bit, but it also offers the option to load the 32-bit kernel. The remaining options will take you to a FreeDOS prompt, the Smart Boot Manager or initiate the hardware detection tool. Using the default option, the distro boots very quickly and, after going through the boot process, drops you at the command line. This might come as a surprise to first-time users, especially as there’s no indication of how to proceed from there. Finnix’s tool list is comprehensive, but its usability is limited thanks to it being a command-line-only distro. The project’s website isn’t a huge help in this regard either. There’s general documentation on some aspects of the distro, but the lack of a getting-started-type guide is quite an omission and a stumbling block for new users.

Rescatux +++++

Upon booting, the distro takes you to the minimal LXDE-powered graphical desktop and automatically fires up its custom helper application called Rescapp. The application’s interface has improved through the releases, and in its latest version hosts several buttons divided into various categories, such as Grub, Filesystem and Password. The buttons inside each category have descriptive labels that help identify their function. When you click a button, it brings up the relevant documentation which explains in detail what steps Rescatux will take and what information it expects from the user. After you’ve scrolled through the illustrated documentation and know what to expect, you click the button labelled ‘Run!’ to launch the utility. Advanced users can bypass Rescapp and fire up a terminal and access the rescue tool directly. The distro also has a plethora of video guides on its website.

Support & rescue docs Because you surely need some handholding.

Verdict

Finnix

Finnix hosts documentation on some aspects of the distro, such as the various boot parameters, sparse guides on how to restore bootloaders, and how to use the distro for computer forensics. Most of the information is written to guide experienced users and the website lacks a beginner’s guide. Similarly, the tutorial section on the Ultimate Boot CD’s website isn’t well organised. It points to a list of user-contributed tutorials curated from the forum


boards, and the documentation on the project’s wiki isn’t of much help either. Most of it is incomplete and isn’t intended for new users. The documentation on the websites of the other three distros doesn’t disappoint, however. Rescatux hosts lots of guides and instructional videos. There’s also some documentation to help inexperienced users get started with the distro. SysRescueCd has a quick start guide as well as detailed instructions on basic and advanced use.


The website also hosts instructions for experienced campaigners, such as the guide to make a custom version of the distro and back up data from an unbootable Windows computer. The documentation section of TRK’s website begins with a quick start guide before moving on to detailing the capabilities of its bundled tools and custom utilities. There’s also a very helpful section that details the procedure to follow when faced with certain situations, such as dying disks.

+++++ Rescatux

+++++

SysRescCd

+++++ TRK

+++++ UBCD

+++++ Rescatux, SysRescCd and TRK all have howtos and tutorials.


SystemRescueCd +++

The distro has one of the most comprehensively laid out boot menus we’ve seen. Besides the dozen options listed on the main screen, there’s help information explaining the various advanced boot parameters over the several virtual consoles. The default boot option boots into a console which lists some basic commands to configure the network, mount NTFS partitions and start the graphical environment. However, inside the JWM-powered minimal desktop you’re pretty much on your own. The distro doesn’t bundle any help documentation and new users will need to refer to the documentation on the project’s website. You’ll also have to manually invoke the rescue and repair tools, much like Finnix, although you do get the comforts of a graphical desktop and a web browser to access the introductory information on the project’s website.

Trinity Rescue Kit ++++

In a similar way to SysRescCd, TRK has elaborate boot menu options and lists about two dozen options. The default boots you to a custom text-based menu. Although TRK is a command-line distro, the customised menu does a wonderful job of helping the user locate and run the right tool. The listed options are also descriptive enough to point the inexperienced troubleshooter towards the right tool. The initial options bring up user documentation built into the distro, while some options go direct to online documentation on the website via the Links web browser. Several of the menu items bring up more options for tweaking the behaviour of specific tools. The sub-menus also point to help files for a tool, which basically brings up the man page. Advanced users can drop down to the shell and invoke the tools manually.

Ultimate Boot CD ++++

Despite packing a large number of tools, the UBCD boots in a snap. Unlike some others here with elaborate boot menus, this distro boots directly to its customised text-based menu. The menu divides the packaged utilities into categories based on the area of the computer that they influence, such as the BIOS, HDD and memory, and each category drills down into entries for the individual tools. Some categories, eg HDD, are further divided into different tasks, such as Data Recovery, Disc Cloning and Disk Wiping. As you scroll over each tool inside any category, the menu will display brief but pertinent information about the tool. Just as with TRK, the menu might be text-based but the clear categories and helpful descriptions mean that the distro can be easily navigated and used by inexperienced users to fix all kinds of issues.

Security features Verdict

Can they help you maintain privacy?

While the primary purpose of each of these distros is to help you recover from disasters, they also bundle utilities to secure your system and prevent privacy leaks. In Finnix, you’ll find the command-line wipe utility for secure file deletion and GNU Privacy Guard (GPGV) for verifying GPG signatures. Rescatux also ships with GPGV and includes shred for permanently deleting files. SysRescCd bundles several tools to ensure that files are permanently

erased from the hard disk. There’s also chkrootkit which searches for rootkits. Finally, there’s the md5deep utility which is used in the computer security, system admin and computer forensics communities to run large numbers of files through any of several different cryptographic digests. As previously mentioned, TRK has several virus scanning engines, including ClamAV, F-Prot, BitDefender, Vexira and Avast. The distro also includes a tool to make md5sums of all

the files on the system and the winclean utility to clean out unnecessary files, such as temp files from a Windows based computer. Like the number of tools for its primary purpose, the UBCD contains several tools for security and privacy as well. Besides the ClamAV and F-Prot scanner, it has tons of tools for wiping the disks including DBAN, HDDErase, HDShredder for all kinds of filesystems and several low-level utilities for securely wiping various hard disks.
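To make the hashing and wiping workflow concrete, here’s a small coreutils-only sketch – sha256sum stands in for md5deep’s recursive digesting, and the directory and file names are invented for the example:

```shell
# Build a manifest of digests for every file in a tree, md5deep-style
mkdir -p evidence && echo "customer records" > evidence/data.txt
find evidence -type f -exec sha256sum {} + > manifest.sha256

# Later, verify that nothing has changed
sha256sum -c manifest.sha256

# Overwrite a sensitive file in place, then unlink it
shred -u evidence/data.txt
```

On journalling or copy-on-write filesystems shred’s overwrite guarantee is weaker, which is one reason these distros also carry whole-disk wipers such as DBAN.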


Finnix

+++++ Rescatux

+++++

SysRescCd

+++++ TRK

+++++ UBCD

+++++ SysRescCD, TRK and UBCD pack in several tools to help with privacy.




Custom tools and UIs Do they go the extra mile?

There’s more to creating a successful repair and rescue distro than just packing in as many tools as possible. It’s the small tweaks and customisations that help a distro stand out from the crowd. A good example of this is one of Finnix’s custom features: a script to ease setting up netbooting of the distro from another machine. The script

generates an appropriate initrd, which includes network and NFS modules. The utility will also set up NFS and TFTP services, so that a running Finnix session can become a netboot server. The distro also includes a utility called finnix-hwsubmit which enables you to submit specification details about your hardware platform to the developer to help with bugfixing.
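The DHCP/TFTP half of such a netboot setup can be sketched with a minimal dnsmasq configuration. This is a hand-rolled illustration of the general technique, not Finnix’s actual script; the address range and paths are invented for the example:

```
# /etc/dnsmasq.conf – minimal PXE netboot server
# Hand out leases on the local subnet (range is an example)
dhcp-range=192.168.0.50,192.168.0.150,12h
# Filename the PXE client should request at boot
dhcp-boot=pxelinux.0
# Use dnsmasq's built-in TFTP server to serve the kernel and initrd
enable-tftp
tftp-root=/srv/tftp
```

The NFS side is then just an export of the distro’s root filesystem, with the client’s kernel command line pointing at it.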

The Trinity Rescue Kit has integrated a whopping five different virus scanners into a single command-line custom utility.

SysRescCd has introduced support for loading SRMs (System Rescue Modules). These are squashfs filesystems that contain extra files that become part of the system. This feature is useful for adding new applications to the distro. In addition to applications, the modules can also be used to add custom data files. There’s also the sysresccd-backstore utility, which can be used to build a loopback filesystem where you can save custom files. As we’ve mentioned previously, Rescatux, TRK and UBCD all boot to custom text-based or graphical menus that help guide a user to the relevant utility for their system issue. In Rescatux, the menu leads to custom interfaces for command-line utilities that employ a wizard to step through the various stages of the process. Similarly, in TRK the menu can be used to configure and run several tasks, such as scanning for viruses and resetting Windows passwords, via customised text-based wizards. The distro also contains a user-contributed backup application called pi and custom scripts to ease several administration tasks, such as the ability to find and mount all local filesystems.

Verdict Finnix

+++++ Rescatux

+++++

SysRescCd

+++++ TRK

+++++ UBCD

+++++ If you’re not familiar with recovery tools, it’s best to avoid Finnix.

Healing capabilities What ailments can they fix?

Whether you’ve erased the master boot record (MBR) or locked yourself out of your account, the tools in these distros will help you out of a tricky spot. But some distros are more adept than others. Finnix, for example, is loaded with tools. It can do everything from resetting the root user’s password to managing, restoring and repairing damaged filesystems and volumes. But the distro can’t be used to restore the Grub 2 bootloader. Rescatux has no such limitations: it can repair all sorts of bootloaders; help you restore the MBR; reset passwords on both Windows and Linux installs; and repair all the popular filesystems. The distro also ships with tools to rescue data and restore files, and can wipe both Windows and Linux installs.
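Since the MBR lives in the first 512 bytes of a disk, backing it up before disaster strikes is a one-liner. Here’s a sketch against a scratch image – on a real system you’d target /dev/sda instead, and `sfdisk -d` gives a safer, human-readable dump of just the partition table:

```shell
# Scratch 2 MiB image standing in for a real disk
dd if=/dev/urandom of=disk.img bs=1M count=2 2>/dev/null

# Back up sector 0: 446 bytes of boot code plus the partition table
dd if=disk.img of=mbr.bak bs=512 count=1 2>/dev/null

# Simulate an erased MBR, then restore it from the backup
dd if=/dev/zero of=disk.img bs=512 count=1 conv=notrunc 2>/dev/null
dd if=mbr.bak of=disk.img bs=512 count=1 conv=notrunc 2>/dev/null

head -c 512 disk.img > sector0.bin
cmp sector0.bin mbr.bak && echo "MBR restored"
```

Note conv=notrunc, which writes the sector in place without truncating the rest of the image – exactly what you want when patching a disk.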


SysRescCd can save and restore a partition table, and save the contents of a filesystem to a compressed archive file using FSArchiver. Besides recovering deleted files and detecting and cleaning viruses, it can also image and clone disks; identify and list damaged sectors; and extract recoverable data from physically damaged disks.

This version of TRK has an improved Winpass tool, which can be used to easily reset Windows passwords. Of course, you can also use TRK to recover and undelete files and even recover lost partitions. However, it’s the UBCD that offers the most mileage. The distro can restore BIOS settings as well as restore and back up the CMOS. It can fix bootloaders, recover lost passwords and deleted files. It will also help you tweak the Windows Registry without booting into the installation.

The boot menu of SystemRescueCd offers suggestions on troubleshooting boot problems.


Verdict Finnix

+++++ Rescatux

+++++

SysRescCd

+++++ TRK

+++++ UBCD

+++++ True to its name, UBCD out-scores the others for repair and rescue tools.


Ruling repair and rescue distro

The verdict

Just to reiterate, this Roundup wasn’t a measure of which distro has the most tools. If it were, Ultimate Boot CD (UBCD) or Finnix would probably have trumped the others. Instead, our objective was to find the distro that can be used by the largest number of people, irrespective of their admin skills and prior experience. In that respect, Finnix loses out because the distro is designed for advanced users, as well as being command-line only. You can get a lot done with the distro, but only if you know exactly what you’re doing. Similarly, SystemRescueCd is designed for people with the technical expertise to digest the information in its boot menu and on its website. TRK loses out because of its uncertain future. The distro hasn’t been updated for a while, though it still works well, and its customised menu, despite being text-based, makes it accessible to a large number of people.

1st Rescatux

+++++

Web: http://bit.ly/Rescatux Licence: GNU GPL Version: 0.40b5

The most user-friendly distro for wiggling out of a tricky situation.

This boils down to a close contest between UBCD and Rescatux for the top spot. While the former has a larger number of tools, the latter is more welcoming to first-time and inexperienced users. And when it comes to novice users, giving them a lot of options isn’t always the best strategy, since they might end up picking the wrong tool for the job. Considering these factors, we’ll have to award this Roundup to Rescatux, since the distro is designed to be used by people who should be able to use the tools without spending a vast amount of time attempting to wrap their heads around the many details. The credit for Rescatux’s win goes to its awesome Rescapp utility along with its custom interfaces for the command-line tools, and the well-illustrated built-in and online documentation.

Rescatux might lack the depth of tools found in some distros such as UBCD, but it does a wonderful job of getting users to use the tools that it does include. Additionally, the developer is fairly active and responsive to requests for fleshing out the distro. It might look less aesthetically polished but you can bet on Rescatux to save your day and data.

“The credit for Rescatux’s win goes to its awesome Rescapp utility along with its custom UIs.”

2nd Ultimate Boot CD

+++++

Web: www.ultimatebootcd.com Licence: GNU GPL Version: 5.3.5

True to its name, it’s stuffed with tools for repair and rescue tasks.

3rd Trinity Rescue Kit

+++++

Web: www.trinityhome.org/trk Licence: GNU GPL Version: 3.4

A wonderful distro that works for now but has an uncertain future.

4th SystemRescueCd

+++++

Web: www.sysresccd.org Licence: GNU GPL Version: 4.7.1

A good option for experienced users who can digest the docs.

5th Finnix

+++++

Web: www.finnix.org Licence: GPL and others Version: 111

Loses out for its lack of any custom tools to assist users.

Over to you... Have you ever lost sleep because of a computer malfunction? Let us know how you saved the day at lxf.letters@futurenet.com.

Also consider... There aren’t many other distros designed for repair and rescue applications besides the ones that we covered in the Roundup. The Italian PoliArch distro does look like a nice addition to our list. However, it’s homepage and most of its documentation is in Italian which heavily limits its user base. The other option is to use live CDs built around one

particular tool. Some of the popular rescue and repair tools make themselves available in finely tuned, minimalist live CDs. If you only need to use that one tool, you can always use the live CD version instead of an elaborate general-purpose rescue distro. An example of this is the Boot-Repair tool, which has a live CD that’s excellent for


repairing and fixing all sorts of bootloader-related issues. In addition to Boot-Repair, the distro includes the GParted and OS-Uninstaller applications. GParted has also released its own live CD which includes the TestDisk utility. You can find TestDisk on a number of other distros, such as Hiren’s Boot CD, which has several tools for fixing Windows installations. LXF



Subscribe to Linux Format

Get into Linux today!

Choose your package

Print £63

For 12 months

Every issue comes with a 4GB DVD packed full of the hottest distros, apps, games and a lot more.

Digital £45

For 12 months

The cheapest way to get Linux Format. Instant access on your iPad, iPhone and Android device.

On iOS & Android!

Bundle £77

SAVE 48%

For 12 months

Includes a DVD packed with the best new distros. Exclusive access to the Linux Format subscribers-only area – with 1,000s of DRM-free tutorials, features and reviews. Every new issue in print and on your iOS or Android device. Never miss an issue.




Get all the best in FOSS. Every issue is packed with features, tutorials and a dedicated Pi section.

Subscribe online today: myfavouritemagazines.co.uk/LINsubs Prices and savings quoted are compared to buying full-priced UK print and digital issues. You will receive 13 issues in a year. If you are dissatisfied in any way you can write to us at Future Publishing Ltd, 3 Queensbridge, The Lakes, Northampton, NN4 7BF, United Kingdom to cancel your subscription at any time and we will refund you for all un-mailed issues. Prices correct at point of print and subject to change. For full terms and conditions please visit: myfavm.ag/magterms. Offer ends 12/04/2016




Embed Linux

POWER UP! Jonni Bidwell sees Linux everywhere – because it is. Our favourite operating system turns out to be much more popular than some naysayers give it credit for.

Desktop Linux machines are in the minority – it’s hard to get an exact figure, but most surveys estimate usage to be less than 3%, and Linux usage on Steam tends to hover around 1%. And that’s OK, as there isn’t really a compelling need for Linux to be at the top of those tables; its continued existence isn’t threatened just because people – inexplicably – still need to run Microsoft Office so that their TPS reports can be filled out correctly. Indeed, many users enjoy being in the minority; some even deign to look down on the desktop masses.

But desktop Linux was only ever a side-effect of Linux’s unparalleled growth in other areas. For servers and supercomputers, Linux’s reliability, customisability and lack of a price tag made it an ideal fit, and here it has been dominant since, respectively, the late ‘90s and early Noughties. There’s also the small matter of two billion Android smartphones.

But Linux is more pervasive than this: if you were to look closely enough, you would find Linux embedded in all kinds of places, from home routers, smart TVs and Blu-ray players to electronic signage, vehicular ECUs and industrial control systems. Since Linux has been ported to so many different CPU architectures it is an attractive option for manufacturers. The alternative to using Linux is having to develop (and maintain) a bespoke OS, which is at best re-inventing the wheel and at worst costly, risky and time-consuming.

“If you look closely enough, you will find Linux embedded in all kinds of places.”




Linux everywhere

Linux is powering everything from routers to MP3 players to printers. Find out how it got there and why it’s used.

With so many manufacturers choosing Linux, it’s not surprising that a number of specialist distributions (distros) aimed at the embedded market have come into being. You may have heard about DD-WRT [see Tutorials, p80, LXF198] and OpenWRT, which can be installed on an impressive array of different routers. Doing so may provide users with a prettier interface and extra features, but most importantly it ameliorates the situation of manufacturers being lazy about offering security updates. DD-WRT was born as a consequence of LinkSys being naughty and using customised Linux drivers and components in its WRT54G series of routers without releasing any source code, as is required by the GPL. After some sternly worded emails LinkSys complied and released the code. This was first picked up by Sveasoft, which released its customised Alchemy and Talisman firmware. When Sveasoft decided it wanted to start charging money for ‘community support', DD-WRT (www.dd-wrt.com) forked the latest release and has been offering it for free ever since.

Besides routers, another successful project is Rockbox (http://rockbox.org), which brings Linux to your MP3 player. Firmware is available for an impressive array of devices: from expensive iPods to Sandisk’s cheap and cheerful Sansa range. Again, these add new features in the form of additional or more robust file format support, improved performance and support for Unicode, as well as the ability to play Doom (see the It Runs Doom box, below). Rockbox was first introduced in 2002 by reverse engineering the (buggy) firmware for the Archos Player. Installation on all devices involves replacing the bootloader. Once this is done (which can be a little tricky since Apple products require a signed image) the Rockbox OS can be installed to the device’s main storage, which makes it easy to perform updates or add extras.

The Yocto project (www.yoctoproject.org) exists to simplify the process of getting Linux onto embedded devices. Yocto isn’t a distro in itself, but rather aims to streamline the process of making distros customised for a particular device. One of Yocto’s key components is the BitBake build tool. This is inspired by Gentoo’s package manager, Portage, and builds customised images from source using tasty Recipes. Yocto now incorporates the OpenEmbedded build system, which in turn was based on a technology called Buildroot, which is still

actively developed. Buildroot comes preconfigured for targeting off-the-shelf hardware such as the Raspberry Pi, and is used by OpenWRT. Another project to arise from OpenEmbedded (and others) is the Angstrom Distribution, which aims to be versatile and user friendly. If it needs to fit into 4MB of flash storage, then it can do so; if it needs to be more like a fully-featured distro then it can do that, too. It’s well supported on single board computers, including Intel’s powerful MinnowBoard. At the heart of Linux lies the GNU C Library (glibc). Pretty much any package on your computer will depend on this, since it contains all the low-level stuff that compiled C/C++ code relies upon. As a result it is something of a behemoth and is generally not suited to embedded computing. However, a number of diet glibc variants (including dietlibc) now exist for use in embedded settings. One of the most common is uClibc, which doesn’t even need a memory management unit to work and so can be ported to all manner of low-power

The iconic Linksys WRT54G started the homebrew firmware movement back in 2003.

“A number of specialist distros aimed at the embedded market have come into being.” devices, particularly microcontrollers. Uclibc hasn’t seen a new release since 2012, but the spin-off uclibc-ng project is under active development. Android uses the Googledeveloped Bionic C library, which is again suited to smaller, less powerful machinery. However, another major reason for its development is to isolate the proprietary parts of the Android ecosystem from copyleft issues arising from the use of the Linux kernel. Bionic is BSD-licensed, so derivative works, which by some tenuous reasoning includes Android apps built against it, are not obligated to provide source code.

It runs Doom

PrBoom is one of a few source ports of Doom (whose source code was released back in 1997) that’s available for Windows, Linux and many other platforms. It’s much more capable and customisable than the original, while still retaining compatibility with the original WAD files (so that levels and graphics from the original can be used). With Linux now running in so many unorthodox places, it has become something of a trope to get PrBoom onto these devices too. One of our favourite examples of this was Michael Jordon getting it running via the LCD screen on a Canon Pixma printer. A security flaw in the firmware update mechanism enabled the glorious hack to take place (http://bit.ly/PixmaDoomed). Beyond that, various flavours of Doom run on RockBox (although it looks rather odd in monochrome), graphical calculators, digital cameras, oh and even an oscilloscope (except this was running Windows 95 so we don’t approve).

www.techradar.com/pro

Doom running on a tiny mp3 player.

April 2016 LXF209 33


Embed Linux

Honey, I shrunk the Linux

Embedded Linux is still Linux, but in general it’s a different animal to what’s running on your desktop. Let’s see how we get from one to the other.

Just because there are so many Linux devices out there it doesn’t mean that you can plug in a keyboard, network cable and monitor and start hammering in Bash commands. For one thing, there’s nowhere to plug in such peripherals; for another, embedded Linux is all about minimalism, so there may be no support for PS/2 keyboards, framebuffers or a networking stack in the kernel. The calibre of the hardware may also be very different from your typical multi-core desktop PC—most embedded devices are based around the ARM or MIPS architectures and generally have exactly the minimum amount of RAM required for them to function. Storage can be severely restricted too, eg many home routers have less than 2MB of NVRAM, so

aspiring OpenWRT users sometimes have to use a special minified image. Remember how MS-DOS 5 shipped on three floppy disks? Well, it turns out it’s still possible to fit OSes in small places, as long as the ‘operating’ that’s expected of the ‘system’ is narrowly defined.

Let’s carry out a quick thought exercise to see how much we can slim down Linux. We’ll start with a generic GUI-free Linux install: it could be Arch or Debian or OpenELEC; it doesn’t matter—they all have an installed footprint of less than 1GB. The goal is to see what we can get away with removing before everything breaks. Starting with the low-hanging fruit, we can divest ourselves of any unnecessary packages. Many readers will have been tempted to try this on their desktops, but (particularly on Ubuntu) the results tend towards two outcomes: first,

everything breaks and, second, the package manager starts recommending re-installation of everything that was removed, as well as the installation of all manner of other stuff. These two outcomes are also not mutually exclusive. But that’s not to say that the approach, in general, isn’t sound, merely that desktop packages have complicated dependency webs and shouldn’t be antagonised. Exactly what we can remove depends on what we had to start with, and what we need: we might decide that we don’t need all the userspace tools for exotic filesystems, RAID and logical volume management. Being mindful that this is all hypothetical, we can continue in this vein. Many distros ship with Perl and Python, and both are ripe for the chop. All this might save a few hundred megabytes, and although we could happily continue this package-removal exercise we’d largely be met with diminishing returns. So now we change tack and make changes of which the package manager wouldn’t approve.

Cutting the flab

Documentation is an easy target—the /usr/share/doc and /usr/share/man directories probably won’t be overly full on a clean install, but still we have no use for them. Likewise we can get rid of any unneeded glibc locales, since internationalisation isn’t high on our list of priorities. Removals such as these would be undone whenever a package update takes place, but let’s pretend that our device won’t do that. We could, in fact, do away with package management altogether if we’re brave. For better or worse, many embedded devices are shipped with the intention that their OS will never see any software updates or security patches. For devices that aren’t going to be networked or subject to arbitrary user input this isn’t really a problem; for new-fangled Internet of Things gadgets this is downright lamentable. Be that as it may, we can continue with deleting any downloaded packages from /var/cache.
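As a rough sketch of this kind of pruning, the commands below exercise it on a throwaway staging directory rather than a live root filesystem; the paths mirror the standard ones mentioned above, and the files created first are just stand-ins so there's something to delete:

```shell
# Practise on a staging copy, never on a live system!
mkdir -p staging/usr/share/doc/foo \
         staging/usr/share/man/man1 \
         staging/usr/share/locale/de/LC_MESSAGES \
         staging/usr/share/locale/en_GB/LC_MESSAGES \
         staging/var/cache/pacman/pkg
echo "manual page" > staging/usr/share/man/man1/foo.1
echo "translation" > staging/usr/share/locale/de/LC_MESSAGES/foo.mo
echo "cached pkg"  > staging/var/cache/pacman/pkg/foo.pkg.tar.xz

# Ditch documentation and man pages...
rm -rf staging/usr/share/doc staging/usr/share/man
# ...all locales except English...
find staging/usr/share/locale -mindepth 1 -maxdepth 1 ! -name 'en*' -exec rm -rf {} +
# ...and any cached packages
rm -rf staging/var/cache/pacman/pkg/*

du -sh staging
```

On a real (glibc-based) system the same idea applies, with the caveat noted above that package updates will quietly reinstate much of what you delete.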

Fast booting

For certain systems, a rapid boot is essential – imagine having to wait 20 seconds before your defibrillator even says ‘charging’, for example. Desktop users have been spoiled by improved power management over the years, so slow startup gripes are obviated by just using software suspend, from which the machine can awake nigh on instantly. Unfortunately this isn’t a luxury that’s afforded to embedded devices—even if they could go to sleep, they might still be drawing more power than their surroundings can cope with. As a result much work has gone into speeding up boot times, and the fruit of that

34 LXF209 April 2016

endeavour is that with the right hardware and the right optimisations, Linux can quite happily boot in less than a second. There are a few easy gains to be had here (removing unused features, not verifying the kernel image, silencing boot output), a few that are analogous to the rituals carried out by desktop fast booters (using bootchartd or systemd-analyze to identify slow-starting services) and a few that require lots of research and effort (eg writing a bespoke bootloader for your hardware). Linutronix’s Jan Altenberg has provided an excellent video summary (http://bit.ly/BootOneSecond).

www.linuxformat.com

Systemd-analyze can produce these ever-so-useful bootcharts so that you can waste hours making your system boot a few fractions of a second faster.



Our Arch Linux install is occupying 28.3GB. This glut of space wouldn’t be tolerated in the embedded Linux world.

Systemd bashing never seems to go out of fashion, but for embedded devices it can be overkill. We’ll still need a nominal init system and we might, at a pinch, still need a service manager, but we don’t need all the other things that Systemd does. So we could shoehorn a homebrew init on there and lose 30MB of Systemd. We could go further and replace the init system, udev, Bash and the core Linux utilities with Busybox. We could go further still and do away with a conventional init altogether, and have a script that boots straight to the device’s main application. By this stage our franken-Linux barely resembles its origins, so we may as well go straight for the jugular and start messing with the kernel. Again, this is hypothetical stuff—we probably don’t want the whole compiler toolchain installed on our OS, so our optimised kernel would have to be crafted and (cross) compiled on another machine.
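To illustrate that last idea, a boot-straight-to-the-app init can be a few lines of shell. The sketch below writes such a script to a local file and syntax-checks it; the mount points are the usual virtual filesystems, while /usr/bin/myapp is a hypothetical stand-in for your device's main application. On a real device this script would be installed as /sbin/init (or passed to the kernel with init=):

```shell
# Write a minimal init sketch to a local file for inspection
cat > tiny-init.sh <<'EOF'
#!/bin/sh
# Mount the virtual filesystems the kernel and userspace expect
mount -t proc proc /proc
mount -t sysfs sys /sys
mount -t devtmpfs dev /dev
# Hand PID 1 over to the device's main application (hypothetical path)
exec /usr/bin/myapp
EOF
chmod +x tiny-init.sh

# Syntax-check it without running anything
sh -n tiny-init.sh && echo "syntax OK"
```

Using exec matters here: the application replaces the shell as PID 1 rather than running as a child, so no stray shell process lingers.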

Trimming the kernel

Most distros package their kernels with all supported drivers compiled as modules—if a piece of hardware is detected, the relevant module is loaded. But if we know that we’re never going to change our hardware, then we can happily get rid of all those unneeded drivers. The linux and linux-firmware packages together occupy around 200MB. Obviously, we still need a kernel, and we might need a couple of firmware files, but most of the contents of these packages are useless to us. So our first goal would be to compile a custom kernel using only the drivers we require. We could even compile the drivers we do need into the kernel, which will improve boot time. Once we’ve checked this works we can delete the packages, and we’ll have shaved 150MB off the install. We could further shrink the kernel if we don’t need any network functionality. Following these pointers, one would probably end up with a total OS footprint of around 500MB, which isn’t at all bad, but people have done better. It would be remiss of us not to mention Damn Small Linux (currently dormant) and the related Tiny Core Linux projects here, which will happily squeeze a fully working Linux into one tenth of this space, but these tend to cheat and decompress things into RAM, which wouldn’t be an option for many devices.

So let’s continue. Having messed with the userspace, the init system and the kernel we now square up to the bootloader. Grub is pretty amazing but it has no place in the embedded world. Here the tool of choice is Das U-Boot. This is designed for speed and portability and has been ported to numerous devices. Unlike for x86 PCs, where the BIOS (or latterly UEFI firmware) takes care of initially putting hardware into a sane state, U-Boot is typically the first software to be loaded on embedded systems. As such it is often subject to even more constraints, often having to be squeezed into flash storage, which in some cases can be as little as 128KB. It does, however, have the advantage, at least on ARM architectures, that the rigmarole of detecting hardware is entirely circumvented. This is because U-Boot is complemented with a so-called Device Tree, which conveniently tells the OS everything about the hardware attached to the system and the drivers required thereby. Such a thing doesn’t make sense for desktop machines, where any component can be happily replaced, typically by something which requires an entirely different driver, or moved. In fact, hardware detection and enumeration in Linux can be slightly non-deterministic—hard drives are detected in the order they are powered up, so what was /dev/sda on one boot might be /dev/sdb the next. A similar thing happens for network interfaces, which is why we use partition UUIDs and persistent device naming nowadays. Not having to worry about all this uncertainty makes things much simpler for embedded systems.

Embed Linux on a tablet see page 70
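A trimmed-down kernel configuration for a fixed-hardware device might contain fragments like the following. This is illustrative only: which symbols you can safely disable depends entirely on your hardware and workload, and the symbols shown are standard kconfig options rather than a recipe for any particular board:

```
# Build every needed driver in; no loadable module support required
# CONFIG_MODULES is not set
# Optimise for size rather than speed
CONFIG_CC_OPTIMIZE_FOR_SIZE=y
# Expose the options needed to trim the kernel further
CONFIG_EXPERT=y
CONFIG_EMBEDDED=y
# No networking needed on this hypothetical device
# CONFIG_NET is not set
# Drop debugging baggage
# CONFIG_DEBUG_KERNEL is not set
```

On a machine running the target hardware, make localmodconfig is a convenient starting point, as it disables every module not currently loaded.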




Drones, everywhere drones

One of the most exciting and popular applications of embedded Linux today is the development of hobbyist drones.

At one time, the mention of Unmanned Aerial Vehicles (UAVs) largely referred to the work of the CIA’s Special Activities Division in the Middle East. Nowadays, they’re becoming more involved in civilian life. You might hear one disturbing the peace and quiet of a Sunday stroll, or perhaps you’ve heard that they’re being used to smuggle ‘care packages’ into prisons. But they have other uses too: in parts of China couriers use drone deliveries to drop off your package at the nearest corner store. You’ll be told exactly when and where it’s arriving, which is considerably better than delivery networks here (who often don’t even seem to be able to ring a doorbell). Owing to pesky rules about what you can put in the sky, we’re unlikely to see such deliveries in the UK anytime soon. But thanks to increased demand for recreational UAVs, we certainly are going to see more and cheaper drones on the market and, soon afterwards, in the air. There are all manner of ready-to-fly drones available at your local stockist. There are lots of mini-drone models available, but let’s restrict ourselves to proper machines, ie ones designed for outdoor use and with a battery life not measured in seconds. In this
league, an entry level quadcopter will set you back around £400 while at the high-end you could happily end up forking out ten times this amount. At this extreme, you’ll find truly glorious flying machines, with 1080p gimbal-mounted cameras that can rotate through 360 degrees about three axes, all the while streaming (albeit heavily compressed) video in near real-time back to base. Such a creature will be able to fly for up to 30 minutes before returning to base, which it would do of its own accord thanks to autopilot technology. Indeed, you could even program a course before take-off by specifying some GPS waypoints.

Lifting off with Linux

Keeping an object airborne is tricky, especially when there’s a bit of a breeze. For a drone to successfully remain airborne it must be able to respond terribly quickly to changes in flight conditions. As soon as it detects any undesired rotation about a given axis, the relevant motors must accelerate immediately in order to right the aircraft, lest it tumble out of the sky. Corrections such as these may occur several thousand times per second, and are governed by the drone’s flight controller.

3DR’s Solo drone is one of the finest on the market. Besides the Pixhawk 2 autopilot, it also has a secondary Cortex A9 board running Linux for additional brainpower.

The Dronecode Project

In October 2014, under the aegis of the Linux Foundation, and in conjunction with industry leaders 3D Robotics and Yuneec, the non-profit Dronecode project was inaugurated. This is a collaborative effort with industry partners to create a shared open source platform to benefit Linux-based UAV software. Aspiring airborne software makers can be rewarded for contributing to the platform in a meritocratic manner. Currently there are over


1,200 developers working on Dronecode-affiliated projects. Dronecode aims to standardise protocols (such as MAVLink), flight code (such as APM and PX4) and ground control software (QGroundControl, APM Planner, etc) with the goal of advancing and improving access to the Linux-based drone ecosystem. As the hardware on board the vehicle becomes more capable, Dronecode aims to steer development of


advanced features, such as video streaming (the Pi 2 has greatly improved this situation), collision detection and computer vision. Such features can’t come soon enough if recent plans pioneered by the Dutch National Police come to fruition. They propose using highly-trained eagles (no, really: http://bit.ly/EagleSquad) to neutralise errant drones, and the Met have already expressed an interest in weaponising their own convocation of feathered guardians.



The flight controller is armed with the sensors required to do this—at a minimum there will be a gyroscope, but more advanced models will have an accelerometer as well. There may also be a barometer, GPS, compass and distance sensors (based on ultrasonic pulses or lidar). But we need something to process all of these inputs, so at the heart of the flight controller is a CPU. Since battery life comes at a premium these were traditionally 8-bit Arduino-based chips (or closed equivalents) that didn’t really offer much in the way of expandability. One of the earliest examples is the ArduPilot Mega (APM), which first appeared in 2007 and featured an 8MHz CPU with 8KB RAM and 256KB flash storage. APM hardware is paired with the Mission Planner software, which provides everything one could want in a ground control station, including the ability to plot courses using Google Maps. This is changing now and we are seeing much more sophisticated controllers, capable of handling more sensors and doing more advanced navigation. One such is 3D Robotics’s Pixhawk, which is powered by a 32-bit ARM Cortex M4 and runs the NuttX real-time OS. More capable single-board computers, such as the Raspberry Pi, have become ubiquitous and have also found their way into the sky. The vanilla Linux kernel was never intended for real-time applications. However, demand for this has grown, in fields as diverse as audio production and industrial CNC cutting. As a result a widely used patch, PREEMPT_RT, is available which reworks great swathes of kernel code so that it becomes a fully pre-emptible, low-latency powerhouse. The APM firmware that previously was only available for Arduino devices has been successfully ported to Linux. Thus our favourite OS can now be used to pilot an aircraft.

This is the approach used by the Erle-Brain 2 autopilot, which comprises a Raspberry Pi 2 coupled with a PixHawk Fire Cape 2.0 (PXF 2.0) board, all neatly packaged in a vibration-damped and compact form factor. The PXF board is an entirely open hardware design and provides all the required sensors, as well as exposing connections to the I2C bus in case you want to add more. It can control up to 12 outputs (motors/rotors/rudders/cameras) via PWM and has four LEDs for displaying rudimentary diagnostic information. The manufacturers provide a Debian-based image (with RT patches) as well as a more modern affair based on Ubuntu Snappy Core. Snappy is a lightweight OS designed for IoT devices and cloud installations. It differs from the traditional, package-based Linux update methodology through its use of transactional updates, in which the base system is updated as a single unit. If this update goes wrong, it’s simple to roll back to the previous image. Snappy also supports apps (‘snaps’), which can be downloaded from the Erle Robotics store (http://erlerobotics.com/blog/snappy-store). The Erle-Brain can be purchased separately or as part of the company’s drone kits. The PXF 2.0 board isn’t available individually, but those seeking a more DIY approach might be interested in Erle’s latest offering—the PXFmini. This is a shield for newer Raspberry Pis, particularly the Zero, which squeezes all the required sensors and connectors onto a 15g, fully APM-compatible board. The Navio2 Autopilot Shield does a similar job, but also includes a GPS module (which also works with the Russian GLONASS and Chinese BeiDou networks) and a power module.

Make a streaming stick see page 80

“To remain airborne it must be able to respond terribly quickly to changes in flight conditions.”

Inside the Ubuntu car

Self-driving cars are already being tested on the roads and, while current legislation forbids them from leaving the garage without a ‘meatbag’ supervisor, it may be within the lifespan of some of our younger readers that this requirement is lifted. The technology is already mind-bogglingly advanced, involving lidar, omnidirectional cameras, GPU-based neural networks, the cloud and eldritch magic. That said, the current goal is for cars to operate independently in simple highway situations, rather than negotiating unmarked crossroads and penguin crossings in some suburban backwater town. The efforts dominating the press come via the deep pockets of Google, Tesla and latterly Nvidia. However, just before Christmas 2015 a surprise announcement challenged this. It came from one George Hotz, who in a former life was geohot—jailbreaker of the original iPhone and cracker of the PlayStation 3. Hotz announced that he had modified his Honda Acura with a homebrew driver-assisted autopilot. Furthermore, at the heart of all this wizardry was

our beloved Ubuntu Linux. Hotz, through his company comma.ai, hopes to market the system to the automobile industry for around $1,000. In so doing, he hopes that the current monopoly enjoyed by Mobileye (who are currently partnered with many car manufacturers and provide Tesla’s autopilot system) will be broken. While the system is not pretty, with cables running all around the car’s interior, an array of cameras, a glovebox full of gubbins, and that purple monstrosity that is the default Ubuntu theme, Hotz claims that it’s hugely advanced: everything it knows about driving came from artificial learning, rather than being force-fed crude and approximate rules from a human.


Google’s robo-car fleet also runs Linux. Other vehicles and pedestrians appear as boxes—so far the vehicles have not taken it upon themselves to target said boxes.




Arch Wi-Fi hotspot

Get a feel for embedded Linux by creating your own Pi-powered hotspot.

To conclude this feature we’ll walk you through creating your own faux-embedded Linux device: namely, a Raspberry Pi wireless hotspot. The Pi will need a wired connection to your router; an ideal use case here is to connect Pi and router over a Powerline connection. That way we bring a wireless signal to the general vicinity of the Powerline adaptor, where perhaps your router’s wireless signal doesn’t extend. We’re going to further assume that your router provides DNS and DHCP services to your local network, which it almost certainly does if you’re in the UK or if you already have multiple devices connected to it. People connecting by a cable modem are out of luck here, since we’re going to use a simple network bridge which makes use of these services with the minimum of fuss. You’ll also need a wireless adaptor that’s capable of going into Access Point (AP) mode—one that works as normal (ie can connect to your home network) with the Pi may not be enough. You can check this by plugging the adaptor into a Linux system and running:
$ iw list

As long as AP appears in the list of supported interface modes then all will be well. If not then consider investing in a new adaptor. One with a large antenna will vastly improve your access point’s range, and you can find some info on compatibility at http://elinux.org/RPI-Wireless-Hotspot or by web searching model numbers. We’d strongly advise against the Realtek 8188EU or 8188CU chips (used in many budget wireless adaptors). It’s possible to get these into AP mode, but requires you to build an out-of-kernel driver and use Realtek’s corresponding hacked version of hostapd. See http://bit.ly/RTL8188AccessPointOnPi for details. Even if you don’t have a Raspberry Pi, then you can still play along—the hostapd software we’ll use is available in all the repositories (repos), so a PC running any flavour of Linux will do, although the steps to set up the network will differ. To mix things up, we’ll be using the ARM port of Arch Linux for the Pi. Raspbian tends to get a lot of attention but it’s always nice to see what else is out there. In the spirit of embedded systems, the initial install provides the bare minimum required to do anything. As is also the case for embedded systems, installation is non-trivial (see the instructions in the Setting up Hostapd box, below, and follow them carefully).
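The output of iw list is long, so it helps to know what you're grepping for. The snippet below runs the check against a captured sample of the relevant stanza, purely so you can see the shape of the output; on your own machine, pipe iw list itself into the same grep:

```shell
# Sample 'Supported interface modes' stanza, as printed by `iw list`
# (captured output for illustration; run `iw list` on your own system)
modes='Supported interface modes:
         * IBSS
         * managed
         * AP
         * monitor'

if printf '%s\n' "$modes" | grep -q '\* AP$'; then
    echo "AP mode supported"
else
    echo "AP mode NOT supported - consider a different adaptor"
fi
```

The anchored pattern matters: matching '* AP' at the end of a line avoids false positives from entries such as 'AP/VLAN'.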

Arch on Raspberry Pi

Why not add a display (such as the fetching Display-O-Tron 3000) to your router to show the number of connected hosts or network conditions?

You’ll need an SD card that you don’t mind wiping; a 2GB one will suffice. We’ll prepare the medium from another Linux machine using standard tools. Insert the SD card and find its device name; it will be something like /dev/sdc or /dev/mmcblk0 – we’ll refer to it as /dev/sdX. Make sure you have the right device, as it’s entirely possible to nix your hard drive by getting this wrong. All of these commands must be executed as root, so use su or sudo -i to boost your privileges. Now start fdisk:
# fdisk /dev/sdX
Type p and push Enter to list partitions. If this looks like your hard drive (the size column will give you some hints) then stop now. Otherwise type o to erase all the partitions. Now type n to create a new partition, type p to choose a primary partition, type 1 to make this the first partition (our boot partition), press Enter to accept the default first sector (usually 2,048) and then type +100M to make this partition 100MB. Repeat the process for a second primary partition (n, then p, then 2), accepting the defaults so that it fills the rest of the card. Finally press w to write the table and exit fdisk. Now we create and mount the filesystems on the card:
# mkfs.vfat /dev/sdX1

Setting up hostapd

Setting up the access point is refreshingly simple. Edit /etc/hostapd/hostapd.conf and just populate the file with:
interface=wlan0
driver=nl80211
bridge=br0


ssid=LXF Wireless
hw_mode=g
channel=11
auth_algs=3
wpa=2
wpa_passphrase=acupoftea
wpa_key_mgmt=WPA-PSK

wpa_pairwise=TKIP
rsn_pairwise=CCMP
wmm_enabled=1
ieee80211n=1
This will enable you to set up a WPA2 hotspot and then you can go


on to pair the hotspot with our wired interface via the bridge. Adjust the ssid, channel and wpa_passphrase as you see fit. The last two lines enable higher speed 802.11n connections (hardware permitting).



# mkfs.ext4 /dev/sdX2
# mkdir /mnt/sdboot
# mkdir /mnt/sdroot
# mount /dev/sdX1 /mnt/sdboot
# mount /dev/sdX2 /mnt/sdroot
Now we need to fetch the OS image. There are different versions for the Pi and Pi 2, and if you have the former you are in luck: the image is on the LXFDVD. You can copy it to the current directory as follows:
# cp /run/media/LXFDVD209/ArchHotspot/ArchLinuxARM-rpi-latest.tar.gz .
If you have a Pi 2, worry not – the image is only a 300MB download and can be acquired like so:
# wget http://archlinuxarm.org/os/ArchLinuxARM-rpi-2-latest.tar.gz
Now we decompress the root filesystem onto the SD card; remember to insert a strategic ‘-2’ if you have a Pi 2:
# bsdtar -xpf ArchLinuxARM-rpi-latest.tar.gz -C /mnt/sdroot
# sync
The last command ensures that any cached data is written out to the SD card to avoid any tears later. Then we need to move the relevant files from the root to the boot partition:
# mv /mnt/sdroot/boot/* /mnt/sdboot
Lastly we should unmount the SD card partitions and remove the temporary mountpoints we created:
# umount /mnt/sdboot /mnt/sdroot
# rm -rf /mnt/sdboot /mnt/sdroot
Now that Arch Linux is installed we can go ahead and test it on our Pi. You’ll need a spare monitor and keyboard at this point: unlike Raspbian, the Pi will not connect to the network by default. Insert your newly minted Arch SD card into the Pi and connect the Ethernet cable and wireless adaptor. Booting the Pi should show you a rainbow-coloured pattern for a few seconds, then eventually Systemd should kick in and present you with a login prompt. Log in as root with password root; it’s probably a good idea to change this now with the passwd command.
You should be able to get a quick and dirty wired connection with:
# dhcpcd eth0
Now we should update the system with:
# pacman -Syu
and then install the needed packages:
# pacman -S hostapd bridge-utils iw
The base installation is set to obtain an IP address via DHCP over the wired connection as soon as it’s plugged in, but this isn’t ideal for our scenario—we want it to have a fixed IP address so that we can easily ssh in when things go wrong. We’ll use the netctl program to make a profile for a simple network bridge connecting the wired and wireless interfaces on the Pi. This arrangement is the easiest to conceptualise, since all devices that connect wirelessly to the Pi end up on the same subnet as the devices that are connected directly to your router. Hence there’s no need to mess with iptables, packet forwarding or DNS masquerading. First run:
# ip a
to find the names of your wired and wireless interfaces. We’ll assume they’re called eth0 and wlan0 respectively; be sure to substitute if yours are named differently. To create and edit the profile, run

# nano /etc/netctl/bridge
and then populate the file with the following settings, exiting with Ctrl-X and saving the file on your way out:
Description="LXF Bridge connection"
Interface=br0
Connection=bridge
BindsToInterfaces=(eth0)
IP=static
Address='192.168.0.100/24'
Gateway='192.168.0.254'
Here we’ve assumed that your router issues IP addresses of the form 192.168.0.xx; the output from ip a will tell you if this is the case. If it’s not then use different numbers in the file. It’s wise to configure your router to avoid assigning the address you give your Raspberry Pi (here 192.168.0.100) via DHCP, but doing this is beyond the scope of this article. (Follow the instructions in the Setting up Hostapd box to get this up and running). Before we can check that it works, we need to enable our

“Your own faux-embedded Linux device: namely a Raspberry Pi wireless hotspot.”

network bridge and the hostapd service so that they start automatically:
# systemctl enable netctl@bridge hostapd
Now reboot and, fingers crossed, you should be able to see the ‘LXF Wireless’ hotspot from any device with a wireless card. Furthermore, you should be able to connect to it using the password acupoftea. If there are problems, try running hostapd in debug mode:
# hostapd -dd /etc/hostapd/hostapd.conf
This project could easily be expanded in a number of directions—the privacy-conscious may want to bake in Tor or VPN support, customers unhappy with their internet service provider may want to autotweet them whenever their link speed drops (see AlekseyP’s code at http://pastebin.com/WMEh802V) or you could add a hard drive so that it can serve files. Do what you will. LXF

You can use the pacman package manager to keep your Arch Linux and Raspberry Pi-powered Wi-Fi hotspot up to date.



Jonni Bidwell learns all about Mozilla’s recently released language with veteran Rustafarian Jim Blandy.

Diamonds and Rust



Jim Blandy

Jim Blandy (aka jimb) cut his teeth working on Emacs, The GNU Project Debugger (gdb) and various other bits of the GNU Project. He’s a founder of Red Bean, a company set up never to make any money, but also never to go away. These days he’s a software developer at Mozilla, and an ardent Rust proponent. We caught up with him at OSCON to learn all about the new language, as well as the idiosyncrasies of the old ones, plus the various challenges of teaching a new generation of rustaceans.

Interview

Linux Format: So Rust 1.0 (this interview took place in July 2015) has just been released. I have a sort of pointed question to begin with: all these new programming languages from major players – Go, Swift, Rust – why do we need them?

Jim Blandy: That’s a great question. Especially when you’re learning programming languages: once you get enough experience you realise that, for the most part, they’re all pretty much the same. Then once you’ve got the gist of one you can pick up another pretty quickly. That’s why languages like Haskell are a real joy, because they’re definitely not easy to learn. Prolog’s a good one too, because you can run your programs backwards.

LXF: Even as a mathematician I don’t really believe functional programming works. I still don’t understand monads.

JB: They’re just monoids in the category of endofunctors.

LXF: Thanks, that really clears that up. Let’s go back to Rust.

JB: The defining characteristics of C and C++ are their attitude towards undefined behaviour. In a language like Python, if you reference an element off the end of an array, it throws an exception. That exception is described in the documentation; it says that that’s what happens. So even when you do bad things, the language specifies what the response is. JavaScript is much the same. In a sense, those languages try to be total: every program that you could possibly give them has some meaning – it might not be useful, it might just be throwing an error – but everything has meaning. In C and C++, what they say is “Well

there’s some errors that we can detect efficiently or at compile time and we’ll tell you about those. But basically everything that would cost us even the smallest amount of overhead to detect is up to you to avoid.” In fact, if your program does any of these undefined things the compiler is within its rights to produce a program that does anything at all. So the demo that I opened up my talk with is a simple three-line C program that declares an array of one element, assigns a value to its third element (which it doesn’t have) and returns. When you run this it displays a weird error message saying that your password is exposed and things like that. What’s happened is that the program has overwritten the return address for main(), so when main() returns it dumps into some poor little C library and it just falls apart.

LXF: And that’s not due to any bug in the compiler or anything?

JB: Nope. This is completely legitimate behaviour according to C and C++; basically their attitude is that it’s the programmer’s responsibility to avoid undefined behaviour. We now have 30 years of history testifying how well that works out. In 1988, the Morris worm exploited a buffer overrun to break into people’s computers through the finger protocol. Since then, there’s been a steady flow of those kinds of exploits. If you look at the open-source vulnerability database they have a little chart up there and it’s a consistent 10%. These days we have SQL injections and PHP, so there’s a lot of competition, but they just keep coming and it’s not a surprise: the jury is in, the experiment has been run, humans can’t write that code, they can’t be trusted. The Google security blog had a post recently about some integer-size based vulnerabilities. Even the innocent things like ‘Well, I’m just gonna cast this 32-bit integer into a 16-bit integer, I don’t wanna waste time, just take the bottom 16 bits, I know what I’m doing’. You don’t. Or rather, you might, but the frequency with which you don’t is often enough that we power the Russian mafia. It’s bad. So what we’ve got is the situation where the systems programming languages can be trusted with low-level stuff: kernels, crypto and implementing VMs for other languages. All those systems languages are unsafe, it’s up to you to do this thing that humans can’t do.

ON 30 YEARS OF BUFFER OVERFLOWS

“The jury is in, the experiment has been run, humans can’t write that code…”

www.techradar.com/pro
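The integer-truncation mistake Blandy describes is mechanical to reproduce; a minimal sketch of our own showing what 'just take the bottom 16 bits' actually does:

```python
# Narrowing a 32-bit value to 16 bits keeps only the low 16 bits,
# so a size that was checked while wide can silently become tiny.
original = 0x0001_0003          # 65539: safely "large" before the cast
truncated = original & 0xFFFF   # what a cast to a 16-bit integer keeps

print(truncated)  # 3: a 65539-byte request now looks like a 3-byte one
assert truncated != original
```

This is the shape of many of the integer-size vulnerabilities that post described: the check and the use see different values.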

Basically every other language is well-defined: Python, JavaScript, Haskell, Ruby and everything else tries to be complete. So there's this weird dichotomy where the languages we trust with all kinds of untrusted data are the ones where it's a sword dance on an ice skating rink. Rust exists to bridge that chasm. It's a systems programming language that tells you when you break the rules. In Rust, when I declare a structure with two 32-bit integers, that is a 64-bit value with nothing else. It's just those two words; there's no metadata or dynamic anything, it's just a simple data structure. We have implemented a garbage collector for Servo (a project to port Firefox to Rust), but that's not part of the language itself. When you write a Rust program you know exactly when each value gets freed; you don't have to wait for a garbage collector to know when it's gone. So the storage management is very deterministic and easy to control, data representations are simple and direct – they're just exactly what the machine needs to do to represent those values – and operations that look cheap in the code are cheap. In C++ when you do an assignment, if that assignment happens to be a vector, that's copying that vector over to the destination. And if that vector happens to be a vector of strings, it's copying each string. So you can end up accidentally writing code that is incredibly inefficient, allocating vast amounts of memory. This isn't usually a problem, but it's not a characteristic you'd like in a systems programming language whose whole selling point is that it gives you control over the machine.

April 2016 LXF209 41

Jim Blandy

LXF: What is it that Rust does differently?

JB: Rust takes a different approach to those things. It uses moves for big, expensive values: assignment will move a value from the source to the destination and then leave the source de-initialised. The consequence of that is when you have a big structure like a vector or a hash table, at any given time that value has exactly one owner. You can pass it to a function and then maybe the function takes ownership of it, but you the caller have lost access to it – you moved the owner, but there is only ever one. You could take that big value and store it in another table; now that other table owns it and again you've lost access to it. Having only one owner makes it very clear when that value is going to go away. That's basically Rust's storage management story. Of course, it's very restrictive to have only one owner; there's a reason why people write ownership-ambiguous programs the way they do. So Rust has a thing called borrowed pointers, which means that you're using something for a little while, but you're going to give it back to its owner eventually. A borrowed pointer gives you access to a value without changing its ownership. You can compute on it or modify it, but you have to give up your borrowed references to it in a way that the compiler can tell that you're doing it. The compiler has to be able to see that all of your borrows end at a well-defined time. So there's two kinds of borrows. You can have shared references – you can have millions of those, you can hand them out to everyone so long as they all come back; they all have to expire. Or you can have a mutable borrow; you're only allowed one of these. It's a multiple-reader, single-writer pattern. If you have a mutable borrow then that's the only thing that can access that item at all; you can't even call it by its original variable name. The mutable borrow is now the sole point of access to the underlying object. By strictly segregating access that can be shared from access that can mutate, Rust can actually prove at compile time that you never have sharing and mutation at the same time.
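These properties are all observable in a few lines. The sketch below is ours, not from the interview, but it compiles on any recent Rust and checks the flat representation, a move, and both kinds of borrow:

```rust
use std::mem::size_of;

// Two 32-bit integers and nothing else: no header, no metadata.
#[allow(dead_code)]
struct Pair {
    a: u32,
    b: u32,
}

// Taking a Vec by value moves it: the caller loses access to it,
// and the vector is freed deterministically when consume() returns.
fn consume(v: Vec<i32>) -> usize {
    v.len()
}

fn main() {
    assert_eq!(size_of::<Pair>(), 8); // exactly one 64-bit value

    let v = vec![1, 2, 3];
    let n = consume(v);
    // println!("{:?}", v); // compile error: v was moved into consume()
    assert_eq!(n, 3);

    let mut data = vec![10, 20];

    // Any number of shared borrows may coexist...
    let r1 = &data;
    let r2 = &data;
    assert_eq!(r1.len() + r2.len(), 4);

    // ...but a mutable borrow must be the sole point of access,
    // so r1 and r2 must have expired before this line.
    let m = &mut data;
    m.push(30);
    assert_eq!(data.len(), 3);
}
```

Uncommenting the `println!` after the move is exactly the kind of program the compiler rejects.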
www.linuxformat.com

If you read the Java spec and look at the hash table interface, there is a rule that says if you are iterating over a hash table, the only thing that's allowed to modify that hash table is that iterator. If somebody else modifies that hash table your iterator will throw an exception. In classic C++ style, their response to that situation is: if you ever modify this hash table while you've got iterators on it, all of those iterators are invalid, it's undefined behaviour – it just throws it back in your face and you're responsible for maintaining this whole-program invariant, which as I've said before you can't be trusted with. So at least Java throws an exception, and other languages recognise this too. Rust goes one better than Java: it will tell you at compile time that it's possible that this situation could arise. So we're throwing away a lot of programs that you could write in other languages, and some of those programs would be fine and correct. But the nature of any static analysis – any analysis that happens at compile time and doesn't have the running program to look at – is that if it permits all correct programs, if it allows you to write all the programs that are actually OK, then it must also permit some unsafe programs. You can't exactly match the boundary between the OK programs and the not-OK programs – it's not computable. So the Rust analysis is conservative: it rejects correct programs and it always will, but it turns out that it's not that bad. Once you get used to it and the way it's seeing the world, it's actually entirely comfortable and it doesn't really forbid you from writing much at all.

LXF: I guess there's an analogy with Gödel's incompleteness theorem here, in that you can have completeness or consistency, but not both.

JB: That is exactly what it is. But the borrow checker is something that will improve over time. Any time that we can see a way to improve that analysis so that it will allow more correct programs then we'll do that, but we have to be sure that it's sound.
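Python behaves like Java here, which makes the contrast with Rust's compile-time rejection concrete; a small sketch of our own:

```python
# Mutating a dict while iterating over it is caught at run time:
# like Java's fail-fast iterators, Python throws rather than
# carrying on with a corrupted traversal.
counts = {"a": 1, "b": 2}

try:
    for key in counts:
        counts["c"] = 3  # modifying the dict mid-iteration
except RuntimeError as exc:
    print("caught:", exc)  # caught: dictionary changed size during iteration
```

Rust moves the same check to compile time: code that mutates a collection while a borrow from an iterator is still live simply doesn't build.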
The experience of working in Rust is freaking amazing, though. Once you get past that – I should say learning curve, but it's more like a period of suffering; maybe purgatory is a better word. Anyway, you have to climb that mountain for a few weeks, but once you get there the view is good. A friend of mine came to me and showed me this algorithm to insert a value into a binary tree, and he said: 'I haven't checked it, but it doesn't crash'. And you see this code, it's finding values and replacing nodes and walking down and things like that. So in C or C++ you'd be thinking: 'There's gotta be a dangling pointer here somewhere'; you'd have to really read it thoroughly to check.

LXF: What about multithreaded programs?

JB: There it's really exciting. Matthias Felleisen – who is a professor at Northeastern University, one of the big people in the PLT group that produced Racket, DrScheme and a bunch of other things, like a really cool functional-reactive system called Father Time. His other big focus is computer science pedagogy and he's done a lot of work on how to actually teach computer science to people so that they get it. He was on a panel just two weeks ago with Gilad Bracha which was about 'What should we be studying, where's computer science going?'. There was a lot of dumping on types. Anyway, Matthias told this story about a class that he taught: he took a whole bunch of students who had never written parallel code before and he taught them how to do just that in Rust. Matthias is a serious iconoclast, he's so mean, so he'll put down anything that he doesn't think is good – he's sincere, but he's brutal. He said that people found it difficult for the first two weeks because progress was slow and students found the error messages confusing. But after those two weeks they didn't have any problems; their programs compiled and ran and did exactly what they were supposed to do. That I find really exciting: the idea that you can use concurrency as a technique of first resort instead of a technique of last resort. Which is the polar opposite of how we do things now, where you optimise your single-threaded code to within an inch of its life and then, when you can't squeeze out another second, turn to concurrency.

LXF: Quite often people like to divide programming languages into fast languages and slow ones. The fast ones are C, C++ and Java, and the slow ones seem to be everything else. On which side does Rust live?

JB: Rust is a fast language. Obviously we're building on all the amazing work that the Clang people have done. Clang itself is a great front-end, and any C++ front-end that's usable is an accomplishment, but Clang is especially good and it is supported by the optimisation infrastructure that LLVM has. Not only that, the LLVM people are just fanatics about software architecture. It would be very easy to have a back-end that's specific to your front-end – I think that's probably a common case – but LLVM is very nicely isolated, which means that languages like Rust benefit from all that work.

ON RUST'S THREAD-SAFETY

"You can use concurrency as a technique of first resort instead of last resort"

LXF: How hard would it be for an amateur programmer, say someone familiar with things like Python and PHP, to pick up Rust?

JB: It's tricky to say; my sort of workflow looks like this. I start a Rust program; it looks really simple and clean but it doesn't compile. Then I argue with the compiler for fifteen minutes or so and it goes through a process of being really hairy, and the code starts to look awkward. Then I realise that I've rearranged things, so I start to take some of this complexity out, picking the hair out, and then I'm done and it's actually beautiful. So the end result often looks like a nice Python program. It's like I just made a dictionary and then stuck some stuff in a dictionary and then I iterated over it and got it back out. It does type inference, so you only have to use types at function boundaries; you don't have to use them inside functions, generally, it just figures them out.

So in that sense I think it'd be great for people coming from the dynamic languages. At the same time I would worry about that intermediate step where everything's in pieces on the floor. Getting through that I'm using everything that I know, and that might be a sizeable obstacle for the less experienced. The difficulty of learning the language might turn out to be Rust's biggest weakness. For all the C++ meta-template programmers out there and the Haskell hackers I think it'll be no problem, but that's a very small population.

I am working on how best to explain how Rust works. In the presentation yesterday there were some parts that didn't go so well, but the parts that did go well were the hardest parts. In particular the ownership, moving and borrowing step – I think I found a good way to explain that. There will be a book coming out with O'Reilly at the end of this year, so if all goes well, hopefully it will have solid explanations.

What Rust shares with Python is safety. When you're writing a Python program you don't get weird corruption where the system starts to behave in strange ways you can't understand; at worst you get an exception. It's very friendly, it just tells you what went wrong. And Rust shares that quality: it tells you what's going on – you don't end up in Tumbolia. That phrase comes from Gödel, Escher, Bach, where he says 'You've violated the rules and suddenly you're in Tumbolia and you have no idea what anything means anymore'. So there's no Tumbolia in Rust. That I think will be very welcoming to people. Thinking about types and writing them out: some people already think that way and they'll be fine. Some people never think that way – I don't know how they program – but they're going to find it difficult. The great thing is that although it's a low-level, close-to-the-metal language, you don't end up worrying about the bits and bytes. It's not like you think: 'Oh, I've overflowed and now my size is wrong and I've crashed the program'. You get exceptions when you convert a 64-bit value to a 32-bit value and it doesn't fit. So in some circumstances it will be welcoming, and in other senses there will be serious challenges. I don't want to say something's hard, as a flat statement, because it's about how everything's taught, so we just have to wait for there to be good teachers. That's what I want to do. LXF



BBC micro:bit

The micro:bit board measures 52mm by 42mm and packs a lot of features, such as an accelerometer, compass and Bluetooth LE. At the bottom edge of the board is a series of GPIO pins.

BBC micro:bit

Back to the future of computing! Les Pounder gets hands on with the new BBC micro:bit, a device aimed at the next generation of makers.

There seems to be an influx of devices that all claim they will change the future of computing education. From the Arduino to the Raspberry Pi, we now have a plethora of choices for getting started with physical computing, but in late 2014 there was a rumour that the BBC (British Broadcasting Corporation) was keen to emulate the success of the UK's 1980s coding scene, which was led by the BBC Micro, its own microcomputer. In 2011, there were a number of reports from education advisers and Members of Parliament that the UK was falling behind in computer science and that many children believed that taking up relevant roles meant relocating to another country. This was particularly prevalent in the gaming industry, where the UK maintains sixth position, but with a decline in the number of developers originating from the UK. So the education sector stood up and listened when the BBC announced in 2015 that it would be partnering with a number of hardware, software and service suppliers to deliver a single-board, micro-controller-powered platform.

The goal of the micro:bit project isn't to introduce just another single-board computer/micro-controller; rather, the goal is to disrupt by putting a device into the hands of children and teachers that has zero cost but maximum impact. The disruption also moves into our view of the connected world, with the Internet of Things (IoT). The micro:bit is designed to work with mobile devices to spur on classroom creativity. With the micro:bit anyone can make their own smart device, and with very little code. The micro:bit also has projects and documentation that have been designed to fit into the UK's computing curriculum. The hope of all the project partners is to rekindle the successes of the 1980s and help children to learn how rewarding computer science can be, with the aim of generating new job roles in the future.

"Puts a device into the hands of children and teachers that has zero cost but maximum impact."




Micro Python: Get started Set up a light show with your micro:bit.

For this project you will need to connect your micro:bit to a Linux machine or Raspberry Pi. You'll also need an LED, a 220 Ohm resistor (red-red-brown-gold) and three crocodile clips. In physical computing, the equivalent of programming's "Hello World" is controlling a Light Emitting Diode (LED). This helps to test that the board and components are working correctly before we progress to something more challenging.

We start by downloading the Python software known as Mu: http://bit.ly/LXF209-Microbit-Software. Ensure that you have the latest version of the software for your OS. You will need to make the downloaded file executable; in most Linux distros you can right-click on the file, select Properties and from there make the file executable. If you prefer the terminal then you can do the following:

$ chmod +x <insert name of file>

Now open the Mu application by double-clicking on the downloaded file. The Mu editor looks basic, but is constantly being worked on by members of the Python Software Foundation. You can see a row of buttons across the editor, but the ones to pay particular attention to are Flash and Repl. Flash is used to flash your code on to an attached micro:bit, and Repl (Read Eval Print Loop) is used to interactively hack with the

micro:bit. We shall start our project by writing a few lines of code that will flash an LED on and off with a half-second gap between each state. In the top window we import the entire micro:bit Python library: from microbit import * . Now we create an infinite loop, which will contain the code that we wish to run: while True: . Inside the loop, the next lines of code are indented, as per Python's requirement, to show that they are inside the loop. First we change the state of pin 0, which is currently turned off. To turn on the pin we set it to 1. Then we sleep for half a second before turning pin 0 off, using 0, then sleep for half a second again to create a seamless loop. You'll notice that we didn't import the time library, but we're still managing to use the sleep function. This is because Micro Python has its own sleep function inside the micro:bit library that uses milliseconds for duration, with 500 equalling half a second.

from microbit import *

while True:
    pin0.write_digital(1)
    sleep(500)
    pin0.write_digital(0)
    sleep(500)

With the code written, it's time to flash it on to the attached micro:bit. Click on 'Flash' and wait until the yellow LED on the reverse of the micro:bit stops flashing. With the code loaded on to the micro:bit, we

Using the GPIO connectors supplied on the micro:bit and some crocodile clips we can quickly build a circuit to test our board.

connect the components. Attach one side of a crocodile clip to pin 0 and the other to the long leg of an LED. Connect another crocodile clip to the ground (GND) of the micro:bit and then attach the other end to one leg of a resistor. To the other resistor leg attach another crocodile clip, and then attach it to the short leg of the LED. You should now see the LED flash. If not, then check your wiring is correct by removing the crocodile clip from pin 0 and attaching it to 3V. If the LED lights up

Anatomy of the micro:bit (the board measures 5cm by 4cm)

Front: 25 individually programmable LEDs; 2 programmable buttons; 3 digital/analogue input/output rings; power and ground rings; 20-pin edge connector.

Back: Bluetooth smart antenna; battery connector; micro-USB connector; accelerometer and compass; 32-bit ARM Cortex-M0 CPU with Bluetooth Low Energy.




Fun with the Accelerometer Shake, shake, shake – shake your micro:bit.

For this project, you'll only need a micro:bit, which comes with an accelerometer; these are commonly used in mobile devices to determine the orientation of the device and screen rotation. In this tutorial we'll be using the micro:bit as an input device that reacts to gestures. We begin in the Mu editor, and as always our first line imports the micro:bit library: from microbit import * . Now we use an infinite loop to contain the code that we wish to run, like this: while True: . The accelerometer embedded in the micro:bit has its own series of functions that can be used to query the position of the board in space. We can detect the full x, y, z co-ordinates of the board for fine control, but there are times when we don't need such precision, and that's where gestures provide a quick solution. Gestures are motions that have been predetermined, e.g. shaking, tilting and flipping the micro:bit. We can use these gestures for simple input

Mu is currently the only offline editor available for the micro:bit.

The micro:bit can detect predefined gestures with its accelerometer and report them using the Python library.

and in this project we're going to use a conditional statement that will check to see which gesture has been made and react accordingly. The first test is to see if the accelerometer has been tilted upwards. The output from this test is either True or False; if True, then the code (indented below it) is activated, clearing the screen before scrolling text across the micro:bit LED matrix. Last, there's a 0.1-second pause before the code checks again.

if accelerometer.was_gesture('up'):
    display.clear()
    display.scroll("Dogs cannot look up")
    sleep(100)

Our next test is called Else…If, which is abbreviated in Python to elif . In this test we check to see if the micro:bit has been pointed towards the floor.

elif accelerometer.was_gesture('down'):
    display.clear()
    display.scroll("I feel sick")
    sleep(100)

We repeat this process for two more tests that cover tipping the micro:bit left and right; the syntax for this code is identical to the down gesture, but refers to 'left' and 'right' . Our last gesture is a shaking motion, which when detected will trigger the final section of code to be activated:

elif accelerometer.was_gesture('shake'):
    display.clear()
    display.scroll("Stop shaking me!")
    sleep(100)

With the code now complete, save your work and then click on Flash to flash the code on to the attached micro:bit. After the yellow LED on the reverse of the micro:bit stops flashing you are ready to use the controller. Start by tipping the micro:bit forwards, backwards and side to side. Last, shake the micro:bit to test the shake gesture.

Micro:bit partners

The micro:bit hardware specification has been driven by the needs of the target audience (the children) and by its partners, such as ARM, Freescale and Nordic Semiconductor, who are collectively responsible for the CPU, accelerometer, magnetometer and Bluetooth LE. The input of these partners has changed the board from a simple micro-controller into a platform for advanced experimentation. In one package we can build a wireless controller for a camera or robot, and then the next day create a scrolling name badge. The single limitation of the micro:bit is that it only provides easy access to five GPIO pins. But this can be overcome with an add-on board from Kitronik, another partner in the micro:bit project, which breaks out the full GPIO for use in projects. More information on this and a range of products can be found at the official site: http://bit.ly/MicroBitAccessories.

The overall design of the micro:bit has been the responsibility of Technology Will Save Us, who have worked with children to create a board that's a suitable size for small hands, with large connectors. Other partners in the project include Samsung, who created a mobile application that can program the micro:bit via a Bluetooth connection. This app comes with three projects to test; the camera selfie project is rather fun, and useful as a remote camera trigger for possible wildlife projects. A full list of partners is here: http://bit.ly/MicroBitPartners.



Minecraft gesture controller Send Steve flying with your micro:bit.

For this project, you'll need a micro:bit connected to a Raspberry Pi, the latest Raspbian operating system and the Mu software on the Raspberry Pi (http://bit.ly/LXF209-Microbit-Software). Building upon our previous tutorial, we'll use the micro:bit as a controller for Minecraft on a Raspberry Pi. There are two parts to this tutorial: code for the micro:bit and code on the Raspberry Pi.

Let's start with coding the micro:bit by opening the Mu application. As always, we start by importing the micro:bit library: from microbit import * . We're now going to use an infinite loop to check the gesture made by the user:

while True:
    if accelerometer.was_gesture('shake'):

If the correct gesture is given, then the code will print 'shake' to the shell and clear the LED matrix on the micro:bit. It will then scroll Teleport across the display before resetting the micro:bit ready for the next gesture.

        print("shake")
        display.clear()
        display.scroll("Teleport")
        sleep(100)
        accelerometer.reset_gestures()
        sleep(100)

Save your work and then flash the code to the attached micro:bit. We now switch to Python 3, found in the Programming menu. Click on File > New Window, and in the new window click on File > Save and save the file as mb-red-gestures.py. We start the code by importing the Minecraft, time and serial libraries. The serial library will be used to communicate with the micro:bit. We also import the randint function from the random library.

The new micro:bit and the Raspberry Pi work together very well.

import serial, time
from mcpi.minecraft import Minecraft
from random import randint

Next, we create three variables: port , baud and mc . These will be used to store the port the micro:bit is connected to, the baud rate at which we need to communicate, and to shorten the Minecraft create function to enable a connection to Minecraft.

port = "/dev/ttyACM0"
baud = 115200
mc = Minecraft.create()

We now use an infinite loop to contain our code. In the loop, we use the serial library to connect to the micro:bit port, and set the baud rate, parity, data bits and stop bits before we attempt to read the data being sent over the serial connection:

s = serial.Serial(port)
s.baudrate = baud
s.parity = serial.PARITY_NONE
s.bytesize = serial.EIGHTBITS
s.stopbits = serial.STOPBITS_ONE

With the data read and stored in a variable called data , we now use a 0.1-second delay to slow the code before we convert the data into a string. Next we use the Minecraft library to get the player's current position.

data = s.readline()
time.sleep(0.1)
data = str(data)
x, y, z = mc.player.getPos()

Our last section of code looks to see if the word "shake" is in the data variable, which contains the data sent over the USB serial connection from our micro:bit. If the word is found then a message is posted to the Minecraft chat window before teleporting the player a random distance up in the air and across the map:

if "shake" in data:
    mc.postToChat("Teleport")
    mc.player.setTilePos(x + randint(-50, 50), y + randint(1, 50), z)

Save your code. Open the Minecraft application and open a world. Once loaded, go back to Python 3 and click on Run > Run Module. Return to Minecraft and now shake the micro:bit. It might take a few attempts, but you will see Teleport on your screen and micro:bit before your player is teleported across the map! LXF
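The host-side dispatch can be exercised without any hardware attached. The helper below is a hypothetical refactoring of that logic (the function name is ours, not from the tutorial), using a seeded random generator so the sketch is repeatable:

```python
from random import Random

def teleport_offset(data, rng):
    """If the serial line reported a shake, return a random (dx, dy) offset."""
    if "shake" in data:
        # sideways anywhere within 50 blocks, upwards between 1 and 50
        return rng.randint(-50, 50), rng.randint(1, 50)
    return None

rng = Random(1)  # seeded, so repeated runs behave the same
assert teleport_offset("b'shake\\r\\n'", rng) is not None
assert teleport_offset("b'up\\r\\n'", rng) is None
```

Testing the string check this way, before involving the serial port and Minecraft, makes it easier to tell which half of the project is misbehaving.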

Micro Python

In late 2013 a crowd-funded project, created by Damien George, saw a new, leaner version of Python 3 written exclusively for micro-controllers, along with a supporting board called the PyBoard. Micro Python has become quite a powerful platform, offering developers savvy with Python the opportunity to get to grips with physical computing, including on the popular ESP8266 Wi-Fi board, which will receive Micro Python in spring 2016. Using Micro Python on the PyBoard requires no extra software, and the board appears as a USB drive. Code can be written in any editor, saved to the board and run on reboot.

For the micro:bit project, the Python Software Foundation (PSF) was approached to create an implementation of Python for the micro:bit. After a series of discussions between Nicholas Tollervey, representing the PSF, and the BBC, a decision was made that enabled PSF community members, including Damien George, to have early access to the board with a view to creating the implementation. After many months of work by the Python community, we now have a strong Micro Python implementation that's constantly being improved and worked on by the community. One of the crowning achievements of this project is the Mu application that we've used here. Mu is a possible replacement for the ageing IDLE editor that's bundled with the Python language but not really meeting the needs of its end users.



Mr Brown’s Administeria

Jolyon Brown

When not consulting on Linux/DevOps, Jolyon spends his time bootstrapping a startup. His biggest ambition is to find a reason to use Emacs.

Esoteric system administration goodness from the impenetrable bowels of the server room.

Replaced by AI?

You might have noticed that AI (artificial intelligence) has gradually become one of the more visible undercurrents in the news over the last couple of years. We've got self-driving cars working through regulatory processes before they become mainstream; political elites discussing the disruption AI and robots will bring to the world economy (while being schmoozed by the technorati at Davos); and Stephen Hawking warning of the dangers AI might bring to our world (I recommend Superintelligence by Nick Bostrom as an excellent book on this topic). Of course, enlightened Linux Format readers know that what the media terms AI is usually a system employing machine learning (or 'deep learning', which is the new phrase du jour) for a specific problem space. Actual intelligence and self-awareness seem to be some way off.

Thinking in more colloquial terms, though: are we near the point where increased automation (including machine learning) might be able to put system admins out of a job? As a schoolboy in the '80s, I had to spend 15 minutes with a careers advisor. In an era where the 8-bit micro reigned supreme, I informed him that my mind was made up and that I was going to be a computer programmer. 'Why embark on such a career,' said my careers advisor, 'when in a few years machines will be programming themselves?' While he was a few years off with that prediction, DevOps is the ideal place for machine learning to augment the capabilities of system administrators. At some point we've all racked our brains as to what is actually going on with a misbehaving platform. Sometimes there are too many variables – why couldn't a machine learning system interpret these and come up with the answer? From there, it's not too big a leap for the machine to fix issues itself. jolyon.brown@gmail.com


AMD says “it’s time to open up the GPU” AMD announces plans to remove hurdles to innovation with the launch of GPUOpen.

Citing the difficulties developers face when using 'black box' APIs, AMD has released a variety of tools, code and documentation under an MIT licence at its new GPUOpen website (http://gpuopen.com). This is an initiative, AMD says, to "enable developers to create ground-breaking PC games, computer-generated imagery and GPU computing applications for great performance and lifelike experiences using no cost and open development tools and software". In an introductory blog on the site, AMD's Nick Thibieroz talked about the disparities between console and PC gaming, where proprietary libraries and tool chains prevent developers from accessing code for maintenance, porting and optimisation. However, the initiative is also focused on professional compute solutions, where the GPU has become increasingly important.

GPUOpen will be based on three principles. First, code and documentation (which has been sorely lacking in the past) will allow developers to exert more control over the hardware. Second, AMD has committed to open source software to encourage innovation and development. Third, collaboration between the company and the wider developer community will take place, with code already appearing on GitHub (under an MIT licence). For Linux, where graphics drivers have had a mixed history of being proprietary and distributed as binary only, this should see an open source graphics driver being used as the common base for both an open source and a mixed source driver. With the adoption of Vulkan (which aims to be the successor to OpenGL), Linux developers should be able to use an open, high-performance platform for whatever use they come up with. GPUOpen also has details of the compute projects AMD has developed to go up against Nvidia's CUDA in the HPC (high-performance computing) arena. Competition in the graphics card market is fierce, with Nvidia dominating market share between the two companies. AMD clearly hopes that this strategy will help bridge that gap.


Mr Brown’s Administeria

Part 2: Rancher private containers

This month Jolyon Brown delves further into the Rancher platform, looking at stress-free upgrades via its slick interface.

Last month, I looked at Rancher, an open source platform for running a private container service, ending with some RancherOS virtual machines fired up and ready to go. Such is the pace of development on this project that upgrades have been released since last issue. This gives me an excuse, though, to go into a bit more detail about how relatively easy upgrading (and downgrading) the OS element of a Rancher system is. I had three machines up and running at the end of last month, so I can SSH into one of them and verify the version of RancherOS I’m running:
$ vagrant ssh rancher-01
$ sudo ros -v
In my local setup, this returns rancherctl version v0.4.1 . This is pretty recent, but there are upgrade options available to me. I can see exactly what they are by using:
$ sudo ros os list
rancher/os:v0.4.0 remote
rancher/os:v0.4.1 remote
rancher/os:v0.4.2 remote
rancher/os:v0.4.3 remote
Now, if this were a regular machine/VM, upgrading from this point would just be a case of issuing a $ sudo ros os upgrade . This gives the choice (via a Y/N command prompt) of upgrading to the latest version available. A quick download later, plus confirmation that a reboot can take place, and everything would be complete. A side effect of using Vagrant for quick and easy spinning up of examples such as this, though, is that it unfortunately breaks the ability to do a vagrant ssh back into the VM (if Linux Format allowed the use of emoticons there would be a sad face right there). This seems likely to be because a Vagrant machine needs to have keys injected into it during creation. An easy workaround for this is to run a Vagrant box upgrade – but this needs the Vagrant environment to be destroyed and recreated. Still, the ease of upgrades in a ‘real’ environment is well worth knowing about. Downgrades, too, are possible by using the -i option to ros os upgrade (eg $ sudo ros os upgrade -i rancher/os:v0.4.1 ). This allows a quick demotion to whatever version is available.
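Collected together, the version-hopping steps look like this. It’s a sketch only: the ros tool exists solely on a RancherOS host, so the wrapper function (the name is ours, not part of RancherOS) bails out gracefully anywhere else.

```shell
# Sketch of the RancherOS upgrade/downgrade flow described above.
# 'ros' only exists on a RancherOS host, so bail out gracefully elsewhere.
ros_upgrade_demo() {
  if ! command -v ros >/dev/null 2>&1; then
    echo "ros not found: run this on a RancherOS host"
    return 0
  fi
  sudo ros os list   # show the versions available
  # Then pick one of:
  # sudo ros os upgrade                       # to latest (Y/N prompt, then reboot)
  # sudo ros os upgrade -i rancher/os:v0.4.1  # pin or downgrade to a given image
}
ros_upgrade_demo
```

Remember that on a Vagrant-managed box the post-reboot vagrant ssh problem described above still applies.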
For the rest of the tutorial, I upgraded the RancherOS version listed in my Vagrantfile to 0.4.3 and reinstalled Rancher itself. Last issue has the details for this [p48, LXF208]. Another quick note about RancherOS: it’s possible to choose different ‘consoles’ to interact with when using SSH to

connect into a VM/host running it. As well as the default Busybox-based command line, Ubuntu and Debian options are available. These are ‘persistent’, which simply means that they retain changes made across reboots, whereas the default console is more ephemeral. Using the command $ sudo ros service list shows the options available (some lines suppressed here):
disabled debian-console
disabled ubuntu-console
It’s very easy to switch: $ sudo ros service enable debian-console enables that console, and a reboot brings it into effect. Personally, I think that the ephemeral approach is preferable here; these machines are ‘cattle’ after all and not strictly meant to be interacted with in that way so often. In a true container-based infrastructure, it should be possible to cope with a host dying or rebooting and losing its config. Let the infrastructure take care of itself – scale horizontally and design your services in a way where the loss of a single node doesn’t affect the end users at all. Now, we all know that isn’t as easy as it sounds, but let’s see how Rancher helps us reach that goal. One final aside: I’m running Rancher on one of the nodes under its own control here. This isn’t ideal (not best practice) but is OK for the purposes of this column. The more astute of you will already be screaming about single points of failure. You’re right. Rancher can be run in an HA configuration, but it’s not straightforward at the moment and requires use of an external (shared) MySQL instance along with some other bits of software (Redis, Zookeeper). See http://bit.ly/RancherMultiNodes for details, but I expect/hope this will become a lot more user friendly as the project

From humble beginnings a private container empire was born. Here we can see the status of a container and even open a shell if you really feel the need.

So can I run this in Production already? No doubt there will be a few of you thinking that Rancher has the potential to solve all your container-related woes and earn you much kudos to boot. But you’ve already had the conversation with your pointy-haired boss in your head, you know, the one that goes: “What about support contracts?” and “When will this be a 1.0 release?” I got in contact with Rancher to find out their plans on this front, and the company was kind enough to send me the following details: “Rancher Labs are planning to

release v1.0 of the Rancher platform in Q1. This will be accompanied by a commercially licensed and supported version of the product, with support offered on a Standard or Platinum basis. Response times for SLAs and hours of access to telephone based engineers will be determined by the Standard or Platinum package the customer chooses to adopt. Rancher Labs already provides a full featured, web-based ticketing system that has numerous features and capabilities. The support portal is

www.techradar.com/pro

available to companies of any support tier 24x7, providing customers with up-to-date information, along with the ability to enter cases, add information to existing cases, receive information and updates, close cases, and list their currently open and closed cases, at any time. The Rancher Labs support system also provides a constantly improving knowledge base, with 24x7 web self service.” Seems fair enough to me. You can learn more from Rancher directly at www.rancher.com.

April 2016 LXF209 49


heads towards 1.0 (see So Can I Run This In Production Already? box, p49). Of course, the whole point of Rancher is to run containers. Starting one up via the web-based interface is really easy. From the infrastructure/hosts page, clicking on ‘Add container’ presents a container creation menu. From here, I can bring one up by filling in just a few options (name, description, image to use). Ubuntu 14.04.3 is the default container option here. There are all kinds of advanced options I can ignore for now. Once done, I’ll be taken back to the hosts screen, where the container is already firing up. Note that a ‘network agent’ container springs up as well – this is automatic, and is a system container created by Rancher to handle tasks such as cross-host networking and health checking. I can click on my new creation and get some nice simple graphs showing CPU, memory, network and storage usage. Even better, I can use the menu on the right-hand side of the screen to do some basic tasks such as restarting, stopping, deleting, viewing logs and opening a shell on it. Of course, this can also be done via the command line. Using Vagrant to SSH to any of my hosts, I can run something like the following, which will download and start a container (and then I can confirm it’s up and running via the ps command):
$ docker run -l io.rancher.container.network=true -itd ubuntu bash
$ docker ps
What the io.rancher.container.network=true option does here is ensure the container joins the Rancher-managed network. Heading back to my browser, I’ll be able to see this

new arrival running in the same range as my other containers (with an IP address of 10.42.x.x). Running docker inspect at the command line against the same container, though, will show it with the more usual 172.17.x.x IP address. What gives? Under Rancher’s network, containers are assigned the regular Docker bridge IP on the default docker0 bridge and also a Rancher-managed IP, which enables them to be reachable via the managed network.
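To pull the bridge address out at the command line, docker inspect’s Go-template format flag does the job. A quick sketch, assuming a host with Docker installed – the container name demo is hypothetical, as is the helper function name:

```shell
# Print a container's docker0 bridge address (the 172.17.x.x one).
# The 10.42.x.x Rancher-managed address is what the Rancher UI reports.
docker_bridge_ip() {
  if ! command -v docker >/dev/null 2>&1; then
    echo "docker not found"
    return 0
  fi
  docker inspect -f '{{ .NetworkSettings.IPAddress }}' "$1"
}
docker_bridge_ip demo 2>/dev/null || true   # 'demo' is a hypothetical container name
```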

Environmental factors

Rancher has the ability to define separate environments. This concept will be familiar to most readers, I would imagine – having development exist as a discrete set of resources and production as another is a desirable thing, I hope we would all agree. If you don’t, lose all accumulated Administeria points and head back to LXF1. Do not pass Go and do not collect your £200. If you’re in a job where this isn’t the case, and you have actual end-user data existing in this situation, I’d suggest looking for another job. But I digress. Creating environments takes but a simple couple of clicks in Rancher (the drop-down menu on the right-hand side). Starting a new one brings me back to the ‘Adding your first host’ screen I got when bringing Rancher up for the first time. Users can be assigned to different environments in this way, which is handy and keeps those pesky security people at bay. I’m only going to use the default environment here, though. At the moment I’m sure you’re thinking that this is all well and good, but a nice UI isn’t giving me anything more than a vanilla Docker setup on any old OS. Correct. Where Rancher really comes into its own, in my opinion, is

Remember when adding a load balancer meant having to go cap in hand to a network team somewhere and waiting six weeks for it to happen?

What else do I need to know? There are a few other cool features hidden under Rancher’s hood. Everything is visible via the built-in API, which is easily interfaced from any scripting language worth its salt. This is great for hooking into all kinds of other tools (eg monitoring – we’ll cover more on that next month) and might eventually be the main way in which Rancher is communicated with. Rancher can also handle custom registries for local Docker images and it’s quite likely that most companies won’t want their in-house applications loaded up to Docker Hub

50 LXF209 April 2016

or some other service, preferring them to be safe and sound on the internal network. It’s a great idea to have an automated build process that pushes Docker images up into a registry for use by Rancher. When defining services, it’s possible to mark them as ‘external’ – ie, not under Rancher control. There might be a database somewhere that containers under Rancher need to speak to, or an SMTP gateway perhaps. Rancher can have a definition for these (something as simple as a name, IP address and port), allowing


containers to find them via the built-in service discovery that Rancher provides. Finally, there are a whole host of available applications listed in the Rancher ‘catalog’ screen, which have been specifically tweaked to run in a Rancher environment. These range from databases and shared filesystems, such as Gluster, to build systems, such as Jenkins. These can be installed by a one-click mechanism – they’re basically Rancher-supplied and preconfigured docker/rancher-compose files. Very handy!
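As a taste of that API, a GET with the access/secret key pair as HTTP basic auth is all it takes. This is a sketch under assumptions: the helper name is ours, and the v1/projects path is our guess at a sensible starting endpoint – check the API tab on your own server for the exact URLs it exposes.

```shell
# Minimal sketch of calling the Rancher API with the key pair created in
# the UI (see the rancher-compose section for the environment variables).
rancher_api_get() {
  if [ -z "$RANCHER_URL" ] || [ -z "$RANCHER_ACCESS_KEY" ] || [ -z "$RANCHER_SECRET_KEY" ]; then
    echo "set RANCHER_URL, RANCHER_ACCESS_KEY and RANCHER_SECRET_KEY first"
    return 0
  fi
  # RANCHER_URL ends with a slash, so the path is given without one.
  curl -s -u "$RANCHER_ACCESS_KEY:$RANCHER_SECRET_KEY" "$RANCHER_URL$1"
}
rancher_api_get v1/projects
```

The JSON that comes back is easily consumed from any scripting language worth its salt.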



how it handles ‘stacks’. In the Rancher universe, a stack is defined as mirroring “the same concept as a docker-compose project. It represents a group of services that make up a typical application or workload”.

Stacks of containers

There are a couple of ways of interacting with these entities: the UI, of course, and also a tool called rancher-compose. This is billed as a multi-host version of docker-compose. It can be downloaded directly from the Rancher installation itself. At the bottom right-hand side of each screen is a link labelled ‘Download CLI’ (mentioned briefly last issue). Putting the resulting executable somewhere on my $PATH enables me to create configuration files (and regular readers will know how much I like version-controlled files for this kind of thing) that can stand my services up in a jiffy. It’s definitely recommended to keep rancher-compose in sync with the version of Rancher itself, by the way (ie, download the new version each time you upgrade). However, before rancher-compose can be used, some authentication needs to take place. The tool speaks directly to the Rancher API, and anyone invoking it must have a relevant secret/key combo. This is easily created, though. Within the environment I want to work with, I simply click on the API tab at the top of the screen, hit ‘Add API Key’ and up will pop a username (access key) and password (secret key). These are one-off displays – after giving it a name and a description and clicking to continue, the secret key is gone forever. The only way to reset it is to delete the API entry and recreate it. These values are intended to be exported as environment variables for use with rancher-compose (they can also be supplied as arguments):
$ export RANCHER_URL=http://server_ip:8080/
$ export RANCHER_ACCESS_KEY=<username_of_key>
$ export RANCHER_SECRET_KEY=<password_of_key>
I’m going to create a really simple stack for demo purposes here, using two files. The first is a docker-compose.yml file.
Nothing too complicated here, I’m just defining that favourite of examples, a Wordpress stack:
wordpress:
  image: wordpress:4.2
  links:
    - wpdb:mysql
wpdb:
  image: mariadb
  environment:
    MYSQL_ROOT_PASSWORD: example
The second is a really small rancher-compose.yml file:
wordpress:
  scale: 2
wpdb:
With those in place, and with my environment variables properly exported, I can bring my stack online:
$ rancher-compose -p stack1 up
There’s a lot of handy output at the command line, but the nice thing is to look at the Rancher UI and see things starting to fire up. Heading to applications/stacks will show the newly born blog platform struggling to come to life. Literally so, perhaps; during testing MariaDB complained bitterly about not having enough memory to start. In my case, it turned out to be Rancher attempting to start it on my going-against-best-practice node, which was already running Rancher. Here’s the thing though – Rancher caught this failure and moved it to another node, where it happily came online (it’s probably worth bumping the vm_mem setting in the Vagrantfile for anyone intending to follow this walkthrough, to make things easier). In my example config, I’ve defined the Wordpress application servers (two were brought up by Rancher, as requested) in a slightly odd way – no public ports are exposed, and what looks like an old version of Wordpress is installed (definitely a no-no in these days of rampaging script kiddies to leave old versions kicking around). Let’s address the ports issue first. I want to load balance my incoming traffic across these two nodes, using Rancher’s built-in facility. For the sake of this tutorial, I’m going to do this via the UI. Back at my stacks page, I can click on ‘Add Service’, then ‘Add Load Balancer’. After defining the ports and target service I want to load balance (pictured, p50), I can simply bring the load balancer online (I have to click ‘Start’ for it to actually come up). Disappointingly for me, Wordpress really seems to struggle in my test setup (I got quite a lot of 504 gateway timeouts in my experiments). Perhaps I really do need to upgrade?
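Back to the two compose files for a moment: a pair of heredocs recreates them in one go. A sketch only – it assumes the database service keeps the wpdb name used in this walkthrough:

```shell
# Write the walkthrough's two compose files to the current directory.
cat > docker-compose.yml <<'EOF'
wordpress:
  image: wordpress:4.2
  links:
    - wpdb:mysql
wpdb:
  image: mariadb
  environment:
    MYSQL_ROOT_PASSWORD: example
EOF

cat > rancher-compose.yml <<'EOF'
wordpress:
  scale: 2
wpdb:
EOF
```

Keeping both under version control makes rebuilding the stack a one-liner later.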
Heading back to my docker-compose file, I’m going to change the line reading:
image: wordpress:4.2
to:
image: wordpress:latest
Now I can show live upgrades taking place (it’s handy to have the application/stack view open here):
$ rancher-compose -p stack1 up --upgrade --pull
After a little while (the latest Wordpress image needs to be downloaded from the internet) Rancher will start up two new containers running the new image and flag the service as being in the upgraded state. At this point (and after some testing might be done) it’s possible to click ‘Finish upgrade’ to confirm the changes, and Rancher will clear up after itself, removing the old containers. Impressive! As ever, I can only give a flavour of a platform in the space we have, but I really like Rancher and think it’s one to keep an eye on. LXF
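For the record, the whole rolling-upgrade step condenses to a sed edit plus one rancher-compose call. A sketch under this walkthrough’s assumptions (env vars exported, compose files in the current directory; the wrapper name is ours):

```shell
# Sketch of the rolling upgrade: bump the image tag, then let
# rancher-compose replace the containers.
rolling_upgrade_demo() {
  if ! command -v rancher-compose >/dev/null 2>&1; then
    echo "rancher-compose not found"
    return 0
  fi
  sed -i 's/wordpress:4.2/wordpress:latest/' docker-compose.yml
  rancher-compose -p stack1 up --upgrade --pull
  # Test the new containers, then confirm with 'Finish upgrade' in the
  # UI so Rancher removes the old generation.
}
rolling_upgrade_demo
```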


Impress your co-workers and clients with your mastery of complicated upgrades. Just don’t let them see this screen which shows how easy it really is.

April 2016 LXF209 51


The best new open source software on the planet. Alexander Tolstoy lays out another stall of tasty free and open source applications, caught fresh from the internet ocean this very morning just for you.

MyPaint Moksha Knemo Widelands Duckhunt-JS

WebcamStudio KeePassX Retext ABCDE

FET

Opus

Drawing software

MyPaint Version: 1.1.2 Web: http://mypaint.org

Digital artists tend to group together and discuss each other’s work at web communities like DeviantArt and Behance. There you can notice which software they use and try it yourself. It’s a good idea to start your own amateur sketches using best practices from gifted people you admire. In the Windows world most artists use Corel Painter and ArtRage, while Linux lovers are rocking with Krita. So, where does MyPaint fit in? Well, this small program starts to matter if you’d like to make drawings and sketches in Linux and want a fast GTK-based application.

MyPaint doesn’t offer as many sophisticated features as Krita, but it’s certainly very capable for those who just want to get creative and don’t demand too much. One of the most valuable features in MyPaint is the ‘endless’ canvas, which you can drag by clicking the left-mouse button while holding down the Space key, and also zoom in and out simply by scrolling the mouse wheel.

MyPaint may look similar to Gimp, but the painting program provides specific artistic-style drawing features.

“One valuable feature in MyPaint is the ‘endless’ canvas.”

Exploring the MyPaint interface Toolbar icons They are now monochrome and much less distracting.

Left sidebar By default there are freehand options with some brushes, but you can dock anything here.

Preferences

Panel triggers

Layers

You can tell each input device exactly what it should do in MyPaint.

Buttons that control panel visibility are grouped on the right side of the toolbar.

Certain panels look as if they were taken from the Gimp.

52 LXF209 April 2016


There are floating panels along both sides of the screen, and you can turn them on or off using the respective buttons on the main toolbar. These panels look very similar to what you can find in Gimp (such as Layers), but MyPaint has a much stronger focus on artistic tools. There are different packs of brushes that mimic pencils, crayons, ink pens and charcoal, plus numerous brushes with combined effects. For each drawing tool you can change brush size, pressure, sharpness and a few other configurables. Playing around with MyPaint can be an even more satisfying experience once you get a tablet or a similar input device other than your mouse. MyPaint auto-detects all such devices and allows you to set custom behaviour for them in the application’s Preferences section (look for the Devices tab). If you’ve been a MyPaint user before, you’ll find many positive changes since the last release, which was three years ago. There’s a handy brush history pane; all panels are dockable and support tabs; new vector layers are supported; and more brushes have been added to the collection – plenty to motivate Linux graphics enthusiasts.


LXFHotPicks Desktop environment

Moksha Version: 0.2.0 Web: http://bit.ly/MokshaDesktop

Moksha is a Sanskrit word meaning ‘emancipation, liberation or release’, which fits nicely into the naming strategy of the Bodhi Linux distribution (distro) project. The distro, you may recall, is perhaps the most widely adopted Linux distro that uses the Enlightenment desktop by default. That is, it used to be: Bodhi has forked Enlightenment 17 (E17) and called it Moksha. Jeff Hoogland, one of E17’s and Bodhi’s developers, says that the fork was justified by lots of tear-downs and regressions made in E18 and E19. This has led, he says, to the Enlightenment desktop being less lightweight than before and much harder to use on older hardware. Enlightenment was also forked because certain old features (eg XEmbed-based tray icons) were discarded, in the same way that the

Mate desktop was forked from Gnome 2 and Trinity from KDE 3. Moksha is a modular project with an essential core that can run on a 300MHz CPU and 128MB of RAM, plus auxiliary modules that extend desktop functionality, eg users are advised to install moksha-pulsemixer for volume control, Places for managing pluggable storage devices or the Engage module for application launching and switching between running applications. Moksha’s developers plan to keep polishing the existing codebase while backporting some useful features from newer Enlightenment releases. The most obvious way to try out Moksha is

If you do try Bodhi Linux, your desktop doesn’t have to be green, there are dozens of themes for Moksha.

“Bodhi has forked Enlightenment 17 and called it Moksha.”

to use Bodhi Linux, starting from version 3.1. If you do try it that way, you’ll find that the distro is fast and user-friendly, and benefits a lot from being an Ubuntu-based project. You can also have a glance at a brief description of Moksha at http://www.bodhilinux.com/moksha-desktop with links to Moksha builds for Debian, Sabayon and Arch Linux. Of course, both Moksha and Enlightenment take time to get used to, but it could be the desktop you’ve been looking for.

Network tool

KNemo Version: 0.7.7 Web: http://bit.ly/KNemoMonitor

Many of us use broadband internet, which means there’s little sense in monitoring network traffic and counting megabytes. But even in 2016 there are situations where using network tools to manage local connections still matters. Some users still have pre-paid plans with traffic caps, while others want to keep an eye on their provider and check whether the promised speed is being supplied. A few years ago, when dial-up access to the internet was still common, the KPPP tool gave Linux users great power and lots of helpful feedback. Nowadays there is a more up-to-date tool called KNemo. This is a small utility from the KDE world (although it works well on GTK-based desktops too) that sits in your system tray and stores details about your network interfaces. By default the KNemo icon is just a couple of

animated emitters, but it’s possible to display something more useful by right-clicking the icon and going to Configure KNemo... > Interfaces > Icon theme. For instance, the Text theme means that KNemo will show outgoing and incoming traffic rates right in the tray, while more details with live updates are available in the Traffic tab of the main KNemo window. Another cool feature is traffic visualisation, which enables you to evaluate download or upload speed. To access it, simply right-click on the KNemo icon and select Show Traffic Plotter. The graph is updated once a second and automatically adjusts the

KNemo visualises traffic clearly and shows that we’re receiving a lot through our local wireless access point.

“KNemo shows outgoing and incoming traffic rates right in the tray.”

scale in the case of higher peaks or troughs. In many respects, KNemo packs the most in-demand network connection details into a neat interface: it enables you to quickly find out your IP as well as subnet range, broadcast address, gateway, and also various details of your wireless connection (if you have one). KNemo will be a welcome addition to any Linux system where there’s no NetworkManager tray applet for some reason. Even if there is, both tools complement each other well.

April 2016 LXF209 53


LXFHotPicks Live video mixer

WebcamStudio Version: 0.73 Web: http://bit.ly/WebcamStudio

WebcamStudio fills quite a specific niche among Linux applications – imagine being able to record video from your web camera, mix that footage with several other video files using smooth transitions and stick a nice background melody over everything! WebcamStudio has borrowed a feature set from non-linear video editing, mixed it with a desktop screen recorder and spiced it with live streaming and broadcasting – and, as you’d imagine, it takes some time to learn the basics. The application window hosts a number of areas, of which the most important is the central ‘desktop’ space, which has lots of tabs on its top border. From the tabs you can collect all the media sources you plan to use in your broadcasting, such as video clips, audio files, pictures, recorded desktop sessions and, of course, the webcam live view. Although the ‘desktop’ area is

small, this is the light table where you can arrange your sources and adjust their size and opacity etc. The lower-left panel of WebcamStudio is populated with ‘channels’. These are simply shortcuts to your source videos. Once you create a channel for each source, you can easily switch between them and play them with narration. To make the whole package more professional, you can use transitions and video effects from the Source Properties panel. You can let others adore your work by redirecting your stream to one or many of the available outputs. There’s quite an extensive list, and outputs include audio-only (resulting in a radio-like

Luckily, all those tiny buttons at the upper part of the program have helpful tooltips.

“Easily switch between each source and play them with narration.”

channel), Twitch, IceCast or another community-supported online service, or even a dumb virtual camera for testing purposes. WebcamStudio is a Java-based application and, we have to admit, there aren’t many Linux builds available right now. You can always try it on an Ubuntu-compatible distro by using ppa:webcamstudio/webcamstudiodailybuild.

Password manager

KeePassX Version: 2.0.2 Web: www.keepassx.org

Remembering the dozens of passwords for email services, forums and numerous personal accounts throughout the web can lead anyone to despair. Many people choose the easiest – and most dangerous – solution of using one password for everything. Obviously this is a bad, yet widespread, practice that erodes the whole idea of personal data security. A better way would be to create a mnemonic rule that enables long and complex passwords, but not many people use one. KeePassX is a cure for everyone who needs stronger security with all their passwords stored ready to use. KeePassX is a similar tool to KeePass 2 [see HotPicks, p59, LXF183]. Both applications share most of their features, but KeePass 2 for Linux uses the Mono framework with GTK controls, while KeePassX is written in C++ and uses Qt controls.

54 LXF209 April 2016

The general idea is to collect your passwords and other sensitive data in a single encrypted database, protected by a master password – the only character sequence that you are strongly advised not to forget! After starting KeePassX for the first time, you have to create a new database with a master password and, optionally, a key file (an XML file with a generated hash). Starting from version 2, KeePassX has gained full support for the KDBX database format and is able to use DB files from KeePass 2 as well. After you log into a database, you can start creating entries for sites that you access with the usual username and

Strengthen your security by using a trusted and encryption-savvy password manager.

“KeePassX has gained full support for the KDBX database format.”

password pairs. KeePassX enables you to specify the site URL and set password expiration dates for each entry. There’s also a stylish, configurable password generator where you can set password length and complexity. The auxiliary tabs also enable you to associate additional notes and data with an entry, including a custom icon. Entries can be grouped and sorted in a tree view in the left part of the KeePassX main window, so it’s a great way to classify your accounts and generally keep things in order.


LXFHotPicks Timetabling tool

FET Version: 5.28.4 Web: http://lalescu.ro/liviu/fet

We’ve unintentionally begun curating another series of programs on one topic in HotPicks, and that’s educational aids. In LXF205 we reviewed Intef-eXe [aka eXe-Learning, HotPicks, p54], which helps tutors test their students. This time we’re covering FET (Free TimeTabling Software), which is designed to help organise the educational process by creating timetables, an area where problems commonly occur, eg room allocation clashes, conflicting time slots, a lack of teachers and missing subjects. At the heart of FET is an original algorithm that automatically resolves timetable conflicts and adjusts all activities accordingly. FET is the kind of software that looks neat and tidy, yet it’s nearly impossible to start using it right away without reading the comprehensive user manual (http://bit.ly/FETManual), even though doing something wrong shows you informative tooltips. First, you need to create and save a configuration file on the File tab, and only after that proceed to other tabs. The Data tab has some essential entities, which are all obligatory and need to be filled in. You need to add an institution name, student groups, teachers, rooms and buildings, and link them via activities. Next, you need to define years; divide years into months, semesters or quarters; and set time constraints and assign them to previously created activities. There’s really a lot to be done before you can finally generate a new timetable, and we strongly advise you to consult the manual instead of winging it.

Devote some time to the initial set up of FET and save a lot of time in future.

“At the heart of FET is an algorithm that resolves timetable conflicts.”

But in the end you’ll get an error-free timetable, which you can view in the Timetable menu under various View variants. Each timetable is an HTML+CSS document designed primarily for electronic use, but you can also print it from a web browser (just hit Ctrl+P). The look and feel of the resulting HTML files can be adjusted in Settings > Timetables > HTML level of generated timetables.

Audio codec

Opus Version: 1.1.2 Web: http://opus-codec.org

There was a time when advanced users cared about the codec used for their local music library, generally because the focus was on smaller file sizes that didn’t compromise audio quality too much. MP3 has been king of the hill for two decades, and its rivals – mostly WMA and AAC – haven’t become as popular, even though AAC offers better quality at the same bitrate. Nowadays music has gone online, but if you want to run your own streaming server or record podcasts, you soon realise that it’s time to get back to basics. In fact, there are several top-quality audio codecs that are efficient and open source, such as Ogg Vorbis or FLAC, but once you dive into compression details, some limitations will start to annoy you. The key issue is that there are different codecs for encoding speech and music. The speech group are

optimised for low bitrates, which provides better quality for the human voice (eg Speex, AMR), while the second group shines for hi-fidelity sound (eg Ogg, AAC). The Opus codec is a smart combination of both worlds. By default, it produces a very decent result, comparable with other codecs in both quality and compression ratio. You can try it with $ opusenc input.wav output.opus , but the real deal starts when you downscale the bitrate. For instance, try comparing files produced by the following commands:
$ lame -b 16 /usr/share/sounds/alsa/Front_Center.wav ~/1.mp3

There are extra modules and 97 pages of API documentation for those who want to develop with Opus.

“Comparable with other codecs in quality and compression ratio.”

$ opusenc --bitrate 16 /usr/share/sounds/alsa/Front_Center.wav ~/1.opus
You’ll notice that the .opus file still sounds great despite being as little as 3k, which makes the codec perfect for interactive speech and audio transmission over the internet. The secret to its astonishing performance is that Opus incorporates Skype’s SILK codec for low bitrates and Xiph.Org’s CELT codec for high-quality recordings. In Linux, Opus-encoded files are supported by lots of players, including those based on VLC and FFmpeg.
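The size comparison is easy to reproduce wherever lame and opus-tools are installed. A sketch – the wrapper function is ours, and it skips itself if the encoders or the ALSA sample file are missing:

```shell
# Encode the same ALSA sample at 16kbps with both lame and opusenc,
# then compare the file sizes, as described in the text.
opus_vs_mp3_demo() {
  src=/usr/share/sounds/alsa/Front_Center.wav
  if ! command -v opusenc >/dev/null 2>&1 || \
     ! command -v lame >/dev/null 2>&1 || [ ! -f "$src" ]; then
    echo "opus-tools, lame or the sample file not available"
    return 0
  fi
  lame -b 16 "$src" ~/1.mp3
  opusenc --bitrate 16 "$src" ~/1.opus
  ls -l ~/1.mp3 ~/1.opus   # compare the resulting sizes
}
opus_vs_mp3_demo
```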

April 2016 LXF209 55


LXFHotPicks
HotGames Entertainment apps
Strategy game

Widelands Version: Build19 Web: https://wl.widelands.org

There are a healthy number of native real-time strategy games on Linux. Some focus on warfare while others focus on transport and logistics. Widelands places an economics layer over the usual struggle with other tribes and waging of wars. The game starts on a patch of land where you begin to develop your tribe. You can initially choose from three – Barbarians, Imperials and Atlanteans – and each has its own positive and negative traits. Generally, Barbarians fight well but have poor science; Imperials have cultural superiority; and Atlanteans are smart and advanced in technological terms. Playing Widelands resembles many classic strategy games, but it’s most similar to Settlers II from the

mid 1990s. The objective of the game is similar to many other strategy games: collect natural resources (stone, wood and ore); build more houses; ‘grow’ workers and soldiers; and finally attack your opponents in order to survive and become the king of the world. There are peculiarities in Widelands, some of which originate from the limited computer resources of the Settlers era, eg you can only construct buildings on specific map cells, which forces you to think carefully about how you distribute available land. Another thing to remember is roads, which are

Widelands forces you to carefully consider how best to use the terrain for building and collecting resources.


obligatory in order for your economy to work properly. Widelands lends itself to many strategies, such as rapidly building an army to physically crush opponents or racing to occupy more land to outspend them. Widelands has a very detailed help system and a helpful tutorial mode. New users are advised to examine the game’s basics as it saves a lot of effort later on.

Arcade shooter

Duckhunt-JS Version: 2.0 Web: http://bit.ly/DuckHunt-JS

There’s something amazing about seeing old classic games – which used to require a TV set, an expensive video console with wired joysticks and a video gun and, of course, a game cartridge – fit into a tiny archive of just 1MB. There’s a trend among open source developers to recreate such shooters in the form of browser games. Not only shooters, really, as we have lots of casual platformer games and even strange creatures of the past (eg search for Windows 95 in a browser) that run natively inside your Firefox or Chrome browser. In many cases this is possible thanks to the combination of Node.js and JavaScript code. Duckhunt-JS recreates Duck Hunt, an all-time classic published by Nintendo in 1984 for the NES (Nintendo Entertainment


System). As you’d expect, the goal of the game is to shoot virtual ducks that appear onscreen and score as many points as possible. Once you shoot the required number of ducks, you advance to the next round but if you don’t it’s game over. Also stolen from the classic, the hunting dog is happy to pick up your downed ducks and laugh at you when you miss. You don’t have to buy an ancient console and the NES Zapper light gun in order to enjoy this cool game, nor do you need a software NES emulator for Linux or any other workaround, as Duckhunt-JS is an exact clone of the

No real ducks were harmed in the making of this classic duck hunt review.

www.linuxformat.com

original game written in JavaScript and compatible with any modern browser. To run the game, first make sure that you have the NPM package manager for Node.js installed, then download or clone the repository from the GitHub page, unpack it and run $ npm install in its directory. To start playing you just need to open the index.html file. The game’s also configurable: you can change the number of waves and ducks etc and cheat by increasing the timer.


Text editor

Retext Version: 5.3 Web: https://github.com/retext-project/retext

Markdown has become increasingly popular among web developers and geeks in general. It’s a simple markup language that enables you to create rich formatting and neat HTML documents without writing tags by hand and, at the same time, without having to blindly trust a graphical WYSIWYG editor. Retext is a Markdown text editor with a wide set of features designed to make writing Markdown documents easier and more comfortable. It also supports reStructuredText – a similar language with advanced features such as tables and index generation. The editor looks minimalist, with a sparse white area for writing content and a modest toolbar above. It supports browser-like tabs so you can open many documents and switch between them. As soon as you know the markup syntax, you can start typing right away,

but sooner or later you’ll want to preview the result. There are different ways to do this in Retext. HTML markup with highlighted tags can be easily accessed from the Edit > View HTML Code menu. A pop-up window is shown where you can select the code and copy it to the clipboard. Note the Preview button on the Retext toolbar: simply clicking it toggles between markup and preview mode, but you can also click the arrow next to the Preview button to go with a live preview. In this case both the markup and the rendered HTML page co-exist side by side, which is very helpful if you need to watch how changes in your code affect the

Retext in action, a green tooltip helpfully displays the line number and its character count.


rendered page. By default a page is rendered using the legacy KHTML engine, but you can change it to WebKit via Edit > Use WebKit renderer. Retext enables you to export your document via a convenient File > Export menu, and not only in HTML, but also ODT and PDF formats. The project’s GitHub site offers a very useful wiki page with Markdown extensions and common tips for writing such things as maths formulas.

CD ripper

ABCDE Version: 2.7.1 Web: http://abcde.einval.com

The easiest way to listen to your favourite music nowadays is to use a web service of your choice, but the most reliable, DRM-free and time-proven method is to open a case, take a CD out and enjoy it like you used to do in the good ol’ days. Up until a few years ago CD-ripping software was extremely popular with users transferring their music from physical discs to a local music library on a hard drive. Programs such as K3b or cdparanoia still exist and work acceptably, but there’s also a more elegant solution that doesn’t have too many required dependencies and does its job without asking too many questions, and that’s ABCDE. The name actually stands for A Better CD Encoder. The idea behind this ripper and encoder is to do several things at once, such as grabbing data from the CD (all of the disc at once or per

track), encoding it (eg using Opus), doing a CDDB or MusicBrainz query to fill in tags, and giving intelligible names to files and the album directory. ABCDE is also capable of normalising the volume before compression and can be easily configured for any sort of behaviour, eg you can use ABCDE to encode a whole album into a single FLAC file with an embedded cuesheet, or play with different encoders and rip each track to its own file. ABCDE is a command-line tool, which makes it excellent for automating various routines in your shell. There’s also an impressive number of options in /etc/abcde.conf that can be

Before writing your own shell script, take a look at ABCDE for automated CD ripping and encoding.


superseded by your own customised copy in $HOME/.abcde.conf. The official project site has very helpful configuration templates that you can simply copy and paste into your setup. ABCDE works flawlessly and only needs cdparanoia, a working internet connection (for online queries) and the encoder that you want to use. LXF
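To give a flavour of that customised copy, here’s a minimal ~/.abcde.conf sketch. The variable names come from the stock /etc/abcde.conf; the values are illustrative assumptions, so check them against your own system copy:

```shell
# Minimal ~/.abcde.conf sketch – values are illustrative
OUTPUTTYPE=opus             # hand encoding to opusenc
OPUSENCOPTS="--bitrate 96"  # passed straight through to the encoder
OUTPUTDIR="$HOME/Music"     # where finished albums are filed
PADTRACKS=y                 # zero-pad track numbers: 01, 02, ...
EJECTCD=y                   # pop the tray when the rip completes
# ONETRACK=y                # uncomment to rip the disc to one file
```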



Pi user

Giving you your fill of delicious Raspberry Pi news, reviews and tutorials

PETE LOMAS Co-founder and trustee of the Raspberry Pi Foundation.

Welcome...

The Raspberry Pi was initially designed to improve ‘hard core’ computing skills that largely disappeared when the IBM PC caused the demise of home computing (as it was known back then). Being a low-cost hardware platform coupled with Linux, the Pi addressed this initial goal. However, our decision to include a 26-way GPIO connector galvanised a community of makers. They designed and built add-on hardware and support software for their own needs, and they freely publish and maintain their exploits for others to use and adapt. Today, supported by a refined and extended HAT/GPIO connector, there are hundreds of boards with supporting code modules providing a wealth of building blocks for maker and commercial projects. With Raspberry Pi you can go from bit twiddling on an I/O port to artificial intelligence and beyond. Its simple connectivity provides endless opportunities for remote control and monitoring: from tweeting bird boxes to collecting data in space! When recruiting I’m always interested in achievements outside taught subjects. Inspired maker projects show skill and determination way beyond grades on a sheet of paper. The computational thinking necessary to break down a project into manageable targets and research solutions, and then implementing and assembling that into a final product, is a primary skill that can be reused in many aspects of modern life. Do makers learn en route to completing a great project? In my opinion, unequivocally they do; after all, isn’t that half the fun?


Pi is exactly 4! How did the Raspberry Pi and its Foundation manage to conquer the world? We take a look back over our coverage and where it’s heading next.

From the back of a napkin to international hardware stardom, the Raspberry Pi has become a phenomenon. Having now glided through the 8 million sales mark it’s not only the fastest-selling PC in history, it also claims the crown of the biggest-selling UK PC in history, which trumps the Amstrad PCW. But how did the Raspberry Pi come into existence? How did it become such a success? (Hint: a small black penguin helped.) Who were the people that got it to where it is now? It has been something of a whirlwind experience and with the Raspberry Pi hitting four years old, we’re looking back over the extensive Linux Format coverage to bring you a brief history of Pi, told in large part by the people that made

the Pi in the first place. We’d like to point out that it’s very easy to take the success of the Raspberry Pi for granted. But without the grand vision of its creators; the charitable status of its Foundation; the constant development of its GNU/Linux OS; and the endless enthusiasm of its community, the Raspberry Pi would never have been so successful and could easily have found itself consigned to being another also-ran board. A measure of that success is that despite constant competition from wannabe Pi imitators; despite the existence and expansion of Arduino; despite Intel and others launching x86 stick PCs; and despite Microsoft trying to buddy it up with Windows 10, the Pi retains its best-selling status and remains a firmly Linux-oriented device.




Happy Birthday Raspberry Pi

David Braben

Braben was a believer, which is why the co-developer of Elite was on board from the start.

The device is called the Raspberry Pi. Device? Computer, really. ‘Device’ suggests singular function and intent, and this is anything but. The obvious headline is that it can run Quake III Arena, “but I think it’s fair to say it’s quite a way beyond that in terms of what it can do,” David Braben points out. “It will be able to do things that you’d consider a lot more contemporary, but these are the things available freely that we can show running easily.” Besides, that’s not the real crux of what Pi is all about. Yes, it runs on a Linux core and presently boots to the familiar LXDE, to be exact, which fits snugly into the cheaper model’s 128MB (as opposed to 256MB) of RAM – which in time will be hardware-accelerated and silky smooth. And, yes, it is being looked at by the makers of open-source media champion XBMC, and can decode 1080p video through its Broadcom GPU. So, yes, it packs a lot of power for something smaller (and almost cheaper) than the average portable USB hub. Thinking of it in those terms, though, risks slotting it into the profile of any old homebrew device. “That’s not the point,” Braben says. “There’s a huge gap between [making usergenerated content in] Halo Forge, Rollercoaster Tycoon or LittleBigPlanet, and the things at the top-end like XNA where you’ve gotta know your bananas to get engaged in it. For me, the BBC Micro crossed that gap. Actually, the bottom bit didn’t even exist back then, but it shows that there’s a will to learn ‘programming Lego’.” By being small and cheap enough to find its way into every schoolbag in this country and beyond, Raspberry Pi could potentially become a constructionist tool for all kinds of classroom

scenarios. Kids can learn by making virtual objects, directed by a mix of syllabus and whimsy, and in the process demystify the code behind things they use on a daily basis. Games, gadgets, phones and other everyday machines: all are deceptively simple, Braben says, when you look beyond the UI.

Speaking of which, much about the Pi has yet to change on that front. The version currently slated for release in December [2012] is a developer board, naked as the day it was soldered, and lacking in bespoke software. In time, though, it should boot to something more familiar to times gone by. “We’ve got something that MIT did called Scratch, which we actually showed running on a recent Newsnight piece. That sort of thing… you look at it and go, ‘Oh, I could do that.’ A PC is very daunting and very, very easy to mess up, or at least there’s the fear of that – that it’ll never, ever boot again. And it’s not entirely unfounded: you delete the wrong DLL and it’s really hard to get your PC back if you don’t know what your kid did. That’s a really big problem in schools.”

There’s no permanent storage on Raspberry Pi, just a single SD card slot that contains all it needs to boot and operate. Though Braben posits (with a grin like the Cheshire Cat) that you could run BBC BASIC solely from the CPU’s primary cache with the speed of Assembler, it’s important that a ‘bricked’ unit can be restored just by swapping in a new SD card. There’s every intention, furthermore, to actually include BASIC, “though there may or may not be a problem with those magic three letters.”

ON GETTING THE WORD OUT

Hesitant to fall back on the trendy phrase ‘crowd-sourcing’, Braben nonetheless admits that community is key to building up Raspberry Pi’s software library. With 50 units in the wild now, 10,000 ready to be bought from its website, and deals in place with “a very wide range of different people: from education to technology companies to well-meaning individuals who have great pieces of software,” the hope is that the community runs with the board – with the idea behind the board – and takes it to that place resembling Braben’s youth, where computers meant programs, and programs meant possibility.

Braben from November 2011 – EDGE #235.


The Pi line – when each Pi was ready for consumption

Pre-Pi – Cretaceous period (2006)
Eben Upton had the idea of an easily built home PC kit. Behold the Atmel ATmega644 system, which sat on a Veroboard with a block of SRAM. Running at 22.1MHz it had to drive the display too, leaving blanking periods for processing. Upton decided a SoC capable of running a general OS would be more useful, good lad! http://bit.ly/RaspPi2006Ed

Proto-Pi – May 2011
Early in 2011 David “Elite” Braben was showing off a prototype USB stick he was calling the Raspberry Pi (http://bit.ly/…PIOnB). More stick than the Pi we have today, the concept was dropped for one that enabled the device to have more connectors and the crucial GPIO pins.

Alpha Pi – August 2011
An alpha board went into production and was ready for testing in August 2011. This was used to demo the capabilities of the final Raspberry Pi build, still capable of running Debian and accelerated Quake 3. The board was slightly larger than the final design and was shown to the public in 2011 (http://bit.ly/P…).

Beta Pi – December 2011
Beta PCB designs were completed in November and finished boards rolled into the spiritual Pi HQ just before Christmas 2011. It was soon discovered there was an issue in manufacturing involving a replaced component. This caused delays in the final release boards.





Pete Lomas

Pete Lomas designed the original Raspberry Pi board. Neil Mohr talks to him about the Pi story.

Linux Format: How did you get involved?

Pete Lomas: I’d built a large FPGA [Field Programmable Gate Array] for Imperial College and went to an open day to see it running, and I sat next to a chap called Alan Mycroft, who is the Professor for Computer Science at Cambridge. He said there’s a guy at Cambridge called Eben Upton with an idea for a small board to teach programming. I thought that’s such an inspired idea and went to have a chat. Within half an hour I said I’ll design the board; send me the bits; show me what we want to do and I’ll make them on my production facility. Me and Eben are usually called Bill and Ben. Eben does most of the BBC work as he’s on secondment from Broadcom and we’re very grateful, as without their early support to let Eben get on with it, we wouldn’t have been able to do it … my company let me out, but then I own it.

LXF: Did you expect such success?

PL: It’s completely bonkers. We started off planning to make two to three thousand [Pi boards] over three years, so I foolishly said: “I have manufacturing facilities, send me the chips. I’ll design the board and I’ll make it.” That’s how it all started – I’ll make it for you as it’s such a good idea. Rory Cellan-Jones had talked to David Braben [co-creator of Elite],

ON LEARNING

another of the trustees, and he said to him ‘We’ve got this thing, do you want to come look at it?’ And it was a really early version of the Pi, but it had all the GPU functionality, and they did a YouTube clip. That got over 600,000 hits in a few weeks, so we knew we had something. Then all of a sudden it all blew up and we had expressions of interest to buy more than 200K.

LXF: The Pi seemed to appear alongside the news that coding isn’t taught in UK schools.

PL: There’s something very important about the whole process. If you have a target, it’s not necessarily about how to compute, but to achieve something else. Then all that programming comes osmotically through that process of wanting to learn.

LXF: Instead of saying learn to code, you give them an end result?

PL: It’s like a for loop: they’re all going to sit there and ask what do you need a for loop for? Why do I need to do that? But if they’re using something like Sonic Pi, then you say this is how you make music. If you want to repeat that phrase of music use one of these, it’s called a for loop, and you put the number of times you want to repeat in there. Immediately that piece of coding has a function, has a use, it has a purpose. It’s part of a component system to get you to a target of making that music better and they’ll embrace it. Then when they go on to do some coding with Python they’ve already been introduced to those concepts. So they

“You’ve got to have great insight into how code works to understand why it doesn’t.”

Lomas from February 2014 – LXF181.

have a little bit of familiarity there, it gets them started quicker.

LXF: Then you add in Minecraft…

PL: I have to say that’s a brilliant program. When I first saw it, I thought, OK, it’s just one of those games. But then I took the trouble to sit down with my son for a couple of hours and he showed me what you could do and it’s just amazing. The full ability to fabricate this digital Lego, but to add pipework, electronics, and he says he can make a cannon that fires TNT… The important thing [for the Pi] is you can go outside of the screen as it has the GPIO. You can never compete with Grand Theft Auto in terms of the visual appeal, but what it can’t do is make something move on your desk. I would say more computers exist in this embedded environment where you can’t see them than visibly in the universe.

LXF: We’ve got all these amazing gadgets, but their inner workings are hidden.

PL: You’ve got to give the kids the tools to be creative. It’s akin to saying we’ll teach the kids to write; we’ll teach them English, but then never let them write stories. We’re just going to do repetitive grammar. How mean is that?

The Pi line – 2012 to 2016

Pi Model B February 2012 The Pi Foundation planned to release two models and pre-orders opened at the end of February 2012 on the more popular Model B. This higher-cost version included an Ethernet port and an additional USB port. Unfortunately, an incorrectly sourced type of Ethernet port – the initial 10,000 units had to have the Ethernet port replaced – and accelerated compliance testing delayed its initial shipment into April 2012.


May 2012 20,000 units shipped

Pi Model B Rev 2 September 2012 Production moved to the United Kingdom and in conjunction with that change the Raspberry Pi B had a PCB revision 2 and was about to receive a memory upgrade to 512MB (in October 2012). The Rev 2 board cleaned up issues with the original design, such as component choice and mounting holes.


Pi Model A November 2012 A cut-down Model A was always part of the Pi plan, but meeting demand for the Model B meant that production of the A board was pushed right back towards the end of 2012, with the first boards arriving around December 2012. The original design had them using 128MB of memory, but the final model had the amount bumped to 256MB.


LXF: Is that an area the Raspberry Pi Foundation is looking to help with?

PL: It’s unfair to teachers to say here’s a new chunk of curriculum you’ve got to learn and digest. Where are you going to get your continuous professional development to achieve this from? We’ve got to empower them, as teachers passionately want to educate kids. I see the Pi and the computer science side of it as being another tool to get them fired up and making things. You can take a Pi into a school and show it off and engage in that way. But to integrate it into the curriculum takes a lot more work and that’s our focus now.

LXF: Does it fit into the curriculum?

PL: The Pi can fit in to meet the curriculum for sure, but equally you could use something else. We don’t want to be prescriptive; I think that’s a fatal mistake that people make. I’d much rather we had teachers coming to Raspberry Jams … and observe the children.

LXF: For IO you really don’t need a vast amount of processing power.

PL: But when you do, for instance, with the camera, that data is handled by the GPU. So that takes most of the load off the CPU. The GPU does all that for you, it dumps the frame. If you want them JPEG compressed, it’ll do that too. If you look at the chip in terms of loading, you’ve got this squitty little chip in the corner and the rest is GPU and power supply, but the combination works well. But overall, there are around 100m transistors on there.

LXF: A large part of that is the original concept of having the IO on there?

PL: It certainly helped. I suppose you could have done a lot off the USB with a lot of mangling, but the idea was to give you the lowest level. The equivalent of the “Hello World” program is to make a single LED flash, then you can expand that by putting a button next to the LED. Then an exercise I set is to make it flash like a lighthouse.
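Lomas’s lighthouse exercise splits naturally into a flash pattern and the calls that play it. A minimal sketch, assuming an LED (plus resistor) on BCM pin 18 and the stock RPi.GPIO library – the timings are invented:

```python
import time

# A lighthouse doesn't blink evenly: a short flash, then a long
# dark spell. Model that as (led_state, seconds) pairs and keep the
# replay logic separate from the hardware calls so it can be tested.
LIGHTHOUSE = [(1, 0.3), (0, 2.7)]   # 0.3s flash every 3s (assumed timing)

def play(pattern, set_led, sleep=time.sleep, cycles=3):
    """Drive set_led() through the pattern the given number of times."""
    for _ in range(cycles):
        for state, secs in pattern:
            set_led(state)
            sleep(secs)

try:
    import RPi.GPIO as GPIO   # present only on a Raspberry Pi
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(18, GPIO.OUT)  # LED on BCM pin 18 (assumption)
    play(LIGHTHOUSE, lambda s: GPIO.output(18, s))
    GPIO.cleanup()
except ImportError:
    # Off the Pi, just trace the pattern instead of driving hardware
    play(LIGHTHOUSE, lambda s: print("LED", "on" if s else "off"),
         sleep=lambda s: None, cycles=1)
```

Swapping the pattern list for a button-triggered one is the natural next step Lomas describes.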

September 2012 400,000 units shipped
January 2013 1 million shipped
April 2013 Pi camera released
November 2013 2 million shipped

Eben Upton

Back in June 2013, when the Pi was nearing two million units, we interviewed ‘Mr. Pi’ himself.

Linux Format: How difficult has it been to adapt to the success of the Raspberry Pi?

Eben Upton: I think a lot of the heavy lifting in terms of the logistics and the scale has obviously been taken on by RS and Element 14 (the two manufacturers of the Raspberry Pi).

LXF: Had you always planned to use those two manufacturers?

EU: That was the idea, originally, because even before we launched we realised we’d probably have success beyond the scale we’d ever be able to cope with, and we had enough money to build the 10,000 – we had a quarter of a million dollars … I think it was the point when we released the first SD card image for the device and we had 50,000 downloads of the buggy alpha-quality operating system, from the team of people who didn’t know whether it was going to exist yet, that was the point where we realised we were in trouble [laughs], and that we needed to think again about our model.

LXF: This is at the end of 2011?

EU: Yes, so if we’d stuck with our original model I think we would have spent the whole of last year dealing with the first day’s demand. While it felt like we spent the whole of last year dealing with the first day’s demand, we actually only spent three to four months. We certainly wouldn’t have got anywhere – we might have got to 100,000 units in our first year if we were really lucky, instead of being able to get to a million scale.

LXF: And how has the whole project changed for you on a personal level?

EU: Certainly there is a sense… I wouldn’t say

Pi Compute Module April 2014 With demand for the Raspberry Pi coming from industry, a new slimline model was needed for easier deployment in embedded-type installations. The Pi Compute Module shifted the Pi onto a memory-type DIMM board and removed many consumer ports.

Eben Upton in June 2013 – LXF173.

it’s less fun, but it’s more serious. Because there are people whose jobs depend on this. It was never the idea for there to be people whose jobs depended on this – and not people here, but the people who make them and the people who distribute them. There are significant numbers of people – probably 100 people – who owe their livelihood to the Raspberry Pi at the moment. At the volumes we’re running at now, of course, a week’s production is 30, 40, 50 thousand units, depending on the week, and you could damage

May 2014 3.14 million shipped

Model B+ July 2014 The first redesign comes through. The Pi stays fundamentally the same – the same SoC and memory – but the GPIO is given a revision, much-needed extra USB ports arrive and the price drops.

Model A+ November 2014 The whole concept of the original Pi Model A is given a revamp. The new Pi Model A+ is designed to be a cut-down consumer Pi-compatible board offering only the bare minimum ports while demanding the lowest power draw of all the Pis – though the Pi Zero now beats this model. February 2015 5 million shipped

February 2014 2.5 million shipped




the business and then people would lose their jobs. So it’s not so much that that’s not really fun, but it does focus the mind.

LXF: Did you envisage the Raspberry Pi to be both a foundation and a business?

EU: I think it’s really important that if you want to be sustainable, that’s the thing. I think this idea that charity doesn’t scale – we wanted to be a charity; we are a not-for-profit. But the money that gets made gets ploughed back in. I don’t take a salary. I’m lucky enough not to have to take a salary.

LXF: Would that ever change?

EU: I could at some point, I suppose. I do work, basically, more than full time – more than what most people would consider full time – for this organisation. But I’m still employed by Broadcom and the company has been enormously generous. I think there’s always been this suspicion that it’s Broadcom marketing, and I think that this fuels the idea. I think we’ve just been lucky with this thing, and it took a while to persuade Broadcom this was worth doing.

LXF: Does that affect whether the hardware schematics will be released?

EU: The schematics have been released but not the Printed Circuit Board (PCB), and that’s an interesting question. Would we ever release the design of the PCB? The intention has always been to release the design of the PCB; it still is. The issue is that you really can’t buy the chips.

LXF: Is that why you’re not releasing the design of the PCB?

EU: If we release the design of the PCB, the reason why you can’t build a Raspberry Pi is because you can’t buy the chips. And a certain amount of… opprobrium… there were a small number of people who, I think, were very offended that we haven’t given away all of our IP. And right now, that opprobrium is

heaped on us. Now if we were to give [the IP] away, I know where the opprobrium would end up. And I believe that Broadcom has been enormously supportive and I don’t believe it deserves that opprobrium. So for now I’m happy to sit there and absorb those brickbats.

EU: By doing a lot of software work.

LXF: And you don’t want that diluted by third parties building their own… EU: Absolutely. All that would happen is that we’d end up destabilising the foundation; the foundation’s ability to invest in education; the foundation’s ability to invest in funding open source projects for no tangible consumer benefit. There would be an ideological benefit, because we’d be able to tick a box that says, ‘Open!!’ and that would be it. And then there would be this problem that I think opprobrium would be heaped on Broadcom if it didn’t then make the chips available.

LXF: And a default installation of Raspbian, what would that use?

EU: Currently, it uses the video output path, the USB controller; stuff that was there before the ARM. The ARM made the chip 3% bigger. You’re beating up mostly on the ARM, a little bit of system infrastructure, an SDRAM controller and a few tiny peripherals. A lot of the system is asleep most of the time. But in any case, at 700MHz the ARM is still a powerful processor – it is by historical standards a pretty beefy device. Because we’re pinned in one place in our hardware, we’re doing more work on the software side to get all of the juice out of it. So we’ve spent a lot of time optimising system-level components – hopefully upstreaming optimised versions of system-level components for Linux – so, pixman. Optimised versions of things like memcpy and memset. We had an interesting debate about X acceleration. We don’t have an X accelerator; we don’t have an acceleration driver. We have a lot of components of the chip, a lot of sub-systems, which can be used to implement an X server accelerator. And it’s actually OK. Software X is OK. I’m always surprised by how OK it is. It’s the ARM moving a pixel at a time, although now we’ve done this pixman stuff it’s the ARM going ‘pixels – bang, bang, bang’. Insofar as a fairly low performance ARM can move pixels fast, we now move pixels fast. We’ve had this debate about whether we do hardware X acceleration. LXF

LXF: Have you got a Raspberry Pi 2 in mind? EU: I think it would be really sad, and probably fatal for us, if we were still shipping the same Raspberry Pi in 2016, say. I think we’ll have to

ON RASPBERRY PI 2


do something but I don’t know what that something is … The real problem is that I can imagine boards I could build at any price between $25 and $85 … I can imagine a different board that I could build at each $10 increment. None of which are currently being built. But finding one that’s actually attractive, that’s got the same kind of attractiveness of the Pi… Pi’s attractive because it’s got a really interesting price/performance tradeoff. LXF: How do you maintain the momentum?

Raspberry Pi 2

Raspberry Pi Zero

February 2015 For its third birthday the Foundation released its most powerful device to date, the Pi 2. This was a formidable mix of low-cost mini-PC and quad-core ARM processor.

November 2015 From out of nowhere the Foundation released the Pi Zero, and this latest Pi is a tour-de-force of tiny design.

Delivering all the power of the original Model B, but in a tiny package, with the lowest power draw and an incredibly low price, the Pi Zero really raises the game.

Raspberry Pi 3 February 2016 There we were pondering on when the next Pi release would be and the Pi Foundation goes and releases it for us. If you haven’t already, discover all the details in the news section on page 6! Now, when will the Pi 4 be released…


September 2015: 6.5 million shipped
November 2015: 7 million shipped

62 LXF209 April 2016

LXF: So that’s where all the effort is now? EU: With 700MHz, [the Pi] is an enormously powerful media accelerator. The chip is 97% media accelerator.

www.linuxformat.com


Pi accessories

Kano Screen Kit In his quest to create the ultimate Raspberry Pi portable hack station, Les Pounder finds a portable screen that will fit in his rucksack. In brief... A portable Raspberry Pi screen that offers HD resolution in a strong and kid-friendly chassis. The Kano ethos is to enable learners to experience ‘building’ their own computer, and this is supported by a series of workbooks written with children in mind.

New trends in Raspberry Pi accessories are like buses: you wait ages for a portable Raspberry Pi solution and then they all turn up at once. We have seen many different screens for the Raspberry Pi, including the official 7-inch screen, and the next to turn up is the Kano. This company knows packaging and marketing, and the Kano Screen Kit comes in a quality box with everything well protected and labelled. The box also includes an assembly guide written for young hackers, designed to be similar to Lego instructions, which enables users to start building as soon as the box is open. The Kano screen measures 10.1 inches diagonally and is made with Gorilla Glass, which protects it well from knocks and scrapes. The screen comes as a kit which the user is encouraged to build. The kit has a strong plastic frame into which the screen is clipped, and the frame can be laid down on a table in a similar way to an old-school desk, or stood upright like a traditional monitor. On the back of the screen is a driver board that connects it to a Raspberry Pi via a conventional HDMI cable, and attached to the driver board is a breakout cable which attaches to a screen controls board. There’s also a retention space to attach your Kano Raspberry Pi case to the screen, creating a rather nice single unit. The screen is powered via a bespoke micro USB cable that also powers your Raspberry Pi. Building the

Features at a glance

Adjustable display Provides two display positions and handy controls on the back to get your screen just right.

Works with all devices The Kano screen uses an HDMI driver board, which means it can also be used with other devices.

The screen is a balanced mix of portability and features. It’s clear to read at most angles and can be powered along with your Pi from a USB adaptor. (Keyboard not included!)

screen takes just a few minutes; the only tricky bit is attaching the driver board case to the screen.

Tough contender

We tested the screen with the latest Kanux distro and the latest Raspbian image. On both distros the initial view appeared upside down, requiring us to lay the screen down so that it looked like an old-school desk and then rotate the display 180 degrees in the OS. In Kanux this is achieved by going into the System Settings and selecting Display. For Raspbian you have to edit the config.txt file found in the boot partition and add display_rotate=2 as the last line. On both distros the desktop was then displayed at the correct size and no further configuration was necessary, but using the controls on the back of the screen we could adjust the display if needed. The screen provides no audio output (you’ll need to attach speakers or headphones to the jack on your Raspberry Pi), nor does it offer touch input. Picture quality is excellent, however, with the 10.1-inch screen running at 1,280x800, giving us 150 pixels per inch (PPI). The resulting image is bright and clear, providing an ideal platform for hacks. The Kano screen is a tough, well-built and well-specified product and it’s clear

www.techradar.com/pro

that its target market, young learners, is well catered for, as this screen can take a fair amount of punishment, unlike other screens on the market. The Kano screen is currently priced in a sweet spot: it’s not as cheap as the official screen and doesn’t have touchscreen capability, but it is built to last, and right now it’s a good alternative for users and schools when compared to other solutions, such as the Pi-Top [see Review, p61, LXF207]. The lack of audio output is an issue, especially for those using the screen with the Pi Zero, which has no headphone socket. The Kano screen is not a cheap solution, but it is one of the most robust at this price point. LXF

Verdict

Kano Screen Kit
Developer: Kano
Web: kano.me
Price: £110

Features 7/10
Performance 9/10
Ease of use 8/10
Value 7/10

The Kano Screen is robust and merges portability with usability, offering a lot of screen for a fair price.

Rating 8/10



Raspberry Pi Scratch

Scratch: Coding with lists

Les Pounder shows you how important programming concepts can be taught using Scratch on the Pi and then applied in Python.

Our expert Les Pounder

works with the Raspberry Pi Foundation to deliver training for teachers across the UK. Les blogs at http://bigl.es about robots, hacking and all things makery.

COMPATIBILITY

ALL Pis

Quick tip Building with Scratch is easy but you can create some complex code. If things get a little confusing, pull a section of code out of the main body and work on it in isolation. This is an easy way to test the code. When done, return the code to the main section and test again for compatibility.

We take them for granted, but lists are incredibly powerful tools that we use every day for anything from remembering our shopping and taking registers to just generally keeping track of the things we need to do. In programming they are used to collect data into a sequence (usually separated by commas). In this tutorial we’ll use a list in two applications. First we’ll create a simple quiz in Scratch that uses two lists to handle the questions and answers. We’ll then select the questions at random and use a method to ensure that the correct answer is selected from the answers list. If a player gets a question right then the game will reward the player, but if the answer is wrong something else will happen. This will introduce the list concept. In the boxout (see p65) we’ll convert the quiz into Python code and show you how a typed language handles lists. For this project you’ll only need any model of Raspberry Pi and the latest Raspbian distribution. All the code can be downloaded from http://bit.ly/LXF209-PiUser-Lists, so power up your Pi and log in to the desktop. We’ll start with Scratch, the popular free visual programming language, which you can find in the Programming menu, located in the top left of the Raspbian desktop under the main menu. The Scratch interface uses a series of blocks, located on the left of the screen, which are dragged to a coding area in the centre. The actions and output of the code are displayed on a stage in the top right of the screen. We’ll start our coding by ensuring that we have the cat sprite highlighted – left-click on the sprite before continuing. Now click on the Variables palette; it will be blank but there are two buttons. Click on ‘Make a list’, call the list ‘questions’, and make another list called ‘answers’. For each list we need to

As you can see, the completed project looks rather simple in structure, which should enable anyone to learn more about lists in programming.


By seeing the lists as blocks in the coding area we can quickly compare the contents of one list to another, ensuring that the correct answer is given for a question.

create a series of values; for questions we’ll create four questions and store them in the list. From the Control palette grab another ‘When Green Flag Clicked’ block. Under this block we shall add ‘Delete #1 from questions’, found in the Variables palette. Modify this block so that it reads ‘Delete All from questions’. This will reset the questions every time the game is played. Now under this block we’ll place ‘Add thing to questions’, also found in the Variables palette. Now modify ‘thing’ so that it forms a question. We will now create the answers to those questions and store them in the answers list. The process is identical to the questions, but take care: you must ensure that you’re working with the right list! We now have two lists, each containing four entries, and the first question in the questions list has a corresponding answer in the first entry of the answers list. To tie the correct answer to the question we’ll create a variable, called item#.

Creating a variable

Now go back to the Control palette and drag the ‘When Green Flag Clicked’ block to the coding area. Next, grab the ‘Repeat 10’ block and attach it underneath the new ‘Green Flag’ block. We want the code to repeat for the number of questions in our list, so in the Variables palette find ‘length of questions’. Drag this block and place it over the 10 of ‘Repeat 10’; the 10 should highlight to show that the block will fit inside, so drop it in. Inside the repeat loop we’ll drop ‘Set item# to 0’ from the Variables palette. We’ll replace the 0 with ‘Pick Random 1 to 10’, found in the Operators palette, then replace the 10 with ‘Length of questions’ from the Variables palette. So now we can set the ‘item#’ variable to a random number between 1 and the length of the list. Still inside the repeat loop, we now use the ‘Ask what’s your name? and wait’ block from the Sensing palette; this clips under the set item# block. The ‘Ask’ block is used to ask the user a question, so we need to insert a random question.



Working with lists As we have learnt in the main tutorial, lists are a powerful tool in Scratch and Python. Each item in a list has an index, a number that identifies its place. In both Scratch and Python we can remove items from a list, but remember that if you remove item 2 from a list, the following items will have their index value reduced by 1, so in the quiz we’ve created this might mean that we expect the

wrong answer to a question, which isn’t very fair. As you might expect, Scratch has a very simplified selection of tools for working with lists, but they are a great introduction for children to the concept of lists. To learn more, Python 3 has an extensive array of tools for working with lists and other data structures: https://docs.python.org/3/tutorial/datastructures.html.
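The boxout above mentions converting the quiz to Python. Here’s a minimal sketch of how the two parallel lists and the random question picker might look in Python – the question text and function names are our own, not taken from the magazine’s downloadable listing:

```python
import random

# Parallel lists, mirroring the Scratch 'questions' and 'answers' lists.
# The questions here are placeholders -- the magazine's own project files
# live at http://bit.ly/LXF209-PiUser-Lists.
questions = ["What is 2 + 2?", "What colour is Tux's beak?"]
answers = ["4", "orange"]

def check_answer(index, given):
    """True if the player's answer matches the entry sharing the question's
    index -- the same trick the Scratch 'item#' variable performs."""
    return given.strip().lower() == answers[index].lower()

def play():
    """Ask one question per list entry, picked at random each time."""
    for _ in range(len(questions)):
        # Scratch picks random 1 to length; Python lists count from 0,
        # so randrange(len(questions)) gives a valid index directly.
        index = random.randrange(len(questions))
        given = input(questions[index] + " ")
        if check_answer(index, given):
            print("Correct!")
        else:
            print("Sorry, the answer was", answers[index])
```

Call play() to run the quiz interactively. Note the off-by-one trap: Scratch indexes lists from 1, Python from 0, so a direct translation of ‘pick random 1 to length’ would skip the first entry and overrun the last.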

From the Variables palette grab the ‘Item 1 of questions’ block and drag it into the ‘What’s your name?’ space of the Ask block. Right now it will ask a fixed question, so let’s grab the ‘Item#’ block from Variables and use that to replace the ‘1’. Now we have a method to ask a random question from the questions list. With the question asked, we now use an If..Else conditional statement to test whether the answer given is correct. The If..Else condition goes inside the repeat loop. We need to write a condition to test, so drag the ‘_ = _’ block from the Operators palette and drop it into the hexagonal space next to ‘if’. Now grab the ‘Answer’ block from Sensing and drag it into the first blank space of ‘_ = _’. The ‘Answer’ block stores the answer to the question we were just asked. In the other blank box we need the corresponding answer to the question, so from the Variables palette drag the ‘Item 1 of answers’ block and place it in the remaining blank. Now replace the ‘1’ with the ‘item#’ block from Variables. We now have a completed test that will check the answer given against what we expect it to be. Next we create the actions that will occur if the answer is correct. For my project I used a ‘repeat 24’ loop inside the if condition. In the ‘repeat 24’ loop I placed ‘Turn clockwise 15 degrees’ from the Motion palette. This will rotate my sprite 360 degrees (24 x 15 degrees). Once the rotation is complete, I chose to play a short piece of audio to reward the player. To import a sound click on the Sounds tab at the top of the coding area, then click ‘Import’ and select the audio clip from the dialog box. To use the audio drag ‘Play sound <select one> until done’ from the Sound palette and attach it under the ‘repeat 24’ loop, but still inside the if condition.

If and else

We now come out of the if condition and our focus moves to the else part of the statement. We don’t need a condition to test against: if the answer is not correct, then else must be the only logical option. Inside the ‘else’ condition I chose to use another ‘Repeat 10’ loop, and inside there I used the ‘Change color effect by 25’ block from Looks to change the colour of the sprite. Outside of the ‘Repeat 10’ loop, but still inside the ‘else’ condition, we drag ‘Clear graphic effects’; this will reset our sprite to its original colour. Now we play another sound by dragging the ‘Play sound <select one> until done’ block from the Sound palette. Remember: you can change the sound by importing a new audio file. Our last step for the ‘else’ condition uses the ‘Say Hello! for 2 secs’ block in the ‘Looks’ palette. Next we grab the ‘Item 1 of answers’ block from Variables and then grab the ‘Item#’ block, also from Variables; this block replaces the ‘1’. This creates a step that prints the correct answer to the screen if the player gets an answer wrong. You can also change the look of your sprite by clicking on ‘Costumes’ just above the coding area, and change the background by clicking on the Stage, bottom right of the screen, then clicking on ‘Backgrounds’ above the coding area. I chose a school building but you can use any of the built-in backgrounds. We also added a ‘When Green Flag clicked’ block along with a forever loop that played background music for the game. Again, this is an optional extra but fun to do. That’s the code done. Remember to save your work. Click on the green flag and get ready to answer the questions. LXF

Sometimes we need to lay blocks on top of blocks and this can become tricky, but Scratch allows you to pull the blocks apart and retry just like Lego.




Get into Linux today!

Issue 208 March 2016

Issue 207 February 2016

Issue 206 January 2016

Product code: LXFDB0208

Product code: LXFDB0207

Product code: LXFDB0206

In the magazine

In the magazine

In the magazine

Stop hackers. Defend servers. Fight crime... OK, maybe not the last one, but you will learn the exploits so you can bullet-proof systems. Plus multi-booting with Grub, video encoders and Portage explained.

LXFDVD highlights

Fedora Security Lab, Manjaro KDE 15.12, Kali Light 2 and more.

Help your loved ones (and even people you don’t like) escape the Microsoft Complex. Uncover open source tools invading graphic design, plus build a Pi-powered NAS and find the best backup tools.

LXFDVD highlights Linux Mint 17.3 (Cinnamon & Mate), OpenSUSE Leap 42.1.

Give your home some smarts using our Pi-powered projects and Linux tools. Discover the amazing KDE Plasma 5, find out if Perl 6 is worth a 14-year wait, plus get the inside scoop on Pi Zero.

Issue 205 December 2015

Issue 204 November 2015

Issue 203 October 2015

Product code: LXFDB0205

Product code: LXFDB0204

Product code: LXFDB0203

In the magazine

In the magazine

In the magazine

We howl at the perfect form of Ubuntu 15.10, pretend to review lots of video players by watching our old movies and take Unity for a spin. Plus we show you how to get gaming in Linux and coding in Lua.

LXFDVD highlights Ubuntu 15.10 32-bit & 64-bit, Kubuntu 15.10 and more.

Stream it! Build the best Ubuntu media centre. Sync it! Our Roundup of the best synchronisation tools. Code it! Use Glade to design a lovely GTK interface. Er… Blend it? How Blender is taking Hollywood by storm.

LXFDVD highlights Ubuntu 15.04, Kodibuntu 14.0, Emby, OpenELEC and more.

Our definitive guide to every key Linux distro (that you can then argue over with your mates), the best filesystem for you, plus inside the Free Software Foundation, a swig of Elixir, Kodi 14.2 on a Pi and Syncthing.

LXFDVD highlights

Fedora 23 64-bit, Ubuntu 15.10 64-bit, Tails 1.7 and more.

LXFDVD highlights

Mint 17.2 Cinnamon, OpenSUSE 13.2 KDE, Bodhi 3.1.0 and more.

To order, visit myfavouritemagazines.co.uk

Select Computer from the all Magazines list and then select Linux Format.

Or call the back issues hotline on 0844 848 2852 or +44 1604 251045 for overseas orders.

Quote the issue code shown above and have your credit or debit card details ready.

GET OUR DIGITAL EDITION! SUBSCRIBE TODAY AND GET 2 FREE ISSUES*

Available on your device now

*Free Trial not available on Zinio.


Don’t wait for the latest issue to reach your local store – subscribe today and let Linux Format come straight to you.

“If you want to expand your knowledge, get more from your code and discover the latest technologies, Linux Format is your one-stop shop covering the best in FOSS, Raspberry Pi and more!” Neil Mohr, Editor

TO SUBSCRIBE

Europe? From only €117 for a year

USA? From only $120 for a year

Rest of the world? From only $153 for a year

IT’S EASY TO SUBSCRIBE... myfavm.ag/LinuxFormat
CALL +44 (0)1604 251045 Lines open 8AM-9.30PM GMT weekdays, 8AM-4PM GMT Saturdays
Savings compared to buying 13 full-priced issues. This offer is for new print subscribers only. You will receive 13 issues in a year. If you are dissatisfied in any way you can write to us to cancel your subscription at any time and we will refund you for all un-mailed issues. Prices correct at point of print and subject to change. For full terms and conditions please visit myfavm.ag/magterms.




Terminal Pick an emulator, learn the syntax and the help commands

Terminal: How to get started Nick Peers flexes his fingers and dives head first into the inky darkness of the terminal to show you how to start handling the commands.

Our expert Nick Peers

finds the terminal a safe, soothing dark cave to hide in after decades of working as a tech journalist.

The terminal is an incredibly important part of your Linux desktop. It doesn’t matter how wedded you are to point-and-click over the command line: at some point you’re going to have to dip your toe in the terminal’s dark expanse and use it. Don’t worry, though, because the terminal isn’t as scary as it might appear, and if you take the time to learn the basics you’ll discover it can be a far quicker and more effective way of getting certain tasks done. As you’d expect, a terminal effectively gives you access to your Linux shell, which means it works in exactly the same way, using the same language (Bash). This means you can do anything in the terminal you’d normally do at the command line, all without leaving the relative comfort of your desktop. That makes learning how to use the terminal – and Bash – doubly advantageous, as it gives you your first glimpse into working with the underlying Linux shell. And over the next few articles that’s exactly what you’re going to learn: how to get to grips with the terminal. We’re basing this tutorial on Ubuntu, so start by opening the Dash and typing ‘terminal’ into the search box. You’ll find the terminal of course, but you’ll also see two entries called UXTerm and XTerm. This highlights the fact that there are multiple terminal emulators you can run in order to interact with the shell. There are differences between them, of course, but fundamentally they do the same thing. For the purposes of this tutorial we’re sticking with the default terminal, which is basically the gnome-terminal

Quick tip If you’re struggling to type the right command, you can wipe all previous actions from view simply by typing clear and hitting Enter. Note this won’t affect your command history.


The --help flag can be used with any command to find out what it does, plus what arguments to use.


emulator – technically it’s emulating a TeleTYpe (TTY) session. It has all the functionality you’ll need, but both XTerm and UXTerm are worth noting because, being more minimalist tools, neither requires any dependencies to run. This means that if anything stops the main terminal from running, you can use either as a backup. As an aside, the only difference between the two is that UXTerm supports the expanded Unicode character set.

How Bash works

The Linux shell uses the Bash command language to perform tasks, and it follows a relatively straightforward syntax for each command: utility command -option . The ‘utility’ portion of the command is the tool you wish to run, such as ls for listing the contents of a directory, or apt-get to trigger the APT package management tool. The command section is where you specify exactly what you want the utility to do, eg typing apt-get install instructs the package management utility to install the named package, eg: apt-get install vlc . The -option section is where one or more ‘flags’ can be set to specify certain preferences. Each flag is preceded by

Speed up text entry

It doesn’t matter how fleet of hand your typing skills are, the command line can still be a time-consuming, frustrating experience. Thankfully, the terminal comes equipped with lots of handy time-saving shortcuts. This issue, let’s take a look at how you can easily access previously used commands and view suggestions:
Up/down arrows Browse your command history.
history Use this to view your command history.
Ctrl+r Search command history. Type letters to narrow down the search, with the most recent match displayed, and keep pressing Ctrl+r to view other matches.
Tab View suggestions or auto-complete a word or path if only one suggestion exists. Press ~+Tab to autofill your username, @+Tab to autofill your host name and $+Tab to autofill a variable.


Your first terminal commands

While it’s possible to install and manage software using a combination of the Software Center and Ubuntu’s Software & Updates settings panel, it’s often quicker to make use of the Advanced Package Tool (APT) family of tools. Here are some key ways they can be used (note the sudo use below):
$ apt-cache pkgnames Lists all available packages from the sources listed in the /etc/apt/sources.list file.
$ sudo add-apt-repository ppa:<repository name> Adds a specific Launchpad PPA repository to the sources list.
$ sudo apt-get update Gets the latest package lists (including updated versions) from all listed repositories.
$ sudo apt-get install <package> Installs the named package. This will also download and install any required dependencies for the package.
$ sudo apt-get remove <package> Use this to remove an installed package. Use sudo apt-get purge <package> to also remove all its configuration files, and sudo apt-get autoremove to remove packages installed by other packages that are no longer needed.
$ sudo apt-get upgrade Upgrades all installed software – run sudo apt-get update before running this.
Other useful apt-get commands include apt-get check , a diagnostic tool that checks for broken dependencies, and apt-get autoclean , which removes cached Deb files of packages that have been removed.

one dash (for short flags) or two (for long ones), and the most useful of all is the --help option, which provides a brief description of the utility, plus lists all its available commands and options. Take ls -l , for example: the -l flag tells the list directory tool to provide detailed information about the contents of the folder it’s listing, including: permissions; who owns the file; the date it was last modified; and its size in bytes. Utilities can be run without any commands or options – eg ls on its own provides a basic list of all folders and files in a directory – and you can also run utilities with a combination of commands and/or options.
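Putting that syntax into practice at a prompt (the output described in the comments assumes GNU coreutils, as shipped with Ubuntu):

```shell
# utility [command] [-options]: the same pattern runs through Bash
ls          # the utility alone gives a basic listing of the current directory
ls -l       # adding the -l flag gives the detailed long listing
ls --help   # --help summarises a utility's available flags and usage
```

The same shape applies to multi-part commands such as apt-get install vlc : apt-get is the utility, install the command and vlc the argument.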

Restricted access

Open the terminal and you’ll see something like this appear: username@pc-name:~$ . This indicates that you’re logged on to the shell as your own user account. This means you have access to a limited number of commands – you can run ls directly, for example, but not install a package using apt-get , because that command requires root access. Root access is achieved in one of two ways. If you’re an administrative user, as the default user in Ubuntu is, you can precede your command with sudo , eg sudo apt-get install vlc . You’ll be prompted for your account password, and then the command will run. You should find that you can run more sudo -based commands without being re-prompted for your password (for five minutes) while the terminal is open. On some distros you can log on to the terminal as the root user with su – you’ll be prompted for the root password, at which point you’ll see the following prompt: root@pc-name:~# . Once logged in, you can enter commands with no restrictions. We recommend you use sudo rather than this approach, and if you’re running Ubuntu you’ll find su won’t work because the root account password is locked for security reasons. When installing some distros, or adding new users to Ubuntu, you may find your user account isn’t added to the

The apt-cache tool can also be used to search for specific packages or reveal a package’s dependencies.

sudo group by default. To resolve this, you need to open the terminal in an account that does have root access (or use the su command if supported) and type sudo adduser <username> sudo . You can also add the user to several groups at once with usermod , listing all the groups you wish to add, eg: sudo usermod -a -G adm,sudo,lpadmin,sambashare <username> . Another handy tool is gksudo, which allows you to launch desktop applications with root privileges. It’s of most use when you want to use the file manager to browse your system with root access: gksudo nautilus . Make sure you leave the terminal open while the application is running, otherwise it’ll close when the terminal does. When you’re done, close the application window, then press Ctrl+c in the terminal, which interrupts the currently running program and returns you to the command line. We’ve already discussed the --help flag, but there are other help-related tools you can use too. First, there’s whatis – type this with any command to get a brief description of it and any specified elements, eg whatis apt-get install vlc will describe the apt-get tool, the install argument and what package vlc is. Flags are ignored. If you’re looking for a full-blown manual, the man tool provides access to your distro’s online reference manual, starting with man intro . This provides you with a long and detailed introduction to the command line. Once done, press q to quit back to the terminal. For more advice on navigating the manual, type man man or pair the tool with a command, eg man ls . Now you’ve taken your first steps into the world of the terminal, check out the box (Your First Terminal Commands, above) for some useful package management commands to work with. Next issue, we’ll look at how to navigate your filesystem from the terminal, plus launch programs and delve into more useful shortcuts to help speed up the way you interact with the command line. LXF

If you missed last issue Head over to http://bit.ly/MFMissues now!



Hardware Get Ubuntu working on a Windows x86 hybrid 2-in-1 device

Ubuntu: Linux on a tablet Nick Peers digs deep to discover how to successfully install a working version of Ubuntu on a Windows 2-in-1 tablet.

Our expert Nick Peers

recently bought himself a Linx 1010 tablet specifically for the purposes of getting Linux working on it. That’s either a sign of great dedication or stupidity – we’ll let you decide.

Quick tip Ian Morrison has done a lot of hard work building a version of Ubuntu 14.04.3 LTS for Z3735F-powered devices like the Linx 1010. If you’d like him to develop his work further, we recommend donating through his website: www.linuxium.com.au.

Are you jealous of the sudden proliferation of cheap Windows 2-in-1 tablets? Wish you could run Linux on one instead? Spanish smartphone manufacturer BQ may be teaming up with Canonical to sell the Aquaris M10 tablet with Ubuntu pre-installed, but with its price tag expected to be north of £200, why pay more when it turns out you can – with a fair amount of tweaking – get Linux to install on one of those cheap Windows devices? These devices all use a low-end Intel Atom quad-core processor known collectively as Bay Trail, and we managed to source one such tablet, which we’ve made the focus of this tutorial. The device in question is a Linx 1010, which sports an Atom Z3735F processor, 2GB RAM, 32GB internal eMMC storage (plus a slot for an additional microSD card), two full-size USB ports and a touchscreen with multi-touch support. It can be bought with a detachable keyboard and trackpad through the likes of www.ebuyer.com for under £150. These devices come with Windows 10 pre-installed, but as you’ll discover, it’s possible to both run and install flavours of Linux on them. In a perfect world, you’d simply create a live Linux USB drive, plug it in and off you go, but there are a number of complications to overcome. First, these tablets pair a 64-bit processor with a 32-bit EFI – most distros expect a 64-bit


processor with 64-bit EFI, or a 32-bit processor with a traditional BIOS, so they won’t recognise the USB drive when you boot. Second, while hardware support is rapidly improving with the latest kernel releases, it’s still not particularly comprehensive out of the box. But don’t worry – if you’re willing to live with reduced functionality for now (things are improving on an almost daily basis) you can still get Linux installed and running in a usable setup on a Bay Trail-based tablet. Here’s what you need to do.

It pays to take a full backup of your tablet in its current state, so you can restore it to its original settings if necessary. The best tool for the job by far is a free Windows application called Macrium Reflect Free (www.macrium.com/reflectfree.aspx). Install this on your tablet, then back up the entire disk to your tablet’s microSD storage before creating a failsafe Macrium USB bootable drive for restoring the backup if required. Note: the microSD slot can’t be detected by the rescue disc, so to restore your tablet to its default state you’ll need a USB microSD card reader, which can be detected by the Macrium software.

With your failsafe in place, it’s time to play. While they’re very similar, Bay Trail tablets aren’t identical, so it’s worth searching for your tablet model and a combination of relevant terms (‘Linux’, ‘Ubuntu’ and ‘Debian’ etc) to see what turns up. You’re likely to find enthusiasts such as John Wells (www.jfwhome.com), who has detailed guides and downloadable scripts for getting Ubuntu running on an Asus Transformer T100TA tablet with most of the hardware working. Another good resource is the DebianOn wiki (https://wiki.debian.org/InstallingDebianOn), where you’ll find many other tablets featured, with guides to what works, what issues to look out for, and handy links and downloads for further information.
Sadly – for us – there’s no handy one-stop shop for the Linx 1010 tablet, so we had to do a fair bit of experimenting before we found the best way forward for us (see Experimenting with Linux support, p72).

Install Linux on Linx

We decided to go down the Ubuntu route when it came to the Linx 1010 tablet. We’re indebted to the hard work of Ian Morrison for producing a modified version of Ubuntu (14.04.3 LTS) that not only serves as a live CD, but also works as an installer. We experimented with later Ubuntu releases – 15.10 and a daily build of 16.04 – but while the live distros work fine, installing them proved to be impossible. Still, all is not lost, as you’ll discover later on. So, the simplest and easiest way to install Ubuntu on your Z3735F-powered tablet is to use Ian’s


Ubuntu tablet Tutorial

Hardware support

What’s the current state of play for hardware support on Bay Trail tablets? It varies from device to device, of course. Here’s what you should be looking for when testing your tablet:
ACPI This deals with power management. Support is practically non-existent out of the box, but later kernels do tend to add support for displaying battery status – the Linx appears to be the exception to the rule here. Suspend and hibernation should be avoided.
Wi-Fi Later kernels again improve support, but many devices use SDIO wireless adaptors, which aren’t supported without patches or custom-built drivers like those found at https://github.com/hadess/rtl8723bs.

Bluetooth This often needs patching with later kernels, although our Linx tablet retained Bluetooth connectivity throughout, even when the internal Wi-Fi adaptor stopped working.
Sound A problem on many tablets – even if the driver is recognised and loaded, required firmware may be missing. Be wary here: there are reports of users damaging their sound cards while trying to activate them.
Touchscreen As we’ve seen, older kernels don’t support them, but upgrading to kernel 4.1 or later should yield positive results, albeit with a bit of tweaking.
Camera There’s been little progress made here so far. In most cases you’ll need to wait for drivers to appear.

unofficial ‘official’ quasi-Ubuntu 14.04.3 LTS release. This comes with 32-bit UEFI support baked into the ISO, and includes custom-built drivers for key components, including the Z3735F processor and the internal Wi-Fi adaptor. However, there’s no touchscreen support, so you’ll need to connect the tablet to a detachable keyboard and touchpad.

Go to www.linuxium.com.au on your main PC and check out the relevant post (dated 12 August 2015, but last updated in December) under Latest. Click the ‘Google Drive’ link and select the blue ‘Download’ link to save the Ubuntu-14.04.3-desktop-linuxium.iso file to your Downloads folder. Once done, pop in a freshly formatted USB flash drive – it needs to be 2GB or larger and formatted using FAT32. The simplest way to produce the disk is to use UNetbootin: select your flash drive, browse for the Ubuntu ISO and create the USB drive. Once written, eject the drive.

Plug it into one of the Linx’s USB ports, then power the tablet up by holding the power and volume + buttons together. After about five seconds or so you should see confirmation that the boot menu is about to appear – when it does, use your finger to tap ‘Boot Manager’. Use the cursor key to select the ‘EFI USB Device’ entry and hit Return to access the Grub menu. Next, select ‘Try Ubuntu without installing’ and hit Return again. You’ll see the Ubuntu loading screen appear and then, after a lengthy pause (and blank screen), the desktop should appear. You should also get a momentary notification that the internal Wi-Fi adaptor has been detected – one of the key indications that this remixed Ubuntu distro has been tailored for Bay Trail devices.

Up until now you’ll have been interacting with your tablet in portrait mode – it’s time to switch to a more comfortable landscape view, and that’s done by clicking the ‘Settings’ button in the top right-hand corner of the screen and choosing System Settings. 
Select ‘Displays’, set the Rotation drop-down menu to ‘Clockwise’ and click ‘Apply’ (the button itself is largely off-screen, but you can just make out its left-hand end at the top of the screen as you look at it). Next, connect to your Wi-Fi network by clicking the wireless button in the menu bar, selecting your network and entering the passkey. You’re now ready to double-click ‘Install Ubuntu 14.04.3’ and follow the familiar wizard to install

Upgrade the kernel to 4.1 or later to make Ubuntu touch-friendly on your tablet.

Ubuntu on to your tablet. You’ll note that the installer claims the tablet isn’t plugged into a power source even though you should have done so for the purposes of installing it – this is a symptom of Linux’s poor ACPI support for these tablets. We recommend ticking ‘Download updates while installing’ before clicking ‘Continue’, at which point you’ll probably see an Input/output error about fsyncing/closing – simply click ‘Ignore’ and then click ‘Yes’ when prompted to unmount various partitions. At the partition screen you’ll see what appears to be excellent news – Ubuntu is offering to install itself alongside Windows, but this won’t work, largely because it’ll attempt to install itself to your microSD card rather than the internal storage. This card can’t be detected at boot up, so the install will ultimately fail. Instead, we’re going to install Ubuntu in place of Windows, so select ‘Something else’. Ignore any warning about /dev/sda – focus instead on /dev/mmcblk0, which is the internal flash storage. You’ll see four partitions – we need to preserve the first two (Windows Boot Manager and unknown) and delete the two NTFS partitions (/dev/mmcblk0p3 and /dev/mmcblk0p4 respectively). Select each one in turn and click the ‘-’ button to delete them.
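If you want to be sure which device is which before deleting anything, the live session’s lsblk makes the layout obvious. A quick sketch – the names given in the comment are what our Linx 1010 reported, and may differ on your tablet:

```shell
# List block devices: on the Linx the internal eMMC appears as
# mmcblk0 (with partitions mmcblk0p1-p4) and the install USB as sda.
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
```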

Quick tip While it may be tempting to upgrade the kernel all the way to the current release (4.4.1 at time of writing) you may run into issues with your touchpad. For now, stick to kernel 4.3.3 until these problems are ironed out.

You can create your Ubuntu installation media from the desktop using the UNetbootin utility – it’s quick and (in this case) works effectively.

For print and digital subs See www.myfavouritemagazines.co.uk/linsubs www.techradar.com/pro

April 2016 LXF209 71



Make sure you manually set up your partitions when prompted – you need to preserve the original EFI partition.

Next, select the free space that’s been created (31,145MB or thereabouts) and click the ‘+’ button. First, create the main partition – reduce the allocation by 2,048MB to leave space for the swap partition, and set the mount point to ‘/’, but leave all other options as they are before clicking ‘OK’. Now select the remaining free space and click ‘+’ for a second time. This time, set ‘Use as’ to ‘swap area’ and click ‘OK’. Finally, click the ‘Device for bootloader installation’ drop-down menu and select the Windows Boot Manager partition before clicking ‘Install Now’.

The rest of the installation process should proceed smoothly. Once it’s finished, however, don’t click ‘Continue testing or Reboot now’ just yet. First, there’s a vital step you need to perform in order to make your copy of Ubuntu bootable, and that’s to install a 32-bit version of the Grub 2 bootloader. The step-by-step walkthrough (see bottom, p73) reveals the simplest way to do this, courtesy of Ian Morrison’s handy script.

Hardware compatibility

Once you’ve installed Ubuntu and rebooted into it for the first time, you’ll once again need to set the desktop orientation to landscape via Screen Display under System Settings. Now

open Firefox on your tablet and download two more scripts from http://bit.ly/z3735fpatch and http://bit.ly/z3735fdsdt respectively. Both improve hardware support for devices sporting the Linx 1010’s Z3735F Atom chip, and while they don’t appear to add any extra functionality to the Linx, they do ensure the processor is correctly identified. You need to chmod both scripts following the same procedure as outlined in step 2 of the Grub step-by-step guide (see bottom, p73), then install them one after the other, rebooting between each. Finally, download and install the latest Ubuntu updates when offered.

You’ll notice the login screen reverts to portrait mode when you first log in – don’t worry, landscape view is restored after you log in, and you can now review what is and isn’t supported on your tablet. In the case of the Linx 1010, not an awful lot is working at this point. There’s no ACPI support, the touchscreen isn’t detected, and there’s no camera support or sound (although the sound chip is at least detected). The internal Wi-Fi is thankfully supported, as are the USB ports, Bluetooth, keyboard/trackpad and internal flash.

Later versions of the kernel should improve compatibility – this is why we were keen to see if we could install Ubuntu 15.10 or 16.04 on the Linx. We were thwarted in this respect – touch support is present, but we had to manually add the bootia32.efi file to the EFI\Boot folder to get the live environment to boot, and installation failed at varying points, probably due to the spotty internal flash drive support. We’re hoping the final release of 16.04 may yield more possibilities, but if you can’t wait for that and are willing to run the risk of reduced stability, read on.

If you’re desperate to get touchscreen support for your tablet, and you’ve got a spare USB Wi-Fi adaptor handy (because updating the kernel breaks the internal Wi-Fi adaptor), then upgrade your kernel to 4.1 or later. 
We picked kernel 4.3.3 – to install this, type the following into a Terminal:
$ cd /tmp
$ wget kernel.ubuntu.com/~kernel-ppa/mainline/v4.3.3-wily/linux-headers-4.3.3-040303_4.3.3-040303.201512150130_all.

Experimenting with Linux

The only other distro we were able to install successfully on the Linx 1010 tablet was Debian Jessie (8.3). It’s unique in that both 32-bit and 64-bit versions work with 32-bit UEFI without any modification, but there’s no live support: you’ll have to install it direct to the hard drive. Wi-Fi support isn’t provided out of the box – we had to add a non-free firmware package to the USB flash drive to get our plug-in card recognised. Hardware support was minimal, although upgrading to kernel 4.2 did at least allow the internal Wi-Fi adaptor to be recognised.

Elsewhere we tried the Fedlet remix of Fedora (http://bit.ly/fedora-fedlet) as a live USB, but had to use a Windows tool (Rufus) to create the USB flash drive in order for it to boot. Performance was extremely sluggish, and the internal Wi-Fi adaptor wasn’t recognised. Touch did work, however. We also had success booting from a specialised Arch Linux ISO that had SDIO Wi-Fi and 32-bit UEFI support – you can get this from http://bit.ly/arch-baytrail – but we stopped short of installing it. We also got a version of Porteus up and running from http://build.porteus.org with a lot of fiddling, but the effort involved yielded no better results than anything else we tried.


Setting aside the issues with the Wi-Fi adaptor, installing Debian was a reasonably straightforward process on our Linx 1010 tablet.



deb
$ wget kernel.ubuntu.com/~kernel-ppa/mainline/v4.3.3-wily/linux-headers-4.3.3-040303-generic_4.3.3-040303.201512150130_amd64.deb
$ wget kernel.ubuntu.com/~kernel-ppa/mainline/v4.3.3-wily/linux-image-4.3.3-040303-generic_4.3.3-040303.201512150130_i386.deb
$ sudo dpkg -i linux-headers-4.3*.deb linux-image-4.3*.deb
Once complete, reboot your tablet. You’ll discover you now have touch support at the login screen (this is single touch, not multi-touch), but once you log in and the display rotates you’ll find it no longer works correctly. We’ll fix that shortly.

First, you need to be aware of the drawbacks. You’ll lose support for the internal SDIO wireless card (we had to plug in a spare USB Wi-Fi adaptor to get internet connectivity back) and the sound is no longer recognised. There may also be issues with stability that you can fix with a rough and ready workaround by configuring Grub:
$ sudo nano /etc/default/grub
Look for the line marked GRUB_CMDLINE_LINUX_DEFAULT and change it to this:
GRUB_CMDLINE_LINUX_DEFAULT="intel_idle.max_cstate=0 quiet"
Save your file, exit nano and then type:
$ sudo update-grub
Reboot, and you’ll reduce the potential for system lockups, but note that this kernel parameter increases power consumption and impacts battery life – a shame, because the ACPI features still don’t work, meaning the power readings remain inaccurate: battery life is always reported as 100%, even when it’s clearly not.
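Once you’re back at the desktop, it’s worth confirming that the new kernel and the workaround both took effect. A quick sanity check (the version string in the comment assumes the 4.3.3 build installed above):

```shell
# The running kernel should now report the mainline 4.3.3 build, and
# intel_idle.max_cstate=0 should appear on the kernel command line.
uname -r
cat /proc/cmdline
```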

Fix the touchscreen

Moving on, let’s get the touchscreen working properly. First, identify its type using xinput . In the case of the Linx 1010, this reveals it has a Goodix Capacitive TouchScreen. What we need to do is instruct the touchscreen to rotate its matrix when the display does, which means it’ll work in both portrait

and landscape modes. You can do this using xinput :
xinput set-prop "Goodix Capacitive TouchScreen" 'Coordinate Transformation Matrix' 0 1 0 -1 0 1 0 0 1
You should now find the touchscreen works correctly in landscape mode. As things stand, you’ll need to apply this manually every time you log into Ubuntu, and the touchscreen won’t work properly if you rotate back to portrait mode.

If you want to be able to rotate the screen and touchscreen together, then adapt the rotate-screen.sh script at http://bit.ly/RotateScreen (switch to Raw view, then right-click and choose ‘Save page as’ to save it to your tablet). Then open it in Gedit or nano to amend the following lines:
TOUCHPAD='pointer:SINO WEALTH USB Composite Device'
TOUCHSCREEN='Goodix Capacitive TouchScreen'
Save and exit, then use the script:
$ ./rotate_desktop.sh <option>
Substitute <option> with normal (portrait), inverted, left or right to rotate both the screen and touchscreen matrix. Before using the script, you need to first undo the current screen rotation using Screen Display – restore it to its default view, then run ./rotate_desktop.sh right to get touchpad and touchscreen on the same page.

From here we suggest creating a startup entry: open the dash and type startup , then launch Startup Applications. Click ‘Add’, type a suitable name to help you identify it, then click ‘Browse’ to locate and select your script – when done, click inside the ‘Command’ box and be sure to append right to the end of the command. Click ‘Save’, reboot and after logging in you should find your tablet and touchscreen now work beautifully with your plug-in keyboard and touchpad.

You’ve now successfully installed Ubuntu on your Bay Trail tablet. What next? Keep an eye on the latest kernel updates and forums to see if entrepreneurial folk have found the workarounds and tweaks required to get more of your tablet’s hardware working properly. 
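If you only ever rotate one way, the relevant part of that script can be boiled down to a tiny helper of your own. A sketch – the device name is the Linx 1010’s Goodix panel as reported by xinput, and the matrix is the clockwise one given above; substitute whatever xinput reports on your tablet:

```shell
# Write a minimal helper that rotates the display clockwise and
# applies the matching touchscreen transformation matrix.
cat > "$HOME/rotate_right.sh" <<'EOF'
#!/bin/sh
xrandr -o right
xinput set-prop "Goodix Capacitive TouchScreen" \
  'Coordinate Transformation Matrix' 0 1 0 -1 0 1 0 0 1
EOF
chmod +x "$HOME/rotate_right.sh"
```

Point a Startup Applications entry at this helper and the rotation is applied automatically at login, much like appending right to the full script.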
As for us, we’re off to see if we can get the internal sound and Wi-Fi working again before turning our attention to the ACPI settings… LXF

Quick tip Open Settings > Universal Access > Typing tab and flick the ‘On Screen Keyboard’ switch to On to have it start automatically with Ubuntu. Next, open Onboard Settings via the dash and tick ‘Start Onboard Hidden’, plus tweak the keyboard to your tastes. Now you’ll have easy access to the touch keyboard via the status menu.

Install 32-bit Grub bootloader

1

Download install script

When Ubuntu has finished installing to your tablet, make sure the dialogue asking if you’d like to continue testing or restart your PC is kept on-screen. Now open the Firefox browser and navigate to http://bit.ly/grub32bit, which will redirect to a Google Drive download page. Click ‘Download’ to save the linuxium-32bit-patch.sh script to your Downloads folder.

2

Install script

The linuxium-32bit-patch.sh file is a script that automates the process of installing the 32-bit version of the Grub 2 bootloader. Now you’ll need to press Ctrl+Alt+T and type the following commands:
$ cd Downloads
$ chmod 700 linuxium-32bit-patch.sh
$ sudo ./linuxium-32bit-patch.sh


3

Reboot PC

You’ll see a series of packages are downloaded and installed automatically, which will basically allow your tablet’s 32-bit UEFI to recognise the Grub bootloader, and allow Ubuntu to load automatically at startup. Click the ‘Restart Now’ button to complete the process and wait while your PC reboots into Ubuntu proper for the first time.



Gentoo Add and create your own overlays to Portage and find software

Portage: Using sets & overlays Neil Bothwick delves deeper into managing Gentoo and Portage.

Our expert Neil Bothwick

has a great deal of experience with booting up, as he has a computer in every room, but not as much with rebooting since he made the switch to Linux.

There are a lot of overlays available. The ones marked with a green asterisk are officially supported.

In issue LXF208 we looked at getting more out of Portage, Gentoo’s package manager. We covered some of the basics and now we’ll look at more features to help you make the most of Gentoo’s flexibility. The portage tree contains a lot of software: at the time of writing there are 39,394 ebuilds for 19,120 packages (most packages have more than one version in the tree). That’s a lot, but it’s not everything. Occasionally, you’ll want to install something that’s not in the portage tree and this is where overlays come in. An overlay is just an extra repository (repo) to supplement the portage tree, similar to a PPA in Ubuntu. One difference is that it’s easy to add your own overlay. If you

have an ebuild for a package – possibly downloaded from Gentoo’s Bugzilla site or provided by the software – you can add it to your local overlay to make it available to Portage. An overlay is simply a directory following the same layout as the portage tree. Create a directory – the usual default is /usr/portage/local but it can be anywhere on your hard drive or network – then create the file /etc/portage/repos.conf/local.conf containing:
[local]
location = /usr/portage/local
priority = 100
auto-sync = No
The priority setting is used when more than one overlay, including the main tree, contains the same version of a package. The auto-sync setting determines what happens when you run emerge --sync ; in the case of a local overlay there’s nothing to do.

The directory structure of an overlay is the same as the main tree and must use the same category directories, with a sub-directory for each package, then the ebuild named after the software and version number. So if someone has given you an ebuild for foo 1.0, this would be saved as /usr/portage/local/app-misc/foo/foo-1.0.ebuild. If you look in the main portage tree, you’ll see a Manifest file

Sets

We mentioned sets last month, in the context of @system and @world, but there are others. If you install a new kernel, any third-party modules (such as Nvidia and some wireless drivers) also need to be rebuilt. Any such drivers are added to a set when they are first


installed, so you can rebuild them all with:
$ emerge @module-rebuild
Similarly, @x11-module-rebuild does the same for any X drivers after updating XOrg. You can also define your own sets, which is useful if you want to install the same applications on several machines.


Create a file in /etc/portage/sets containing a list of the packages. We have a set called base that includes the various programs that we always want on a new system, like eix and conf-update. We copy this file to a new install and then emerge @base to install them all.
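A set file really is just a plain text list of package atoms. Here’s a sketch that builds one under a scratch directory so you can look before you leap – drop the ROOT= prefix (and run as root) to create the real /etc/portage/sets/base. The two atoms are examples (eix and conf-update live under app-portage in the main tree); put whatever you always install in yours:

```shell
# Build an example @base set file under a scratch root first.
ROOT=/tmp/sets-demo
mkdir -p $ROOT/etc/portage/sets
cat > $ROOT/etc/portage/sets/base <<'EOF'
app-portage/eix
app-portage/conf-update
EOF
cat $ROOT/etc/portage/sets/base
```

Copy the finished file to a new install and a single emerge @base pulls everything in.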


Portage Tutorial for each package, this contains things like checksums for checking for tampering. To generate one for your ebuild, run HEXLOG XVU SRUWDJH ORFDO DSS PLVF IRR HEXLOG PDQLIHVW One of the things you can often do with an overlay is bump an ebuild when a new version of the software is released but not yet in the portage tree. So if IRR is in the tree but the new IRR is not, you can generally do this: FS D XVU SRUWDJH DSS PLVF IRR XVU SRUWDJH ORFDO DSS misc PY XVU SRUWDJH ORFDO DSS PLVF IRR HEXLOG XVU SRUWDJH ORFDO DSS PLVF IRR HEXLOG HEXLOG XVU SRUWDJH ORFDO DSS PLVF IRR HEXLOG PDQLIHVW HPHUJH D IRR While creating your own overlay is handy, there are already plenty of overlays ready for you to add. You manage them with the layman tool, which you may have to emerge first. You can get a list of overlays with $ layman --list and add one to your computer with $ layman --add overlay-name . To list installed overlays use $ layman --list-local . When you add an overlay, an entry is added to /etc/portage/repos.conf/ layman.conf and the contents are downloaded to your system, and kept up to date when you run emerge --sync . Some overlays provide extra packages, others are for testing new versions before they go into the tree, eg you can try the latest beta of Gnome or KDE overlay. Those overlays are among the official ones, subject to similar quality control as the main tree, but layman also has many unofficial overlays. It will warn you when you try to add an unofficial overlay. There are other overlays out there that have not been added to layman, to add one of these, create a file in /etc/ portage/repos.conf, as you did for your local overlay.

Finding software

So there’s a wealth of software in the main portage tree, plus all these overlays available – so how do you find what’s available, and from where? The emerge --search command does what its name implies and searches the portage tree – but very slowly. One of the first Portage utility programs you should install is eix, which we mentioned briefly last month. This uses a database of available software, which you should keep up to date by running eix-update after you sync. This is more than a faster emerge --search : if you then run $ eix-remote update it will download and add a database of the contents of the overlays in layman. Now you can search the portage tree with eix foo , or the tree and all overlays with eix -R foo .

The eix utility has many more options, which unfortunately means the man page is hard work. The important ones are: -c for a compact output with one line per package, -A to search category names, -S to search package descriptions and -s to search package names. The last one is the default if you don’t specify any of the others. So to search for any packages that have apache in the name or description, and give a compact output because there are many, you would use $ eix -csS apache . The search string you give is treated as a regular expression by default; add -e if you only want exact matches. The output from eix tells you: which versions are available; which USE flags are available; which version is installed and the USE flags set for it; along with the installation date – all of which are useful for troubleshooting.

One area that can trip up new users is managing configuration files. Let’s say you have foo-1.0 installed (yes,

Mixed arch Gentoo effectively has two portage trees in one: the stable and testing trees. If you use the stable tree, by setting ACCEPT_KEYWORDS=amd64 in make.conf, you will get the tried and tested ebuild rather than the very latest. Using ~amd64 gives the testing ebuild. However, it’s possible to mix the two to a certain extent. Let’s say that you run a

stable system but need the latest version of our old friend foo. Just add the package atom to /etc/portage/package.accept_keywords. As with package.use – and all similar files – this can either be a single file or a directory full of them. Running a mixture of testing and stable can cause issues, so use this sparingly.
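Accepting the testing keyword for one package, as the box describes, is a one-line entry. A sketch under a scratch root, with our imaginary foo standing in for a real atom – drop the ROOT= prefix to edit the real file:

```shell
# Keyword a single package for ~amd64 (testing) without
# switching the whole system over to the testing tree.
ROOT=/tmp/keywords-demo
mkdir -p $ROOT/etc/portage
echo "app-misc/foo ~amd64" >> $ROOT/etc/portage/package.accept_keywords
cat $ROOT/etc/portage/package.accept_keywords
```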

that program gets a lot of use) and foo-1.1 becomes available and is installed when you do a world update. The foo-1.0 program has a config file at /etc/foo.conf, which you’ve carefully tuned to your needs. But foo-1.1 has a different config file because it has some new options. If Portage left your old file in place, these wouldn’t be set, but if it installed the new defaults you’d be unhappy. So it installs the new file as /etc/._cfg0000_foo.conf and prints a warning at the end of the emerge process that config files need updating.

The default tool for this task is etc-update, which gives you a list of config files to update and enables you to view the difference between them and choose which to use or edit. Alternatives include dispatch-conf, which is also part of Portage; cfg-update, which has an optional X interface; and conf-update, our choice. Whichever you use, it’s important that you act on those messages about updating configuration files; ignoring them is asking for trouble.

The main portage configuration file is /etc/portage/make.conf, which you’ll have edited during installation. You can change the paths that portage uses in here, eg many people think storing large amounts of data in /usr is wrong, so change DISTDIR to a different path to avoid filling /usr/portage/distfiles with all your source downloads. If you run several Gentoo computers on a network, consider making this an NFS share, so that only the first system needing a file has to download it. The FEATURES setting controls some other options. If you add buildpkg to the list, Portage will save a binary package of the files it installs, which you can then install with emerge -k package . This makes rolling back to an older version much easier should the new one prove troublesome. LXF

Conf-update – and other config managers – show you the changes made to files so you can accept, reject or modify them.




Octave Learn how to install it and start making 2D and 3D plots and graphs

Octave 4: Start your plotting Afnan Rehman delves into manipulating and visualising data using the – now standard – graphical user interface in Octave 4 and its new features.

Our expert Afnan Rehman

loves tinkering, fiddling and being a general nuisance when it comes to Linux and other systems. He now gets to write about his efforts.

The Octave GUI, first introduced in version 3.8, is now the standard user interface, replacing the ageing command-line one and making Octave more usable for the average Joe/Jill.

distributions (distros) include it within their software repositories (repos), which makes the process fairly simple. If you’re like me and want to mess around with building from source, go ahead and grab the source files as a compressed download from the official GNU Octave page at www.gnu.org/software/octave.

Quick tip
When installing, make sure your Linux distro is fully up to date and that any previous version of Octave is first uninstalled using the yum remove command.

There comes a time in a man’s life where he must ask: to calculate or not to calculate? If you’re anything like the author, with an engineering final project due in the next few hours and three cups of coffee sitting on your desk, the answer is yes. That’s when you turn to Octave, a beautiful piece of software that will turn a mass of messily scrawled paper-napkin equations into a visual feast.

Octave is an open source program developed out of the need to manipulate data and visualise it in a functional, yet easy-to-use environment. Based on the GNU Octave language, it’s very similar to the popular Matlab, but without the hefty price tag. Back in December 2013, Octave 3.8.0 was released, and with it an experimental – but much needed – graphical user interface (GUI) that brought Octave closer to becoming an easy-to-use program. Enter Octave 4.0.0, released in May 2015, with the now official and finished GUI that’s part of the software and no longer needs to be manually started after the program boots up. This version has also brought a lot of improvements to Matlab compatibility, helping ensure that your programs will work in both Matlab and Octave. How do I get this nifty piece of software, you ask? Some Linux


Compile Octave

We’ll be building from source using CentOS 6.6, which is a fairly barebones platform for tinkering. It comes with Octave version 3.4.3 already, but we aren’t interested in the old stuff. With our freshly downloaded tar.gz archive, we need to decompress it to get at the files inside. This can be done from the command-line terminal with tar xzf filename.tar.gz . Most distros also come with an archive manager of some sort that will extract the files for you. After this step, navigate your way to where the Octave files are located using the cd command. Once you’re in the main folder, run ./configure to generate the makefiles that will be used to build the program. If any errors come up during this time, make sure you have all the necessary software packages and libraries installed – the command line will tell you what’s missing. We found that in CentOS quite a few libraries, such as BLAS (Basic Linear Algebra Subprograms) and LAPACK (Linear Algebra PACKage), aren’t included in the base installation, and must be added manually using either the yum command or the Add/


Octave 4.0 Tutorial Remove Software tool. Once that process finishes, type make to start the building process. This will take a while so have two standard Linux Format Cups of TeaTM while you wait. It took our computer over half an hour at this step (so we watched a bit of telly as well). Once that finishes, the last thing to type is make install . This will finish fairly quickly and conclude the build and installation process. Congratulations, you are now able to use Octave 4.0. Now that we have it installed, let’s get a quick tour of the GUI, one of the main highlights of the new software. It’s very similar to the experimental GUI found in version 3.8; the command window takes up a majority of the screen in the default interface layout. As with previous versions it also includes the familiar ‘no warranty’ claim from the creator right underneath the copyright. On the left you’ll find the file browser, workspace, and command history tabs. The workspace tab provides details on all the variables currently being used in whichever script you’re running. It can be useful in troubleshooting your program as you can see variable values at each step of the program. Command History is also a useful tab that can show you what commands were run previously, which is great for helping troubleshoot a script or function.

The basics

Now that you know your way around, it’s time to learn the fundamentals. At its most basic, this program can be used as a simple calculator. Typing something like 10+20 will yield 30 in the command window as soon as you hit Enter. You can also assign values to variables and use those variables to do slightly more complex calculations, eg:
>> a = 14/2
a = 7
>> b = 13/8
b = 1.6250
>> a - b
ans = 5.3750
You may notice here that the command window outputs the value of every variable you’re inputting. While this doesn’t really affect the final result, it creates unnecessary clutter. To stop this from occurring, simply add a semi-colon to the end of the variable assignment to suppress the output. Octave also provides many of the same mathematical operators that you will find in Matlab or other mathematics-oriented languages, such as sin, cos, tan, max, min and pi etc. You can always look up a full list online or simply by typing help log in the command line. But where Octave really excels is in the area of matrices and vectors. Creating a vector

Using the surf command will alter your 3D plot output to include a nice coloured surface instead of a mesh.

is as simple as writing a sequence of numbers separated by spaces and enclosed in square brackets. To make a matrix with multiple rows, you would separate the terms of each row with a semicolon, eg >> [1 2 ; 3 4] . This creates a 2x2 matrix with the top row comprising the numbers 1 and 2 and the bottom row 3 and 4. However, sometimes you need to make larger sets of data for your calculations. The command linspace() is a useful tool here: you specify the first and last numbers in the array, as well as the number of elements you want, and Octave will generate a vector of that length with evenly spaced terms.

These vectors and matrices can also be used in mathematical operations. Octave mostly performs these operations term by term, meaning that if you were to ask it to multiply two different vectors, provided they are the same size, Octave would multiply the first term of the first vector with the first term of the second vector, then the second term of the first vector with the second term of the second vector, and so on. The key syntactic difference between multiplying two terms and two vectors or matrices is the period symbol, which should be added before the operator symbol whenever you’re dealing with element-wise vector or matrix multiplication. We’ll illustrate this with the following example:
>> x = [1 2 3];
>> y = [4 5 6];
>> x .* y
ans =
   4   10   18
Now on to how to display your output. For simple output, such as the example above, simply not suppressing the expression and having the result automatically displayed might be a fine way to do it. However, as you grow more adept

Quick tip When writing scripts, it’s a good idea to add comments in your code which explain what you’re doing. It helps both you and others understand the code better.

Octave and Matlab We’ve talked a lot about Octave’s compatibility with Matlab, but what specifically is compatible and what is different? Exactly how much can you do before you hit the wall that separates these two applications? First, let’s talk about compatibility. Octave was built to work well with Matlab, and hence shares many features with it. Both use matrices as a fundamental data type; both have built-in support for complex numbers; and both have powerful built-in maths functions and libraries. The vast majority of the syntax is interchangeable, and often opening a script file in one program will yield the same result as opening it in the other. There are some deliberate yet minor syntax additions on the Octave side that could potentially cause issues. One is that in Octave the " symbol can be used just the same as the ' symbol to define strings. Another is that code comments can be introduced by the # character as well as the % character in Octave. Beyond these minor differences there are slightly more influential changes, such as the presence of the do-until loop, which currently doesn’t exist in Matlab. On the other hand, Matlab includes many functions that aren’t present in barebones Octave, and Octave will throw up a rather worrying ‘unimplemented function’ error message when it has no knowledge of what to do with one of them.
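The differences described in the box can be seen in a few lines of Octave. This is an illustrative sketch (the variable names are our own); the # comments, double-quoted strings and do-until loop shown here are Octave-only, while the % and ' forms are portable to Matlab:

```octave
% percent comments and single quotes work in Matlab and Octave alike
s1 = 'single-quoted';   % portable to Matlab
s2 = "double-quoted";   # Octave also accepts " strings and # comments

# The do-until loop exists only in Octave:
i = 0;
do
  i = i + 1;
until (i >= 3)
disp(i)    % prints 3
```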

We’re #1 for Linux! Subscribe and save at http://bit.ly/LinuxFormat www.techradar.com/pro

April 2016 LXF209 77


Tutorial Octave 4.0

at using Octave and perform more complex calculations, suppressing output becomes absolutely necessary. This is where the disp and printf commands come in. The disp command is the most basic display command available: it simply outputs the value of whichever variable you give it, in the same way as if you had typed the variable into the command window, except that the name of the variable isn’t displayed. The printf command is considerably more powerful than disp, and also a bit more complicated to use. With printf you can define exactly what the output should look like, specifying things such as the number of digits to display and the format of the number, and you can even add output before or after the variable, such as a string of text. A printf command and its output would look like the following (assuming x = 40):
>> printf('The answer is: %i', x);
The answer is: 40
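As a short sketch of the formatting control printf gives you (the specific format strings below are our own examples, not from the tutorial), you can set the number of decimal places, pad with zeroes and mix text with values:

```octave
x = 40;
printf("Integer: %i\n", x);          % Integer: 40
printf("Two decimals: %.2f\n", pi);  % Two decimals: 3.14
printf("Padded: %08.3f\n", x);       % Padded: 0040.000
```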

Scripts and functions

Quick tip To get a full list of the available style commands for graphs, go to http://bit.ly/OctaveGraphStyles.

Now that we know some of the basics, we can go on to the more advanced things that Octave can do. Unlike a basic calculator, we can write scripts in Octave to perform different functions, which is somewhat like basic programming, where you can ask for input, perform tasks and print results to the command window. Much like its expensive cousin, Matlab, Octave can use scripts and functions to create a program that runs calculations on its own, using user input or even reading text files to accomplish the task. By clicking the ‘New Script’ button in the top-left corner of the main window you can open a fresh script file, where you can write a program that runs independently of the user, meaning you won’t need to retype every command when you want to calculate something. Script files are saved in .m format, which ensures further compatibility with Matlab. This way anything created in Octave can be used in Matlab and vice versa. The script file will be blank the first time you use it and located in a tab labelled ‘editor’. You’ll find a number of options

across the top of the file for running, saving and editing the script. When you start typing your commands, you’ll notice that the editor also helpfully colour-codes many of the entries for you, in a similar way to a good text editor. Now that you’re familiar with the layout and basic functions, what can you do exactly? You can do many of the things that you could do in another programming language, such as Java or C, with ease in this maths-oriented language. You can use for-loops and while-loops to take care of repetitive tasks, such as generating number arrays; assign variables; prompt the user for input; make decisions with if/else statements; and so on. One of the many things you could try with your newfound scripting ability is a handy little tool for calculating a y-value from a supplied equation and a given x-value:
x = 3.2;
y = 3*x + 24;
disp(y);
And just like that, we’ve calculated the value of y (which is 33.6, in case you were curious) and used the display command to print it in the command window. Of course, this is a fairly simple example, but Octave provides a host of tools for creating more complex scripts and functions, such as case, do-until, break and continue; all of the mathematical operators for adding, subtracting, multiplying and dividing, as well as roots and exponentials; and all the usual boolean and comparison operators, such as <, <=, >, >=, ==, !=, && and ||. Functions are files that contain mathematical expressions and are usually a separate snippet of code that can be called by the main script file. For those familiar with programming, an Octave function works much like a method or function in the Java programming language. In Octave, functions typically use the following syntax:
function [retval1, retval2, …] = name(arg1, arg2, …)
  body code
endfunction
To call the function in a script file, we’d add a line such as: value1 = name(7);
In this line, 7 functions as the input argument, and whatever value is returned is assigned to the variable value1. Functions help organise the code in a way that makes it easy to understand, easy to modify and, in many cases, a lot shorter.
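Putting the function syntax above into practice, here’s a small sketch (the function name times_and_square is our own invention). Note that, unlike Matlab of this era, Octave lets you define a function and then call it in the same script file; the more portable convention is to put each function in its own .m file named after the function:

```octave
function [y, z] = times_and_square(x)
  % Returns 3*x + 24 and the square of x.
  y = 3*x + 24;
  z = x^2;
endfunction

[value1, value2] = times_and_square(7)
% value1 = 45, value2 = 49
```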

Graphs, graphs everywhere

Plots can be coloured differently and have different lengths on both the x and y axes. You can also include a graph legend using the legend() command.

Now on to the fun stuff: graphing! We can draw many pretty pictures using the built-in FLTK (Fast Light ToolKit) graphics toolkit. FLTK replaced gnuplot as the primary graphics library thanks to the better way it scales and rotates 3D plots – a useful feature when trying to visualise data. 2D plots are fairly simple and often only require an x and y data set of some form and the plot function. Inputting arrays such as those below will net you a simple line plot with a slope of 0.5:
>> x = [0:0.5:200];
>> y = [0:0.25:100];
>> plot(x,y);
Note how useful the semicolon’s output suppression is here. If you forget, it will cause all 400 or so terms in the


www.linuxformat.com



The key to success Whether it’s life or Linux, the success of one is built on the strength of many. Beyond the scope of this article there are many resources that can help you advance your knowledge of Octave. The full documentation is available at https://www.gnu.org/software/octave/doc/interpreter. This guide, published by the creator of the software, John Eaton, covers almost everything you need to know to

push the software to its fullest potential. The only drawback to the manual is that while it’s comprehensive, it is also very dense at times. For a more beginner-friendly set of tutorials, check out the Wikibooks tutorial at http://bit.ly/WikibooksOctave. Another helpful guide is by TutorialsPoint at http://bit.ly/MatlabOctaveTuturial. Although this guide seems specific to Matlab, almost all of

If you wish, you can start up Octave in the old-fashioned, command-line-only mode by heading over to the terminal and typing octave --no-gui , which turns off the GUI.

array to appear in your command window. Of course, it’s also possible to plot multiple 2D graphs at once by just separating the parameter sets with commas, eg >> plot(x,y,x2,y2); . This piece of code will create two graphs from different sets of x and y values. Note: this can also create graphs of different lengths. If you want to open multiple windows for different plots, precede each plot() command with figure; . To differentiate between the different graphs, you can give them different colours and styles by adding parts to the plot command. You can even add commands for a graph title and axis labels. Take a look at the example below:
>> plot(x,y,'r-s',x2,y2,'b-+'); title('Graph Name'); xlabel('x-axis'); ylabel('y-axis');
This string of commands, separated by semicolons, first plots the graphs, styling the first as a red line with square data points and the second as blue with ‘+’ sign data points, and then adds a title and axis labels. These commands are usable in exactly the same form in Matlab.
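A complete styled-plot sketch, combining the plot styling above with the legend() command mentioned earlier (the data sets here are our own illustrative choices):

```octave
x = linspace(0, 2*pi, 100);
% Red solid line for sine, blue dashed line for cosine:
plot(x, sin(x), 'r-', x, cos(x), 'b--');
legend('sin(x)', 'cos(x)');
title('Two curves on one plot');
xlabel('x');
ylabel('y');
```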

the syntax carries over to Octave and the tutorials do address where there may be incompatibility. Although the information in these various resources may overlap, each has its own examples, and having a variety of different examples will be really helpful for beginners wanting to grasp the different ways a particular command or code snippet can be used to achieve a desired result.

You could also try the ndgrid function, which generates coordinate grids in n dimensions, rather than being limited to two or three. A typical command structure for a meshgrid graph would be as follows:
>> [x,y] = meshgrid(-20:0.5:20);
>> z = x.*exp(-x.^2-y.^2);
>> mesh(x,y,z);
This will produce a 3D mesh plane that is transparent between the lines. The meshgrid command in the first line creates a 2D plane of coordinates. The second line defines a function for z, and the third graphs the 2D ‘mesh’ we created earlier with respect to the z function. To fill in the gaps, you can follow the above commands with the surf command: >> surf(x,y,z); . Well, that covers all of the basics you need to know to get started with Octave. This knowledge can also be applied to Matlab or other similar programs, depending on what software you use to accomplish your task. However, there’s much more to Octave’s capabilities than we’ve covered in this tutorial – learning how to use Octave to its fullest potential would take a lot more than a four-page tutorial, after all. To learn more about Octave and how it can be used to solve linear systems, differential equations, polynomials and other applications, check out the vast number of tutorials and documentation; the best place to start is here: http://bit.ly/OctaveSupport. LXF

Using 3D plots

3D plots are a little tougher. The simplest 3D line plots are produced with the plot3 function; eg the code below would produce a 3D helix:
>> z = [0:0.05:5];
>> plot3(cos(2*pi*z), sin(2*pi*z), z);
Using scripts and functions, you can shorten this process to just inputting the values for x, y and z. However, what if we want to create those fancy 3D plane graphs? The meshgrid command, of course! This command creates matrices of x and y coordinates to help in plotting data on the z axis.
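As the text suggests, the helix recipe can be wrapped in a function so you only supply the values that matter. A minimal sketch (plot_helix is our own hypothetical name, not a built-in):

```octave
function plot_helix(turns, step)
  % Plots a 3D helix with the given number of turns,
  % sampled at the given step along the z axis.
  z = 0:step:turns;
  plot3(cos(2*pi*z), sin(2*pi*z), z);
endfunction

plot_helix(5, 0.05);   % same helix as the example above
```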

The recently implemented FLTK graphics library means you can now rotate and resize 3D plots easily, such as the famous ‘Octave Sombrero’, shown here.




Raspberry Pi Build a Pi-powered media centre for your music and film collection

OpenELEC: Media streamer Nick Peers demonstrates how to build your own smart-streaming stick using a Raspberry Pi for both personal and internet media.

Our expert Nick Peers

has spent the last 18 months fine-tuning his Pi 2-powered media server setup for easy access to his ever-growing DVD collection. It’s starting to get messy and fixated on Doris Day.

OpenELEC runs an initial configuration wizard to get you connected – you’ll need a USB hub if you have a Pi Zero and want network capabilities.

builds for jailbroken Apple TV mark 1 boxes (head to http://chewitt.openelec.tv/appletv) as well as AMLogic-based hardware (http://bit.ly/amlogic-oe).

Choose your hardware

Quick tip Take the time to check out OSMC (http://osmc.tv). It’s another Kodi-based distro optimised for smaller devices (including Pi), and comes with its own custom, minimalist skin. There’s little difference in performance, and while OSMC is simpler to use out of the box, OpenELEC provides a more Kodi-like experience.


Why fork out for an expensive set-top box when you can build your own for significantly less? Thanks to the powerful open-source Kodi media centre software (https://kodi.tv), you can access locally stored personal media on demand, plus watch a wide range of internet streaming services, including catch-up TV. The success of Kodi – formerly known as XBMC – has led to the development of Kodi-flavoured distributions (distros). If you’re looking for a full-blown Ubuntu-based distro with Kodi sitting on top, then Kodibuntu (http://kodi.wiki/view/Kodibuntu) will appeal. Kodibuntu is overkill for most people’s needs, which is where OpenELEC (www.openelec.tv) comes in. This is an embedded OS built around Kodi, optimised for less powerful setups and designed to be as simple to run and administer as possible. There’s an underlying OS you can access via SSH, but for the most part you can restrict yourself exclusively to the Kodi environment. Four official builds are currently available: ‘generic’ covers 32-bit and 64-bit Intel, Nvidia and AMD graphics setups; there are two Raspberry Pi flavours – one for the Pi 2, and the other for everything else, including the new Pi Zero; and one final build is for Freescale iMX6 ARM devices. There are further unofficial


The cheapest way to build an OpenELEC streaming box from scratch is to base it around the Raspberry Pi Zero. There’s one slight complication caused by the fact it only has one USB port, so you’ll need a powered hub to support both keyboard and Wi-Fi adaptor during the initial setup phase. Expect to pay between £30 and £40 for all the kit you need from the likes of The PiHut (www.thepihut.com) or Pimoroni (https://shop.pimoroni.com): a Pi Zero (obviously), case, power adaptor, Wi-Fi adaptor, microSD card, powered USB hub and accessories. If you’re willing to spend a little more, then the Raspberry Pi Model B+ costs £19.20 and the quad-core Pi 2 Model B costs £28 (http://uk-rsonline.com), not including power and Wi-Fi adaptors, microSD card and case. Both come with an Ethernet port for wired networking, plus four USB ports and a full-size HDMI port – choose the Pi 2 if you plan to run a media server. You’ll need a keyboard for the initial configuration of OpenELEC, but once those steps are complete, you’ll be able to control OpenELEC remotely via your web browser or using a free mobile app. You’ll also need somewhere to store your media. If you only have a small (sub-50GB) collection, then splash out on a 64GB microSD card and store it locally; otherwise attach a USB hard drive, or even store your media on a NAS drive and connect over the network. Note that the latter option will slow things down considerably, and you may experience buffering, particularly if connected via Wi-Fi.


OpenELEC Tutorial
SSH access Switch on SSH and you have access to the underlying Linux installation via the terminal (use ssh root@192.168.x.y , substituting 192.168.x.y with your OpenELEC device’s IP address; the password is ‘openelec’). The main purpose of doing this is to configure OpenELEC without having to dive into System > OpenELEC. Start by typing ls -all and hitting Enter – you’ll see the core folders are hidden by default. Basic commands are supported, such as ifconfig for checking your network settings and top to see current CPU and memory usage. There’s not an awful lot you can do here – the idea is to give you access to useful tools only. Network settings in OpenELEC are controlled by the connman daemon – to change these, navigate to /storage/.cache/connman, where you’ll find a lengthy folder name beginning wifi_. Enter this folder using cd wifi* and then type nano settings to gain access. If you’d like to set a static IP address from here, change the following line:
IPv4.method=manual
Then add the following three lines beneath IPv6.privacy=disabled:
IPv4.netmask_prefixlen=24
IPv4.local_address=192.168.x.y
IPv4.gateway=192.168.x.z
Replace 192.168.x.y with your chosen IP address, and 192.168.x.z with your router’s IP address (get this using ifconfig). Save your changes and reboot.

We’ve put both Raspberry Pi builds, plus the generic 64-bit build of OpenELEC 6.0.1, on the LXFDVD to help you get started – alternatively, you can download the latest version from http://openelec.tv/get-openelec. The files are compressed in TAR or GZ format, so you’ll first need to extract them. The simplest way to do this is using your Linux distro’s GUI – in Ubuntu, eg, copy the file to your hard drive, then right-click it and choose ‘Extract Here’.

Build, install and configure

Now connect your microSD card to your PC using a suitable card reader (you can pick one up for under £3 online) and use the dmesg | tail command or the Disks utility to identify its device name. Once done, type the following commands, which assume your drive is sdc and that your image file is in the Downloads folder:
$ umount /dev/sdc1
$ cd Downloads
$ sudo dd if=OpenELEC-RPi.arm-6.0.1.img of=/dev/sdc bs=4M
Use sudo dd if=OpenELEC-RPi2.arm-6.0.1.img of=/dev/sdc bs=4M instead if installing OpenELEC on the Raspberry Pi 2. Wait while the image is written to your microSD card – this may take a while, and there’s no progress bar, so be patient (time for a cup of tea, perhaps?). Once complete, unmount your drive and then eject it. Insert the microSD card into the Pi, connect it up to a monitor and keyboard, and switch it on. You should immediately see a green light flash and the screen come on. The OpenELEC splash screen will appear, at which point it’ll tell you it’s resizing the card – it’s basically creating a data partition on which you can store media locally if you wish. After a second reboot, you’ll eventually find yourself presented with an initial setup wizard for Kodi itself. If you’ve not got a mouse plugged in, use Tab or the cursor keys to navigate between options, and Enter to select them. Start by reviewing the hostname – OpenELEC – and changing it if you’re going to run a media server and the name isn’t obvious enough already. Next, connect to your Wi-Fi network by selecting it from the list and entering your passphrase. You can then add support for remote SSH access as well as Samba (see the SSH Access box, above).

If you’d rather configure OpenELEC via a Terminal, you’ll find a limited number of commands available.

You can now control Kodi remotely if you wish via your web browser: type 192.168.x.y:80 into your browser (substituting 192.168.x.y with your Pi’s IP address). Switch to the Remote tab and you’ll find a handy point-and-click on-screen remote to use – what isn’t so obvious is that your keyboard now controls Kodi too, as if it were plugged into your Pi directly. You’ll also see tabs for movies, TV Shows and music – once you’ve populated your media libraries you’ll be able to browse and set up content to play from here. This approach relies on your PC or laptop being in line of sight of your TV – if that’s not practical, press your tablet or phone into service as a remote control instead. Search the Google Play store for Kore (Android) or the App Store for Kodi Remote (iOS) and you’ll find both apps will easily find your Pi and let you control it via a remote-like interface. By default, OpenELEC uses DHCP to connect to your local network – if your Pi’s local IP address changes, it can be hard to track it down in your web browser for remote configuration. Change this by choosing System > OpenELEC > Connections, selecting your connection and hitting Enter. Choose ‘Edit’ from the list and pick IPv4 to assign a static IP address you’ll be able to use to always access Kodi in future. You can simply

Quick tip By default, you only need a username (‘kodi’) to connect your remote PC or mobile to control Kodi – it’s probably a good idea to add a password too – navigate to System > Settings > Services > Web Server to add a password and change the username.

Kodi uses scrapers to automatically grab artwork and metadata for your media files, based on their filename and folder structure.




stick with the currently assigned address, or pick another. Make sure you select ‘Save’ to enable the change. If all of this sounds like too much bother, check out the box on SSH (SSH Access, p81) for a way to change the underlying configuration files instead.

Set up libraries

The first thing to do is add your media to your library. Kodi supports a wide range of containers and formats, so you should have no problem unless you’ve gone for a particularly obscure format. Check the box (see Add Content to your Library, below) for advice on naming and organising your media so that Kodi can recognise it and display extra information about TV shows and movies. It does this with the help of special ‘scrapers’: tools that extract metadata from online databases – such as movie titles, TV episode synopses and artwork – and pair it with your media files for identification. Where should you store this local content for Kodi to get at it? If your microSD card is large enough – we’d suggest 64GB or greater – then you can store a fair amount of video and music on there. You can transfer files across the local network – open File Manager and opt to browse your network. Your OpenELEC device should show up – double-click the file sharing entry and you’ll see folders for Music, Pictures, TV Shows and Videos – simply copy your files here to add them to your library. Once done, browse to Video or Music and the media files should already be present and accounted for, although at this point they’ve not yet been assigned a scraper to help identify them. Copying files across the network can be slow – you can transfer files directly to the card when it’s mounted in a card reader on your PC, but you’ll need to access File Manager as root to do so – in Ubuntu, eg, typing $ gksudo nautilus and hitting Enter will give you the access you need. A simpler option – if you have a spare USB port on your Pi – is to store your media on an external thumb or hard drive. Just plug the drive into your Pi, browse to Videos or Music and choose the ‘Add…’ option. Click ‘Browse’ and select the top-level folder containing the type of media you’re adding – TV, movies or music.
If you’ve plugged in a USB device, you’ll find it under root/media, while NAS drives are typically found under ‘Windows Network (SMB)’. Once selected, click ‘OK’. The Set Content dialogue box will pop up – use the up and down arrow buttons to select the type of media you’re

Quick tip Want to update OpenELEC to the latest build? First, download the latest update file (in TAR format) from http://openelec.tv/get-openelec, then open File Manager and click Browse Network. Double-click your OpenELEC device and copy the TAR file into the Update folder. Reboot OpenELEC and you’ll find the update will be applied.

The Amber skin is a beautiful alternative to the more functional Confluence default. Sadly, there’s no access to the OpenELEC configuration menu from it.

cataloguing and verify the selected scraper is the one you want to use. Check the content scanning options – the defaults should be fine for most people – and click ‘Settings’ to review advanced options (you may want to switch certification country to the UK for movies, eg). Click ‘OK’ twice and choose ‘Yes’ when prompted to update the library. Once done, you’ll find a new entry – Library – has been added to the media menu on the main screen. This gives you access to your content with filters such as genres, title or year to help navigate larger collections. Now repeat for the other types of media you have. If you want to include multiple folder locations within single libraries, you’ll need to browse to the Files view, then right-click the library name (or select it and press c on the keyboard) to bring up a context menu. Select ‘Edit Source’ to add more locations, and ‘Change Content’ to change the media type and scraper if necessary. The smartest thing to do with any digital media library is host it on a media server, which allows you to easily access it from other devices on your network and – in some cases – over the wider internet. Kodi has UPnP media server capabilities that work brilliantly with other instances of Kodi on your network as well as making your media accessible from other compatible clients. Media servers can be quite demanding, so we don’t recommend using a Pi Zero or Pi Model B+. Instead, set it up on your most powerful PC (or Pi 2) and use OpenELEC to connect to it as a client. As media servers go, Kodi’s is rather basic. If you want an attractive, flexible server then see our Emby guide [Features,

Add content to your library Kodi works best with your locally stored digital media, but for it to recognise your TV shows from your music collection you need to name your media correctly and organise them into the right folders too. Kodi supports the same naming convention as its rival services Emby

[see Features, p32, LXF204] and Plex – use the following table to help you. Need to rename files in a hurry? Then Filebot (www.filebot.net) is your new best friend [see LXF204’s lead feature, p32, for a detailed guide on how to use the renamer].

Type         | Folder structure           | Syntax              | Example
Music        | Music\Artist\Album         | artist – track name | Music\David Bowie\Blackstar\david bowie – lazarus.mp3
Movies       | Movies\Genre\Movie Title   | title (year)        | Movies\Sci-Fi\Star Trek\star trek (2009).mkv
TV shows     | TV\Genre\Show Title\Season | tvshow – s01e01     | TV\Sci-Fi\Fringe\Season 5\fringe - s05e09.mkv
Music videos | Music Videos\Artist        | artist – track name | Music Videos\A-ha\a-ha – velvet.mkv

Name your media files correctly if you want them to appear fully formed in your media library.




p32, LXF204]. Pair this with the Emby for Kodi add-on and you can access your Emby-hosted media without having to add it to your Kodi library. A similar add-on, PleXBMC (http://bit.ly/PleXBMC), exists for users of Plex Media Server too, providing you with an attractive front-end. If you want access to other UPnP servers via Kodi without any bells and whistles, then browse to System > Settings > Services > UPnP/DLNA and select ‘Allow remote control via UPnP’. You can also set up Kodi as a media server from here: select ‘Share my libraries’ and it should be visible to any UPnP client on your network, although you may have to reboot. Performance is obviously going to be an issue on lower-powered devices such as the Pi, and while the Pi 2 is pretty responsive out of the box, you may find the Pi Zero struggles at times. It pays, therefore, to try to optimise your settings to give your Pi as many resources as possible to run smoothly. Start by disabling unneeded services – look under both System > OpenELEC > Services (Samba isn’t needed if you’re not sharing files to and from Kodi, eg) and System > Settings > Services (AirPlay isn’t usually required). Incidentally, while

you’re in System > Settings, click ‘Settings level: Standard’ to switch first to Advanced, then to Expert, to reveal more settings. One bottleneck for Pi devices is dealing with large libraries – give yours a helping hand by first going to Settings > Music > File lists and disabling tag reading. Also go into Settings > Video > Library and disable ‘Download actor thumbnails’. You can also disable ‘Extract thumbnails and video information’ under File Lists, but you’ll lose a lot of eye candy and the thumbnail caching for future use. The default Confluence skin is pretty nippy, although if you suffer from stutter when browsing the home screen, consider disabling the showing of recently added videos and albums: select Settings > Appearance, then click Settings in the right-hand pane under Skin. Switch to ‘Home Window Options’ and de-select both ‘Show recently added…’ options. Speaking of Confluence, if you don’t like the default skin, then try Amber – it’s beautiful to look at, but easy on system resources. You do lose access to the OpenELEC settings when it’s running, but you can always switch back to Confluence temporarily or use SSH for tweaks, if necessary. LXF

Add catch-up TV to your streaming stick

1 Add BBC iPlayer
Browse to Videos > Add-ons. Select ‘Get more...’, then scroll through the list and find ‘iPlayer WWW’. Select this and choose ‘Install’. Once installed you’ll be able to access it through Videos > Add-ons for access to both live and catch-up streams. Configure subtitles and other preferences via System > Settings > Add-ons > iPlayer WWW.

2 Get ITV Player
Navigate to System > File Manager. Select ‘Add Source’ followed by ‘<None>’, enter http://www.xunitytalk.me/xfinity and select ‘Done’ followed by ‘OK’. Hit Esc, then choose System > Settings > Add-ons > Install from ZIP file. Select xfinity from the list of locations, select ‘XunityTalk_Repository.zip’, hit Enter and wait for it to be installed.

3 Finish installation
Now select ‘Install from repository’ followed by .XunityTalk Repository > Video add-ons, scroll down and select ITV. Choose Install and it should quickly download, install and enable itself. Go to Videos > Add-ons to access live streams of six ITV channels, plus the past 30 days of programmes through ITV Player – a big improvement on the standard seven days offered on most platforms.

4 UKTV Play
Follow the instructions for ITV Player to add http://srp.nu as a source, then add the main repository via SuperRePo > isengard > repositories > superepo. Next, choose Install from repository > SuperRepo > Add-on repository > SuperRepo Category Video. Finally, add UKTV Play from inside the SuperRepo Category Video repo to gain access to content from UKTV Play’s free-to-air channels.




Swift

Swift: Querying the GitHub API Try your hand at developing a tool for a real API using Swift for Linux, with Paul Hudson to guide you (and cheat a bit where needed).

Open a terminal window and create a directory called gitcommits. Change into that directory and run touch Package.swift . This file tells Swift how to configure the build of our project; if you just create an empty file, it will use sensible defaults.

Boilerplate code and optionals

Our expert Paul Hudson

is an awardwinning developer and author who’s on a mission to make the world switch to Z shell. Give it up, all you Bash users!

Swift is a Marmite programming language. For the benefit of non-Brits, that means you either love it or hate it, but very few people read about Swift and then say ‘meh’. There are good reasons for such a divisive response: it’s from Apple (boo!) but it’s open source (yay!); it’s type-safe (yay!) but its optionals and closures confuse people (boo!); its syntax changes often (boo!) but it’s fast to compile (yay!); and so it goes on. If you’re reading this, it means you’re keen to give the new language a try, and we hope you followed the tutorial from Mihalis Tsoukalos last issue [Coding Academy, p84, LXF208]. If not, you’ll find it on your LXFDVD and you should read it now: we’re assuming you’ve completed that tutorial before continuing, so we won’t be telling you again how to install Swift or what its basic features are. Instead, we’re going to put your new-found Swift skills to use by building a real project that fetches GitHub commit data for a repository (repo) and displays it interactively for the user. Warning: Swift for Linux is in early alpha. Yes, it’s fun to hack with, and yes, we think it’s good to stay ahead of the curve, but if you’ve ever used Swift on OS X or iOS you’ll notice huge gaps in functionality that are still being filled. More on this later!


Using a text editor of your choice, create a new file called main.swift in the gitcommits directory. It's good practice to put individual classes and structs into their own files, but to keep things simple here we'll put all our code into main.swift. We're going to create a new type called GitCommits, which will handle the fetching and presentation of our commit data. If we were using Swift on iOS this would be a trivial task, but on Linux quite a few things aren't implemented yet, so we're using workarounds instead. Add these two lines at the start of main.swift:

import Foundation
import Glibc

The first line brings in a selection of standard functionality that we'll be drawing on, while the second imports the standard C library and will be used to work around some of Swift's Linux shortcomings.

To begin with, we're going to declare a new struct type called GitCommits. When we create one of these, we'll pass in the name of a repo to read, and have it print out a message so we can see everything is working. Add this to main.swift below its two existing lines:

struct GitCommits {
    init(repo: String) {
        print("Fetching \(repo)…")
    }
}

Swift developers generally prefer structs to classes because they are value types rather than reference types. That means if you copy a struct to a different variable, you end up with two independent values rather than two variables pointing at the same data. You saw Swift's string interpolation last issue, so we won't go over it again.

We want to create an instance of that struct, passing in a repo name. We could hard-code one, eg twostraws/hackingwithswift, but it would be much nicer to let the user enter one. So, beneath the struct that you've just added, put this code:

print("Please enter a GitHub repository to query:")
if let entry = readLine() {


    let commits = GitCommits(repo: entry)
}

The readLine() function, seen above, is built into Swift's standard library; it accepts a line of input from the user and returns a String?. If you remember from last month, the question mark is important: a regular string can have no characters (""), a word ("Hello"), or the complete works of Shakespeare. An optional string, written as String?, can have all those values, but can also hold 'nil', which means 'there is no value'.

It's important to understand the difference between 'an empty string' and 'has no value'. In the case of readLine(), you might get an empty string back, meaning that the user pressed Return without typing anything. You might get a string of letters back, which is what the user typed. But you might also get back the null value 'nil': the user pressed Ctrl+d to signify end of input. Clearly you can't treat 'nil' like a string: you can't measure it; you can't read its letters; and you can't translate it. It's empty memory, not a string. So Swift makes sure you use optionals safely by unwrapping them, which is what we do in the if let line: it means 'get the result of readLine(), and, if it has a value, unwrap it and put it into entry'. So, while readLine() returns String?, entry will be a regular String: the optional has been unwrapped.

Once we know that we have a real string to work with, we create a GitCommits instance using it, which in turn means the print() function is called to show some debugging information. Back in your terminal, run swift build to compile the code, then .build/debug/gitcommits to run the program. Swift automatically named the binary after our directory's name. When the app runs, you can type whatever you want then hit Return, and you should see the same text printed out.

We've been working with the GitHub API a lot recently, because it's one of the few APIs with data that's interesting to developers (who changed what source code and why) as well as being completely open: anyone can query it without having to register for authentication credentials. Using this approach is rate limited to 60 requests per hour, but that's more than enough for our purposes. That being said, we'll also need an external library to handle parsing JSON, so please look for the file TidyJSON.swift on your LXFDVD and copy it into the gitcommits folder. TidyJSON is an open source library that makes JSON parsing easy, so it's perfect for our use here. Just copying it into the folder is enough to add it to our project: Swift automatically pulls in all the source code files it finds.

First-class functions

You've seen strings and integers in this tutorial, but in Swift you can also store functions inside variables. You can even pass functions as parameters and send them back as return values, just as if they were a string or an integer. On your LXFDVD you'll find I've taken this project a step further by making the present() method accept an optional function: if the function is provided, it's used to filter the filteredCommits array so that it shows only some values; otherwise, the full array is shown. The syntax for this is a bit overwhelming at first:

func present(filter: ((Commit) -> Bool)?) {

That means 'the filter parameter will either be nil, or it will be a function that accepts a Commit instance and returns true or false'. When this is put through Swift's filter() method, any commit that returns false will be removed from the resulting array. Using this technique, it becomes no work at all to add more cases to the mainLoop() method to handle various user options: we just catch each key press, then call present() with a different filter.

Swift knows each filter function is designed to work on one Commit instance, so the code to filter the array uses two neat shortcuts: 1) inside the filter function, the commit in question is referred to as $0, and 2) if you write only one line of code, its value is automatically assumed to be the return value for the filter, eg:

$0.email.containsString("@apple")

That will return true if the email address for the commit contains "@apple" anywhere.
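For readers following along in another language, the optional-filter idea described above can be sketched in Python. The present() function and the sample commits below are our own illustrative names, not part of the Swift project:

```python
# Hypothetical Python analogue of the optional-filter idea: present() takes a
# predicate (or None) and returns either every commit or only the matching ones.
def present(commits, predicate=None):
    """Return the commits to display; filter only when a predicate is given."""
    if predicate is None:
        return list(commits)
    return [c for c in commits if predicate(c)]

commits = [
    {"name": "Alice", "email": "alice@apple.com", "message": "Fix parser"},
    {"name": "Bob", "email": "bob@example.org", "message": "Merge pull request #1"},
]

everyone = present(commits)
apple_only = present(commits, predicate=lambda c: "@apple" in c["email"])
```

The lambda plays the same role as the one-line Swift filter $0.email.containsString("@apple"): pass a different predicate and you get a different view of the same data.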

We're going to parse our JSON into individual instances of a new struct, called Commit. This will store the name and email address of the committer, plus the message they attached and the date. Add this to main.swift, before the existing GitCommits struct:

struct Commit {
    var name: String
    var email: String
    var message: String
    var date: String
}

With that new struct defined, we can update the GitCommits struct so that it stores an array of Commits. Add this just before the init() method:

var commits = [Commit]()
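As an aside for comparison (not part of the tutorial's Swift code), a Python dataclass gives a similar plain record type, with a generated initialiser that takes the fields in order:

```python
from dataclasses import dataclass, replace

# Illustrative Python counterpart of the Swift Commit struct: the generated
# __init__ plays a role similar to Swift's memberwise initialiser.
@dataclass
class Commit:
    name: str
    email: str
    message: str
    date: str

first = Commit("Alice", "alice@example.com", "Initial commit", "2016-04-01T10:00:00Z")
# replace() returns an independent copy, loosely mirroring the
# value-type (copy-on-assign) behaviour of Swift structs.
second = replace(first, message="Fix typo")
```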

Fetching JSON

Right now, the init() method does only one thing, which is to print out a message repeating the name of the repo to fetch. We need to update it to do much more:
1 Clean out the existing commits array.
2 Fetch the commits using the GitHub API.
3 Convert the JSON into Commit objects.
The first update is trivial, the second is tricky because of Swift Linux's constraints, and the third is harder because you need to learn a few things first. So, let's tackle them in order: leaving the existing print() call inside init(), place this below it:

commits.removeAll()

Swift has delightfully self-documenting APIs, and the meaning of this one should be apparent: remove all the items

Our finished program runs on the command line and lets users view and filter GitHub commits. See the LXFDVD for extra features!

Never miss another issue Head to http://bit.ly/LinuxFormat


Our project directory must have a Package.swift file inside it, but if you don't want to do any work then just create an empty file to have Swift use smart defaults.

Quick tip
Git is a version control system, which means it tracks changes to source code when a team works together. Each time someone makes a group of changes, they write a few words explaining what changed, then push those changes to GitHub in one bundle. That bundle is one 'commit'.

from the commits array. That's the first item done!
We said item two is hard because of Swift Linux constraints. As we said earlier, Swift for Linux is in an early alpha stage, and this next step is made hard because some functionality is missing. Specifically, Swift's strings have a built-in method for loading data from a remote server, which would be perfect for fetching GitHub API data. Sadly, that isn't available on Linux yet, and neither are several easy alternatives. So, instead we're going to cheat: we're going to use the standard C library's system() function to call the curl command line program to fetch text for us. We can then load that into a Swift string, because that API is implemented. Yes, it's a hack, but it's easy to do and it solves our problem. Hurrah! Add these two lines below the one you just wrote:

let urlString = "https://api.github.com/repos/\(repo)/commits"
system("curl -s \(urlString) > commits")

Both lines use string interpolation, and the second executes the resulting command on the command line. We should stress that this is really not an ideal solution by any means, because it lets users execute any command they want via the system() function. Still, if they can run your Swift program, they could run anything else too, so it's not that big a problem. If you were wondering, the -s flag to curl means 'silent mode', which stops it from printing progress to the screen.

With the commit data downloaded, we need to load it into a string then hand it to TidyJSON to parse. Swift has a simple and efficient way of doing the former, but TidyJSON has its own quirky way of doing the latter, so you'll need to learn both. Add these five lines to your init() method below the previous ones:

if let contents = try? String(contentsOfFile: "commits", encoding: NSUTF8StringEncoding) {
    if let json = JSON.parse(contents).0 {
        // more code to go here
    }
}

The first line is how Swift loads the contents of a text file. There would be a slightly easier way of doing this if Swift on Linux weren't quite so skeletal, but as things stand we need to explicitly tell Swift that the file has UTF-8 string encoding. The try? part means 'this call might fail, eg if the file doesn't exist, and if it fails just send back nil rather than a string'. We can then use the if/let syntax from earlier to make sure that we actually have a value before continuing.

The second line is how TidyJSON handles errors. Rather than using 'try', TidyJSON returns two values from one method call: an optional JSON object representing the parsed string, and an optional string representing the error message if there was one. We don't have time to care about the error message here, so instead we're just going to read the first value, represented as 0 in the return value from parse(): putting that through if/let means we can be sure we get valid objects back from parsing our commit string.

Creating commit instances

We've downloaded the API data from GitHub, loaded it into a string, and parsed it using TidyJSON. This means it's now ready for us to convert into Commit instances, but to do that you'll need to learn three new things: fast enumeration, nil coalescing and memberwise initialisation.

Fast enumeration exists in many languages and it's a way of looping through an array. It usually means 'loop over this array, placing each item in a variable so I can manipulate it'. But in TidyJSON it's a bit more cunning, because you get back two things: a key and a value. This is because JSON uses dictionaries as often as it uses arrays. Our JSON will be an array, so we can tell Swift to ignore the key by giving it the special variable name '_', ie an underscore.

Nil coalescing sounds hard, but is so easy and useful you'll be relying on it a great deal. As you've seen, optional values are common in Swift: 'this integer might have a person's age in there, or it might have nil because we don't know their age'. But to use these values safely you must unwrap them using if/let, which can be annoying. So, Swift has a special operator, ??, that lets you specify a default value to use if another value is nil. That means we can write code like this:

let age = getAge() ?? 21

That means 'run the getAge() function and put its result into the age constant, but if that function returns nil rather than an integer, use 21 instead'. This means the age constant will always contain an Int rather than an Int?. That is, it will never be optional, because either getAge() returns a valid number or the default is used.

The nil coalescing operator is very helpful when parsing JSON, because TidyJSON returns almost everything as an optional. This isn't TidyJSON being contrary, it's just that the very nature of processing text is inherently unsafe: a value that is there today might disappear or be renamed tomorrow.
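If it helps to see the same idea outside Swift, Python has no ?? operator, but dict.get() with a default plays a similar role when digging through parsed JSON. This sketch uses our own sample data, with the same fallback values the tutorial uses:

```python
# Illustrative comparison: Swift's `value ?? default` supplies a fallback when
# a value is nil; dict.get(key, default) does the same for a missing JSON key.
commit = {"committer": {"name": "Alice"}}  # note: no "email" key present

name = commit.get("committer", {}).get("name", "Anonymous")
email = commit.get("committer", {}).get("email", "tcook@apple.com")
```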
Remember, Swift wants your code to be crash-proof, so every time we ask for a TidyJSON value we'll be using nil coalescing to provide a meaningful default if that value can't be found.

Finally, memberwise initialisation is a helpful feature of Swift structs: we defined four properties for our struct (name, email, message and date), and Swift will automatically let us create Commit instances using those four values.

Enough talk: time for some code. Open the URL https://api.github.com/repos/apple/swift/commits in your web browser so you can see the structure of the GitHub data, then add this code to replace the 'more code to go here' comment in the init() method:

for (_, value) in json {
    let name = value["commit"]["committer"]["name"].string ?? "Anonymous"
    let email = value["commit"]["committer"]["email"].string ?? "tcook@apple.com"
    let date = value["commit"]["committer"]["date"].string ??



        "1984-01-24T10:00:00Z"
    var message = value["commit"]["message"].string ?? "I made Linux Swift better."
    message = message.stringByReplacingOccurrencesOfString("\n", withString: " ")
    let commit = Commit(name: name, email: email, message: message, date: date)
    commits.append(commit)
}

As you can see, TidyJSON lets us dig deep into individual commits: we look for commit > committer > name in the JSON structure, then pull out its string value. If any of those don't exist for whatever reason, TidyJSON will send 'nil' back, which is where our nil coalescing operator takes over and returns a sensible default: Anonymous, tcook@apple.com, and so on. To make things look nicer, we've used the long-but-self-explanatory method stringByReplacingOccurrencesOfString() to replace line breaks with spaces in commit messages. You're welcome to remove this if you want, but doing so makes the output a lot harder to read!

How Swift is evolving
As you've seen in this tutorial, Swift for Linux is currently lacking some basic features that are taken for granted on iOS and OS X. We've worked around them here, but Apple is working hard to fill these holes as quickly as possible. But don't get too comfortable: even as those holes are filled, Apple is already planning to introduce a series of API breakages to help make its many thousands of APIs more 'Swifty'. Hopefully this won't affect the code we've written here, but we'll have to wait until June 2016 to know for sure.

Making a simple UI

At this point, our commits array is full of interesting things, but the program still doesn't do very much because nothing is visible to the end user. We're going to fix that now by showing all the commits to the user as a simple terminal listing, then make it better later. The simplest version of our UI needs to print all the commits, display some options to the user, then ask them what to do. We'll be using readLine() to read and process user commands, starting with a couple of commands.

We're going to create two new methods in a moment, but first you need to learn something new: the enumerate() method. This is a specialised form of fast enumeration, and is particularly pleasing if you've tried to accomplish the same thing in other languages: it loops over each item in an array like regular fast enumeration, but also gives you the index of that item in the array, like C-style array loops. Add these two new methods below the init() method:

func present() {
    let filteredCommits: [Commit]
    filteredCommits = commits

    for (index, commit) in filteredCommits.enumerate() {
        print("\(index + 1) \(commit.name) <\(commit.email)> at \(commit.date):")
        print("\t\(commit.message)\n")
    }

    showOptions()
}

func showOptions() {
    print("OPTIONS: '1' to show everything, '2' to show only Apple engineers, '3' to show only pull request merges, 'x' to quit.")
}

The showOptions() method is trivial, because it just prints instructions to the screen. The present() method, on the other hand, does a few curious things: you'll see us using enumerate() so that we can print a numbered list of commits to the screen. But the commits array that gets used is actually a copy called filteredCommits. Yes, we aren't actually filtering

anything right now: that's a placeholder for code you'll find on the LXFDVD.

With those two new methods in place, we now need to add a third method that handles reading user input until the user asks to quit. This needs to call present() when it's first called, then enter an infinite loop (a loop that will never terminate until we ask it to stop) so that it can continually read user input and process it. When the user types 1 we'll show our list of commits. When they type x we'll exit. When they type anything else we will, for now, just reprint the options. Note that we're going to convert the user's input to lower case to avoid problems. Add this third method below the previous two:

func mainLoop() {
    present()

    mainLoop: while true {
        if let input = readLine() {
            switch input.lowercaseString {
            case "1":
                present()
            case "x":
                break mainLoop
            default:
                showOptions()
            }
        }
    }
}

Quick tip
Get the code online by browsing to linuxformat.com or download the code direct from http://bit.ly/LXF209swift

You'll notice we snuck in an extra piece of learning for you: the infinite loop (while true) has a label before it, called mainLoop. When the user types x to quit, we can run break mainLoop to have Swift exit not only the switch/case block but also the while true loop, all in one line of code. Note that Swift doesn't have the implicit case fallthrough that plagues languages such as C and PHP – good riddance!

To make all this new code work, you just need to modify the very first code we wrote like so:

if let entry = readLine() {
    let commits = GitCommits(repo: entry)
    commits.mainLoop()
}

You should be able to build and run the program now to see it in action – try using apple/swift as the repo name to see some example data. We covered a lot here, but there's so much more we want to show you. Sadly, we're out of space, so instead we're going to cheat: check out the box (First-class functions, p85) to see something that will either blow your mind or make you a Swift convert. Like we said, very few people look at Swift and say 'meh', so we hope you find it an exciting, useful, modern and above all safe programming language to work with. LXF



MongoDB

MongoDB: Build a blog

Mihalis Tsoukalos adds the Bottle Python framework to MongoDB.

Our expert Mihalis Tsoukalos

is a database and Unix admin, mathematician and programmer. Surprisingly, he confesses to enjoying learning new things.

This is an extract of the generated output from the sampleData.py script that prints the values of the ‘x’ key from the documents of the ‘sampleData’ collection.

This tutorial will use the MongoDB knowledge you already have and combine it with Bottle, a Python framework, to create a blog site that stores its data in a MongoDB database. The Bottle framework will mainly be responsible for the user interface of the website as well as its logic. We're assuming that you have both MongoDB and PyMongo already installed on your Linux distro. If you don't feel comfortable with MongoDB and PyMongo (the Python MongoDB driver) we'd recommend revisiting the past two issues [Coding Academy, p84, LXF207 and Coding Academy, p88, LXF208] to learn more about using the driver and MongoDB administration, respectively. At the end of this tutorial, you'll have a beautiful blog site written in Python that uses MongoDB to store its data and Bottle to display it.

The Bottle framework

Bottle is a fast, simple and lightweight WSGI micro web-framework written in Python. The whole framework is a single file module that has no dependencies other than Python's Standard Library. At the time of writing, the latest stable Bottle version is 0.12.9 and you can install it by executing the following command as root:

$ pip install bottle

On a Debian system this will install Bottle at /usr/local/lib/python2.7/dist-packages/bottle.py and /usr/local/bin/bottle.py. However, as Bottle doesn't depend on any external Python libraries, you can simply download bottle.py and put it inside the directory that you use for development:

$ wget http://bottlepy.org/bottle.py
$ wc bottle.py
4107 15384 156332 bottle.py

The "Hello World!" program in Bottle looks like this:

from bottle import route, run, template

@route('/hello/<user>')
def index(user):
    return template('<h2>Hello World from {{user}}!</h2>', user=user)

run(host='localhost', port=1234)

The run() method that's imported can be used to run the application in a development server, which is the best way to test your application as you write it. The route() method tells the application about the supported URL requests as well as how to handle them using Python functions. Routing in Bottle applications is implemented by calling a single Python function for each supported URL. This isn't a dumb Hello World program, because it also displays the name of the user, which is included in the URL. As you can see, you define the user variable and then pass it to the template() function. You can test this simple



web page by executing the Python script as follows:

$ python hw.py
Bottle v0.12.8 server starting up (using WSGIRefServer())...
Listening on http://localhost:1234/
Hit Ctrl-C to quit.
127.0.0.1 - - [25/Jan/2016 16:28:26] "GET /hello/tsoukalos HTTP/1.1" 200 36

You should now go to your favourite web browser and point it at http://localhost:1234/hello/tsoukalos to see the web page you've just created! Please note that trying to get the http://localhost:1234/hello/tsoukalos/ URL (ending with the '/' character) will fail, as that route isn't configured. As you might have guessed, the host name and the port number of the simple web server are defined inside hw.py with the help of the following code:

run(host='localhost', port=1234)

Should you wish to print debugging information, you should turn on the debug mode as follows:

run(host='localhost', port=1234, debug=True)

Basic CRUD commands
CRUD stands for Create, Read, Update and Delete, which are the basic operations that can be performed on any database. In order to follow this MongoDB tutorial, it would be very helpful to bear the basic CRUD commands in mind so that you can check what the Python scripts do or don't do.
You can get one random document from a collection using findOne():

> db.sampleData.findOne()

In the code below, the first command returns all documents from a collection, whereas the second returns all documents that have the n key set to the value 324:

> db.sampleData.find()
> db.sampleData.find({n: 324})

The main difference you'll discover between find() and findOne() is that the former can return multiple documents with the help of a cursor, whereas the latter will randomly return a single BSON document. The next example sorts the output of find() based on the x field in descending order:

> db.sampleData.find().sort( { x: -1 } )

You can also insert a document into MongoDB as follows:

> db.sampleData.insert( { "x": 23, "y": 13 } )

Similarly, you can delete a document that matches certain criteria in the following way:

> db.sampleData.remove({"y": 13})
WriteResult({ "nRemoved" : 1 })

Please bear in mind that the safest way to identify and delete a document is by using its _id field. You can also update an existing document using the following command:

> db.sampleData.update({"x": 123}, {$set: { "z": 123}})
WriteResult({ "nMatched" : 1, "nUpserted" : 0, "nModified" : 1 })

A useful way of updating multiple documents all at once is to set the multi option to true ({multi:true}) while using the update() function, in the following way:

> db.sampleData.update({"x": 23}, {$set: { "anotherKey": 54321}}, {multi:true})

Connecting Bottle with MongoDB

The connection between Bottle and MongoDB is made possible by the Python MongoDB driver. The following example (sampleData.py) uses Bottle to read a MongoDB server and display the values of the 'x' key that can be found in the documents of the 'sampleData' collection of the LXF database. The MongoDB server used in the Python script is located on the same machine, and the script uses the default MongoDB port, which is 27017. The Python code of sampleData.py is the following:

import pymongo
from pymongo import MongoClient
from bottle import route, run, template

# Get data from MongoDB
myData = []
client = MongoClient('localhost', 27017)
db = client.LXF
cursor = db.sampleData.find({}, {'_id':0, 'y':0})
for myDoc in cursor:
    myData.append(myDoc['x'])

@route('/')
def rootDirectory():
    return template('listContents', data=myData)

run(host='localhost', port=1234)

The myDoc['x'] code gets the actual value of the 'x' key, which is then put in the myData list – this list is then passed as a parameter to the template() function. The find() command used doesn't return the values of the 'y' and '_id' fields, to save CPU time, which is very handy when you have lots of data in your MongoDB database.

As you can see, the sampleData.py script needs one external file in order to run – this is how Bottle organises its projects. Although it might be possible to include the code of an external Bottle file inside the main Python script, it's better to save such code separately, especially when you're dealing with large projects that support multiple URLs. The use of the 'listContents' template, with the help of the template() method, simplifies the code of sampleData.py – the small price you pay for this is that you have to edit multiple files. The contents of the listContents.tpl file (the .tpl extension is used automatically) are the following:

<!DOCTYPE html>
<html><head>
<title>LXF.sampleData: Contents of 'x' key</title>
</head>
<body>
<ul>
%for myX in data:
<li>{{myX}}</li>
%end
</ul>
</body></html>

Although template files mainly contain HTML code, they also have access to variables, such as data, which makes them able to generate dynamic content. The contents of the data variable are accessed using a for loop that's coded in a pretty straightforward way, and the data is formatted as an HTML list. Before running sampleData.py, as you did with hw.py, make sure that the sampleData collection of the LXF database has data in it. If not, run the following code from the MongoDB shell to insert some data:

> use LXF
switched to db LXF

Quick tip
You can find more information about the Bottle framework at http://bottlepy.org. For more on the Python MongoDB driver go to https://docs.mongodb.org/ecosystem/drivers/python.




Quick tip
Bottle isn't the only Python web framework. Other Python frameworks that you can consider are Django (www.djangoproject.com) and Flask (http://flask.pocoo.org).

> for (var i=0; i<100; i++) { db.sampleData.insert({x:i, y:i*i}); }
WriteResult({ "nInserted" : 1 })
> db.sampleData.count();
100

If you wish to delete the entire sampleData collection to start with an empty one, you should execute:

> db.sampleData.drop();
true

As saving data in a database takes disk space, it's good to know how to delete entire collections and databases in MongoDB. The following command deletes the entire LXF database along with its data files:

> use LXF
switched to db LXF
> db.runCommand( { dropDatabase: 1 } )
{ "dropped" : "myData", "ok" : 1 }

Despite the various similarities between sampleData.py and hw.py, the former is a more advanced program because it connects to a MongoDB database to get its data. If you can successfully execute sampleData.py and get its output in your favourite web browser, you're good to continue with the rest of the tutorial; otherwise, you should try to correct any errors in your Python code or your MongoDB configuration.
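For comparison, the same 100 sample documents the shell loop creates can be generated in Python before being handed to PyMongo. This is a sketch only; no database server is touched here:

```python
# Build the same 100 sample documents the mongo shell loop inserts; with a
# running server these could be passed to db.sampleData.insert_many(docs).
docs = [{"x": i, "y": i * i} for i in range(100)]
```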

Handlers, Views, Forms and Cookies

Bottle follows the MVC (Model-View-Controller) software pattern to separate the different functions of the user interface. The Model is responsible for storing, querying and updating data, the View determines how the information will be displayed onscreen, and the Controller holds the logic of the application. Using Python code, Bottle breaks its sites into Handlers, Views and Forms.

A URL handler allows you to specify web pages and bind them to Python code. You can have as many URL handlers as you want; however, if you believe you're defining too many of them, you should rethink the design of your web application. As you saw in hw.py, Bottle also supports dynamic routes and regular expressions in its routes. URL handlers are defined with the @bottle.route() decorator.

A View is what the user sees. This is code concerned with displaying information, usually dynamic content. The View shouldn't contain any code that belongs in the Controller, and Views are implemented using Bottle templates. By default, Bottle searches in ./ and ./views/ for templates. A Form is a way of getting input from the user. Bottle also supports cookies, which can be very useful. Now that you know the basics of Bottle and the structure of its projects, it's time to start building the blog site.
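Stripped of the framework, a URL handler table is just a mapping from paths to functions. This toy sketch (our own names, not Bottle's internals) models the registration-and-dispatch idea behind @route():

```python
# Toy model of a routing table: the route() decorator registers a handler
# function against a URL path, and dispatch() looks the handler up per request.
routes = {}

def route(path):
    def register(handler):
        routes[path] = handler
        return handler
    return register

@route('/hello')
def hello():
    return '<h1>Hi to you too!</h1>'

def dispatch(path):
    handler = routes.get(path)
    return handler() if handler else '404 Not Found'
```

Calling dispatch('/hello') runs the registered handler, while an unknown path falls through to the 404 case, just as Bottle serves its default error page for unmatched URLs.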

Being able to create new blog posts is essential for a blog site. This image shows the web page that allows you to write new blog posts.

The structure of the blog site will be created by executing the following commands: $ mkdir blog $ cd blog $ mkdir views $ touch blog.py $ touch views/menu.tpl $ touch views/listPosts.tpl $ touch views/write.tpl $ touch views/showsinglepost.tpl So, the root directory that contains all project files and directories is named blog. The main file which contains the URL rules is called blog.py. Its Python code is the following: import pymongo from pymongo import MongoClient from bottle import route, run, template, request, get, post, redirect import datetime from bson import ObjectId @route('/') def rootDirectory(): return template('menu') @route('/list/') @route('/list') def listAllPosts(): # Get data from MongoDB myData = [] client = MongoClient('localhost’, 27017) db = client.LXF cursor = db.blogposts.find() for myDoc in cursor: myData.append(myDoc) return template('listPosts’, data=myData) @get('/write/') @get('/write') def writeNewPost(): return template('write’, dict(subject="”, body="”, tags="")) @post('/presentnewpost/') @post('/presentnewpost') def presentNewPost(): # Extract the data from the write.tpl FORM title = request.forms.get("subject") post = request.forms.get("body") if title == “": title = “There is no title!” if post == “": post = “This is the text of the post.” postDocument = { “title": title, “text": post, “date": datetime.datetime.utcnow()} client = MongoClient('localhost’, 27017) db = client.LXF # Write them to the database db.blogposts.insert_one(postDocument) # Get the postID which is the _id field of the document postid = str(postDocument['_id'])

We’re #1 for Linux! Subscribe and save at http://bit.ly/LinuxFormat 90 LXF209 April 2016

www.linuxformat.com


MongoDB

Creating a static site using Bottle
Although the Bottle framework was developed with dynamic sites in mind, it can also be used for creating static sites. The following Python code (static.py) demonstrates this:
from bottle import route, run

@route('/hello/')
@route('/hello')
def hello():
    return "<h1>Hi to you too!</h1>"

run(host='localhost', port=1234, debug=True)
The code above defines the handlers for two specific URLs (‘/hello’ and ‘/hello/’), which are

the only URLs that are supported by static.py. The actual HTML code is embedded inside the PY file, so you don’t have to look into too many files when you want to make changes to your site. Additionally, static.py shows that you can support multiple routes with one function, as it does with ‘/hello’ and ‘/hello/’, which is very handy. If you try to access a URL that’s not supported by static.py, you’ll get the usual 404 error. Should you wish to print your own error message, use the following route, which catches everything that can’t be matched by the previous rules:

    # print(postid)
    # Present the document
    redirect('/post/' + postid)

# Prints an entire post
@get("/post/<postid>")
def showPost(postid):
    client = MongoClient('localhost', 27017)
    db = client.LXF
    query = {'_id': ObjectId(postid)}
    post = db.blogposts.find_one(query)
    return template('showsinglepost', post=post)
As you can see, blog.py defines various URL handlers. The menu.tpl template file displays the main page of the blog site. The listPosts.tpl template briefly lists all blog posts and allows you to select and view any one of them. The write.tpl template allows you to write a new blog post using a form – you can customise it any way you want. The showsinglepost.tpl view is automatically presented after you’ve created a new post with write.tpl. As you can guess, showsinglepost.tpl is used for showing single posts in general. Before continuing with Bottle, it’s time to talk a little about the schema of the MongoDB database that will hold the data. As you might already know, MongoDB is schemaless, which means that two documents belonging to the same collection can have completely different sets of keys, with the exception of the _id key, which is mandatory. This is very important when you are writing code for MongoDB, because if you misspell the name of a collection in your code you will get no error messages; instead, a new collection will be created! The same thing will happen if you misspell the name of a key in a document.
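Because you may not have MongoDB to hand while reading, here is a plain-Python sketch — a dict standing in for a real database, no pymongo required — of the two schemaless behaviours just described: documents in one collection can carry different keys, and a misspelt collection name silently creates a new collection instead of raising an error.

```python
# Plain-Python sketch of MongoDB's schemaless behaviour. A defaultdict
# stands in for the database: documents in one collection may have
# different sets of keys, and a misspelt collection name raises no
# error - it silently creates a brand-new, empty collection.
from collections import defaultdict

db = defaultdict(list)  # collection name -> list of documents

# Two documents in the same collection with different sets of keys:
db['blogposts'].append({'_id': 1, 'title': 'First post', 'text': 'Hello'})
db['blogposts'].append({'_id': 2, 'title': 'Second post', 'tags': ['linux']})

# A typo ('blogpost' instead of 'blogposts') creates a new collection:
db['blogpost'].append({'_id': 3, 'title': 'Lost post'})
```

Real MongoDB behaves the same way at the collection and key level, which is why double-checking names in your code matters so much.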

Creating blog posts

As usual, a template file will deal with the writing of new blog posts. The Bottle template file (write.tpl) for creating new blog posts is as follows:
<!DOCTYPE html>
<html><head>
<title>Write a Blog Post.</title>
</head>
<body>
<h2>If you do not want to write a new post select one of the following:</h2>
<ul>
<li><a href="/">Home page</a></li>
<li><a href="/list">List all posts</a></li>
<li><a href="/login">User login (not implemented)</a></li>

@route('<mypath:path>')
def doSomething(mypath):
    print(mypath)
    return "Your URL is %s but it does not exist!" % mypath
The main advantage of this approach is that you can easily convert your website from a static to a dynamic one, simply by changing the way that you deal with the URL handlers. We’ve also found that Bottle is a handy and quite sophisticated way to quickly create a prototype for a dynamic site before you start the actual implementation process.

<li><a href="/newuser">User Signup (not implemented)</a></li>
</ul>
<form action="/presentnewpost" method="POST">
<h2>Subject:</h2>
<input type="text" name="subject" size="120" value="{{subject}}"><br>
<h2>Blog Text:</h2>
<textarea name="body" cols="120" rows="20">{{body}}</textarea><br><p>
<input type="submit" value="Submit">
</form>
</body></html>
Each blog post is saved in the MongoDB database. After pressing the Submit button, your data is automatically passed to the /presentnewpost URL, where the blog post is saved. Finally, you’re redirected to the /post/postid URL to see the post. The postid of each post is the string value of its _id key, which is unique within the collection. The Bottle template for getting a list of all available blog posts is:
<!DOCTYPE html>
<html><head>
<title>List Blog Posts Page!</title>
</head>
<body>
<h1>This page shows all existing blog posts.</h1>
<ul>
%for myX in data:
<% link = str(myX['_id']) %>
<li><a href="/post/{{link}}">{{myX['title']}}</a></li>
%end
</ul>
</body></html>
The following trick allows you to execute Python code inside a Bottle template file:
<% link = str(myX['_id']) %>
This code creates the correct link for each blog post. It would be interesting to see how your data is stored in the MongoDB collections. As MongoDB is schemaless, you can add new keys to the blogposts collection at any time, without any downtime. LXF

www.techradar.com/pro



Got a question about open source? Whatever your level, email it to lxf.answers@futurenet.com for a solution.

This month we answer questions on:
1 First-time installation
2 Midnight Commander problems
3 Gaming on Linux with Wine
4 Setting up a solid state drive
5 Dealing with a rebooting router on a remote site
+ Linux on the Pine A64+

You can try most distros by booting from the DVD and selecting the live environment. Once you are happy with it, use the Install option to add it to your computer’s hard drive.

1 First steps

I struggle to understand the instructions to install Ubuntu 15.10 from the disc. Admittedly, I lack any basic knowledge on computing never mind understanding this written foreign language and the way it’s presented. Young friends suggested I change from Microsoft Windows 7 and I’m keen but they don’t offer any hints. I opened the index.html to find out and was left perplexed. Learn anything? Nope. Conclusion: I must be stupid. I went on to the web for help, I have a PDF of Linux for

Dummies which I have read and read, and I am doubting my intelligence and my sanity. Please, please help in ultra-simple terms! It can’t be this difficult or can it be? Which programmes do I need to load to see DVDs, listen to CDs and play games etc? Jim McRobert

Enter our competition Linux Format is proud to produce the biggest and best magazine that we can. A rough word count of LXF193 showed it had 55,242 words. That’s a few thousand more than Animal Farm and Kafka’s The Metamorphosis combined, but with way more Linux, coding and free software (but hopefully fewer bugs). That’s as much as the competition, and as for the best, well… that’s a subjective claim, but we do sell


Win!

way more copies than any other Linux mag in the UK. As we like giving things to our readers, each issue the Star Question will win a copy of one of our amazing Guru Guides or Made Simple books – discover the full range at: http://bit.ly/LXFspecials. For a chance to win, email a question to lxf.answers@futurenet.com, or post it at www.linuxformat.com/forums to seek help from our very lively community. See page 94 for our star question.


Starting with a different operating system (OS) is like moving to a foreign country: the language, customs and almost everything else are different. It’s made even more difficult because, before using a different OS, you have to install it. As most computers come with Windows already installed, this is something most users haven’t done before, but it’s not that difficult. First of all, you don’t need to install Linux to give it a quick try. Most Linux distros (a distro, or distribution, is simply a Linux OS bundled with useful applications) have a ‘live’ mode. You need to boot your PC from the DVD; to do this, hold down the key that pops up the boot menu when you power on. This is usually Esc or one of the F keys, often F11/F12, or F2 on laptops. If your PC’s manual doesn’t help, try each one in turn. Once you’ve booted from the DVD you’ll see a menu; select the distro you want to try and then pick the option to try it without installing (those are Ubuntu’s words, other distros may simply say Live). This loads the operating system into memory, running directly from the DVD, so you can experiment with it. You won’t be able to save files or settings and it will be slower, because it’s running from a DVD, but it gives you an idea. When you are ready to install it to your computer, there’s either an install option at the boot menu or an install icon on the desktop of the live system. This starts the distro’s installer, which will guide you through the process. You will be asked some simple questions and a couple where the answers are important, like creating a user and choosing a secure


Answers

Terminals and superusers
We often give a solution as commands to type in a terminal. While it is usually possible to do the same with a distro’s graphical tools, the differences between these mean that such solutions are very specific. The terminal commands are more flexible and, most importantly, can be used with all distributions. System configuration commands often have to be run as the superuser, often called root. There are two main ways of doing this, depending on your distro. Many, especially Ubuntu and its derivatives, prefix the command with sudo, which asks for the user password and sets up root privileges for the duration of the command only. Other distros use su, which requires the root password and gives full root access until you type logout. If your distro uses su, run this once and then run any given commands without the preceding sudo.

password. The most important question is where you want to install Linux. You normally have the option to wipe the whole disk and install, which will destroy your Windows system and any saved files you have, or to install alongside Windows. The latter option is probably the one you want: it will resize your Windows C: drive, install Linux alongside it and set up a menu to pick which you want to use each time you boot the computer. Before doing this, it’s a good idea to boot into Windows and defrag your C: drive; it makes the resizing process faster and easier. Extra software is installed via a software centre; you don’t need to download EXE files from random websites. However, you’ll find that a distro like Ubuntu already contains most of the software you need pre-installed. Good luck in your adventures with a new OS, and if you need any further help, pop along to our web forums at www.linuxformat.com.

2 Midnight Commander

I run a number of SME servers. Most are now SME 9.1, which is based on CentOS 6. They have Midnight Commander 4.7.0.2 (MC), which

seems to have a problem with VFS handling. Specifically, I use MC to pull individual files out of backups, which are stored in gzipped tar files. I’m having problems with files from fairly large backup archives with version 4.7.0.2. If I copy the archive to another machine and use MC 4.8.15, I have no problems with the same files. I’m wondering if I can get a CentOS RPM of the latest MC. If not, I’m going to have to install CentOS 6.x, install the dev tools and learn how to build RPM packages. Paul

You should be able to find an RPM of a more recent MC for Red Hat at www.rpmfind.net, but don’t be too quick to assume that the success on the second machine is down to the version of MC. There may be other differences between the two systems that have a bearing on this. When you select an archive in MC, it’s unpacked to a temporary directory, then you are shown the contents of that directory. By default, the temporary directory is in /tmp. If you have insufficient space in /tmp to unpack the entire archive, viewing it in MC will fail. So the difference may simply be the amount of free space in /tmp. You can test this by setting the TMPDIR environment variable to point to somewhere with plenty of space, either in your profile or temporarily on the command line when starting MC, like this:
$ TMPDIR=/somewhere/big mc
If this doesn’t help and you really do need a later version, you don’t need to learn to build an RPM to do so; just build MC from source on the machine you need to run it on. You only need to build an RPM if you want to distribute the package. First, make sure the necessary development tools are installed:
$ yum groupinstall "Development Tools"
or
$ yum install gcc gcc-c++ make automake
Then download and unpack the MC source code, enter the directory created and run:
$ ./configure && make && make install

This installs into /usr/local, so you should use Yum to uninstall the older version (which Yum installed to /usr) to avoid any conflicts.
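Going back to the /tmp theory: before rebuilding anything, you can estimate whether an archive would even fit in the temporary directory. A rough Python sketch — the expansion ratio of 4 is an arbitrary guess at how much a gzipped tar might grow, so adjust it for your data:

```python
# Rough check of whether an archive is likely to fit in the temporary
# directory that mc unpacks into. The ratio of 4 is an assumed guess at
# how much a gzipped tar may expand when unpacked.
import os
import shutil

def fits_in_tmpdir(archive_path, tmpdir='/tmp', ratio=4):
    # Compare the free space in tmpdir against the archive size
    # multiplied by the assumed expansion ratio.
    free = shutil.disk_usage(tmpdir).free
    needed = os.path.getsize(archive_path) * ratio
    return needed <= free
```

If this returns False for your backup archive, point TMPDIR somewhere roomier before starting MC, as described above.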

3 Vintage Wine

I’m trying to use Wine to play a PC game called Dawn Of War: Soulstorm and it won’t run. The game seems to have installed correctly. I have an entry for the game in the .wine folder and I have an icon on the desktop which points to the same folder. I dual-boot with Windows 8.1 (not my choice) and Ubuntu 14.04. I’ve installed the game onto my Windows partition and the game works fine there. I get a (very basic) run-dll-32 error message sometimes. When I select the box for a more thorough error message/log file, it doesn’t give me anything. I’m using Ubuntu 14.04 (64-bit), kernel 3.13.0-74-generic, KDE 4.13.3 and Wine 1.6.2. Noob_Computa_Ninja

The problem is almost certainly down to your version of Wine. The software undergoes fairly rapid development, with later versions supporting more Windows software. The Wine database at https://appdb.winehq.org says that this game works fine with version 1.9.1; the latest development release is 1.9.2, while even the stable release is at 1.8, which is much newer than the version you have. Distros like Ubuntu tend to leave software at the version tested when that release of the distro was created, only updating for bugs and security issues. To get feature updates, you normally have to wait for the next version of the distro. The solution to your problem is almost certainly to install a newer version of Wine, which you do by adding Wine’s PPA to your list of software repositories (repos). Open a terminal and run:
$ sudo add-apt-repository ppa:ubuntu-wine/ppa
For the latest development builds, use the repo ppa:wine/wine-builds instead. Then you should refresh the repos’ package lists and install the

A quick reference to...

Script

Shell redirection is useful when you want to capture the output of a single command, and you can capture the error channel as well with a little fiddling. But what if you want to save the output of several commands, or the commands you run are interactive, so you still need to see what they ask in order to give the correct responses? The solution is a program called script. Run this from a terminal, with no arguments. Nothing will appear to happen apart from a message that script has started. Now run some other commands and everything appears to be just the same as normal. But what has happened is that script has opened a subshell and is both displaying and capturing everything from whatever you type, and that’s not only the output from the commands but your input too, even the shell prompts. Some programs behave differently when their output is not a TTY (a terminal); eg, if you redirect a command’s output to a file, it no longer colourises the output. Because script is running a shell, the output is exactly as you see in the shell.


When finished, press Ctrl+D to exit the subshell and you’ll be back in your original shell, with a file called typescript that contains a transcript of your script session. If you specify a file name after the command, script will use that instead of typescript. You can specify a command name with -c; script will run that command and exit. There are other options you can use, such as --timing, which outputs timing information to a separate file. This is useful with scriptreplay, which plays back the typescript information, optionally with the same timing, without running the commands again.



latest Wine. You can do this in a graphical package manager such as Synaptic, but as you already have a terminal open, run these commands:
sudo apt-get update
sudo apt-get install wine1.9
You could just boot into Windows when you want to play the game, but getting it working with Wine would not only allow you to play it without rebooting, it would give a useful comparison of relative performance in the two environments on the same hardware.

4 Setting up an SSD

I got a Samsung 250GB SSD as a gift, thinking that I would stick it in my desktop and play, but I have spent a couple of days reading about people burning their new SSDs out. I’m looking for an easy solution before ruining a good bit of kit. towy71

There’s a lot of confusion around using SSDs, but they are pretty simple to work with. While they do use flash memory, they shouldn’t be thought of in the same way as cheap flash devices, such as USB memory sticks. With those devices, excessive writes can cause failures, although even they are much better nowadays. SSDs have sophisticated controllers that handle wear levelling, meaning a modern SSD can outlive a spinning rust drive in normal usage. Some things need to be done correctly when setting up an SSD, but modern disk partitioning software takes care of that automatically. If you use an old version of fdisk, you may find that the partitions aren’t correctly aligned, which can cause severe performance issues, but it would have to be a very old version. Similar problems arose when drives

Star Question +

Winner!

larger than 2TB arrived that used 4K data blocks. Use gdisk, parted or GParted and the alignment will be taken care of. There are some filesystems written specifically for SSDs, like F2FS (the Flash-Friendly File System), but ext4 is also flash-aware these days. One thing you need to consider is the use of the TRIM feature that SSDs provide. This cleans up data blocks that contain deleted data; a sort of garbage collection for storage. Over time, blocks of deleted data can reduce the drive’s performance, and TRIM fixes that. Some filesystems, like ext4, allow this to be done automatically by mounting with the discard option (just add it to /etc/fstab), but this isn’t always the best approach, as the TRIM process itself can slow the drive down while it’s running. The alternative is to run the fstrim command; it works on mounted

filesystems and needs the mount point of the drive as its only argument, although you can also add -v for more detailed output. You can have this run every week by creating a file in /etc/cron.weekly containing:
#!/bin/sh
/sbin/fstrim /
Make the file executable. If you have more than one filesystem on the drive, add an extra fstrim command for each.
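The partition-alignment point made earlier boils down to simple arithmetic: modern partitioning tools start partitions on 1MiB boundaries, ie multiples of 2,048 512-byte sectors. A toy check, for illustration only (not a real disk tool):

```python
# Toy illustration of the partition-alignment rule: modern partitioning
# tools start partitions on 1MiB boundaries, which is 2048 sectors of
# 512 bytes. Misaligned partitions hurt SSD performance.
def is_aligned(start_sector, boundary_sectors=2048):
    return start_sector % boundary_sectors == 0

# Sector 2048 is the usual first-partition start chosen by parted/gdisk;
# sector 63 is the old MS-DOS convention used by very old fdisk versions.
```

You can read a partition’s start sector from your partitioning tool and apply the same arithmetic by hand.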

5 Persistent router

I run a Raspberry Pi at a remote site in another country. Unfortunately, the site experiences frequent power outages. When power is restored my modem/router often fails to re-establish its broadband connection, presumably because lots of other routers are trying to do the

This month’s winner is Paul Tyrrell. Get in touch with us to claim your glittering prize!

Pine A64

I recently purchased a Pine A64+ on Kickstarter. I’m new to single-board computers and I’m looking for advice on software to use. What do you reckon is the best software that runs like Windows 10, and do you know what photo managers would be compatible with the Pine A64? I came across photo managers in LXF206 and I like your first option, Digikam. Would that be able to run on the Pine A64? Paul Tyrrell

While the Pine boards look like an interesting alternative to the Raspberry Pi, they are very new. At the time of writing, they weren’t on general sale yet and were only available from Kickstarter. As such, they are still in something of a DIY phase and


Check the Wine application database to see which version you need for your program. The application may need a newer version than your particular distro provides.

there’s currently no desktop Linux distro available for them – you need to build one yourself. All that’s available now is an Android Lollipop build, which you can download from http://wiki.pine64.org. This is a ZIP file containing an IMG file, which needs to be written to a MicroSD card using dd on Linux. This gives you the familiar (well, it is if you have an Android phone or tablet) Android interface. The developers are working on a release of Ubuntu, which gives a more traditional desktop environment. It’s not available at the time of writing, but may be by the time you read this magazine. Once Ubuntu is available, most Linux software should be possible to build and package for it, at least in theory. The limiting factors here are the number of developers willing to put the time into packaging software for this board and, more significantly, the system’s resources.


Digikam uses the KDE desktop environment, although it’s possible to run it on Ubuntu. However, KDE is fairly demanding in terms of its memory requirements, and handling many multimegapixel images also requires a lot of memory, so I think this may be somewhat ambitious even for the 2GB version of the Pine64. That’s not to say it’s not possible, but it would struggle to handle the load. Boards like the Pine and the Pi are great for more lightweight applications, and the Pine’s hardware looks like it could be an excellent media centre platform, but it will take time for it to become established and attract the number of users and developers to put together such an environment. Meanwhile, it will run Android and it will make a good platform for learning about the lower level aspects of computers.


Answers same thing at the same time. The Pi remains off-line, often for months, until I can get there and reset the router. A solution would be to find a router that periodically retries to log-on to the exchange until it succeeds. Do you any which do this? David Manley If the router failed to establish a connection it would retry. A more likely scenario is that it thinks it has established a connection but that connection isn’t working. Some modem/routers have an option to regularly reboot. If there’s a time of day you know it is unlikely to be needed, you could have your router reboot every day. An alternative is to find a modem/router that runs the DD-WRT router OS. This is available for a wide range of routers, but not for modems unless the manufacturer implements it. Alternatively, look for a modem/router that allows SSH access as well as the usual web interface. This is often possible as many modem routers run embedded Linux. Then you can run a Cron script that checks whether you are online and if it fails it reboots the router. This is one I used when I had a similar issue where my connection would just stop working until I rebooted the modem: #!/bin/sh HOSTS="mail.isp.com www.google.com www.yahoo.com” reboot() { ssh user@router /sbin/reboot } for i in {1..5}; do for HOST in ${HOSTS}; do

Help us to help you

Running DD-WRT (or OpenWRT) on a router gives you far more control than most of the default firmware.

    ping -c 1 -q ${HOST} >/dev/null 2>&1 && exit
  done
  echo "All hosts unreachable at $(date)"
  [[ $i -lt 5 ]] && sleep 3m
done
echo "All hosts failed"
reboot
This takes a list of hosts that ought to be contactable (make the first one your ISP). It then tries to contact them in turn and exits as soon as one succeeds. If none is available, it waits and tries again. If it can’t connect to any hosts after fifteen minutes, it assumes the worst and reboots the router. You need to set up passwordless SSH to your router, which involves copying your public SSH key into the router’s web interface, or use a hardware switch, like the USB Net Power 8800. This looks like a short mains extension

We receive several questions each month that we are unable to answer, because they give insufficient detail about the problem. In order to give the best answers to your questions, we need to know as much as possible. If you get an error message, please tell us the exact message and precisely what you did to invoke it. If you have a hardware problem, let us know about the hardware. If Linux is already running, use the Hardinfo program (https://github.com/lpereira/hardinfo) that gives a full report on your hardware and system as an HTML file you can send us. Alternatively, the output from lshw is just as useful (http://ezix.org/project/wiki/HardwareLiSter). One or both of these should be in your distro’s repositories. If you are unwilling, or unable, to install these, run the following commands in a root terminal and attach the system.txt file to your email. This will still be a great help in diagnosing your problem.
uname -a > system.txt
lspci >> system.txt
lspci -vv >> system.txt

lead with a USB port that can be used to control it. You plug your modem into the socket, the lead into the power and connect your Pi to the USB port. Then grab the Python script (http://bit.ly/USBNetPower8800) to control it. In the above script, replace the ssh command with:
/usr/local/bin/usbnetpower8800.py reboot 45
The 45 is the delay between the power off and back on again. This unit defaults to off when the power comes on, so you need to send an on command from a script in /etc/rc.local. If you set a short delay in here, so you’re not trying to connect at the same time as everyone else, you may avoid the problem, eg:
#!/bin/sh
sleep 5m
/usr/local/bin/usbnetpower8800.py on
which should fix the issue. LXF

Frequently asked questions…

Installing software

I’m new to Linux, where do I go to find software for it?
Linux works very differently to Windows in many ways, but none more so than software management. Linux distros provide a centralised software manager.

How does that work and where do I find it?
It uses a system of repositories containing all the software, and a management program to install, update and remove programs. Most distros have something called a “Software Centre” or similar in their menus, and this is where you do everything.

So the distros keep their own

copies of everything, isn’t that rather inefficient?
Each distro repackages the software to make sure it works with the rest of their packages, avoiding software conflicts. They also digitally sign every package, so even if one is tampered with to add malware, it will not be installed. Keeping everything together also means all dependencies can be resolved.

What are dependencies?
When a program needs some other software, usually a library, to run; that is called a dependency. Software managers take care of this automatically, so when you install a program, anything else it needs is taken care of.

What about updates? Do I still need to check web sites for these?
No, the distro maintainers do this for you, so there is no need for software to “phone home” either. Part of your software manager will periodically download the latest list of all packages and notify you of any updates.

I checked the web site anyway and there is a newer version, why am I not being offered it?
Remember the part about making sure all software works with other packages? This means that some distros do not make all updates available until the next distro release. However, any updates that deal with security issues or serious bugs are made available as soon as possible.


What if I want the latest version?
Some software projects provide packages themselves, so you can have a later version. Package managers are not limited to the repositories the distro provides; it is possible to add extra ones, and that is usually how projects provide their own packages. Look at the project’s web site for how to do this. It depends on your distro, but those based on Debian/Ubuntu let you add extra repositories as PPAs.

So I’m back at checking websites?
No, once you’ve added an outside repository, your software manager will treat it the same as the others and take care of updates and dependencies for you.



On the disc Distros, apps, games, books, miscellany and more…

The best of the internet, crammed into a phantom-zone-like 4GB DVD.

Distros

The other day I was visiting someone’s house and his son was watching cartoon videos on a media player box. When he finished and went back to the home screen, I saw it was Kodi running on OpenELEC (by coincidence, this was just a couple of days after I’d put OpenELEC on this month’s DVD). Now there’s nothing remarkable about that, except that this person’s computer literacy barely extends to a calculator. For years, Linux has been the province of those with a certain amount of technical expertise or curiosity. Yet here we have someone with over-the-counter hardware running Linux. Yes, Linux has technically been in the hands of millions for some years in the form of Android devices, but this was GNU/Linux, what we normally think of when we say Linux, not some different OS running on a Linux kernel. Linux has been on embedded devices, like routers, for a long time, but now we’re seeing devices running something close to desktop Linux, complete with X, in the hands of people who’ve never heard of Linux. This may not be the year of desktop Linux, but it appears that Linus Torvalds’ plan for world domination is happening one device at a time.

Media centre

OpenELEC 6.0.1

You can install Kodi (the media player formerly known as XBMC) on just about any distribution (distro), but if you want a dedicated media box you don’t really want everything that a general-purpose distro includes. This is where OpenELEC comes in – it’s a tiny distro that’s just big enough for Linux to run and display Kodi, which is exactly what you want on a media system. We have two versions here, one for 64-bit PCs and one for the Raspberry Pi. Both are in the form of gzipped image files. The Pi one is copied to a card with dd in the usual way. The PC version is copied to a USB stick in the same way; then boot your computer from that stick.

The Daddy

64-bit

Debian 8.3.0 Remix

Debian may have long gaps between major releases, but it still releases updated point versions from time to time. This is Debian 8.3.0, which isn’t that different to the Debian 8 release, but has enough updates to make it worth including. Debian produces several live DVDs, each one featuring a different desktop. We looked at the alternatives, trying to decide which was best, then decided we’d have one of our Desktop Remix live

Important

NOTICE! Defective discs

For basic help on running the disc or in the unlikely event of your Linux Format coverdisc being in any way defective, please visit our support site at: www.linuxformat.com/dvdsupport Unfortunately, we are unable to offer advice on using the applications, your hardware or the operating system itself.


64-bit and Pi


distros with Cinnamon, GNOME, KDE, LXDE and Mate. The DVD boots to Cinnamon by default, so to try one of the other desktops log out, select the desktop you want from the menu at the top right (pictured, below) then log back in with the user name user and password live. This remix can be installed to your hard drive in the usual way, open the installer from the System Tools section of the menu.


New to Linux? Start here

What is Linux? How do I install it? Is there an equivalent of MS Office? What’s this command line all about? Are you reading this on a tablet? How do I install software?

Open Index.html on the disc to find out

Minimalist

Arch Linux for Pi

32-bit

Arch Linux is a distro that has grown greatly in popularity over the last few years. There’s no doubting which piece of hardware has also significantly grown over a similar period and that’s the Raspberry Pi, so it makes sense to bring the two together. While Raspbian is the most common choice, the ARM port of Arch Linux is suited to the lightweight Pi and forms the basis of one of this month’s features. This version of Arch is suitable for the Model A, A+, B or B+ Pis. It will not run on the Pi 2 because that needs a different kernel setup.

Repair & rescue

And more! System tools

Essentials

Checkinstall Install tarballs with your package manager.
Coreutils The basic utilities that should exist on every operating system.
HardInfo A system benchmarking tool.
Kernel Source code for the latest stable kernel release, should you need it.
Memtest86+ Check for faulty memory.
Plop A simple manager for booting OSes from CD, DVD and USB.
RawWrite Create boot floppy disks under MS-DOS in Windows.
Smart Boot Manager An OS-agnostic manager with an easy-to-use interface.

Rescatux 0.40b5

32-bit & 64-bit

Things go wrong; that's a fact of life. So it's good to keep a rescue disc handy. This month's Roundup compares five such animals and the winner is… Well, we won't spoil it for those of you who haven't read the review yet, but we have Rescatux on the LXFDVD this month. Most rescue discs concentrate on including all the diagnostic and repair tools you could need and usually boot to either a command line or a very minimal desktop that is just enough to run a web browser. This is great if you are comfortable with said command line tools, but if you have managed to hose your Grub setup and just want a way to get things booting again as quickly as possible, reading documentation on command line options would not be your first choice. Rescatux boots to a desktop containing a window with a number of buttons, each of which fixes a common problem, so you can usually fix a broken bootloader or a lost Windows password in a matter of seconds.

Download your DVD from www.linuxformat.com

WvDial Connect with a dial-up modem.

Reading matter

Bookshelf

Advanced Bash-Scripting Guide Go further with shell scripting.
Bash Guide for Beginners Get to grips with Bash scripting.
Bourne Shell Scripting Guide Get started with shell scripting.
The Cathedral and the Bazaar Eric S Raymond's classic text explaining the advantages of open development.
The Debian Administrator's Handbook An essential guide for sysadmins.
Introduction to Linux A handy guide full of pointers for new Linux users.
Linux Dictionary The A-Z of everything to do with Linux.
Linux Kernel in a Nutshell An introduction to the kernel written by master hacker Greg Kroah-Hartman.
The Linux System Administrator's Guide Take control of your system.
Tools Summary A complete overview of GNU tools.




Get into Linux today! Future Publishing, Quay House, The Ambury, Bath, BA1 1UA Tel 01225 442244 Email linuxformat@futurenet.com

EDITORIAL

Editor Neil Mohr neil.mohr@futurenet.com Technical editor Jonni Bidwell jonni.bidwell@futurenet.com Operations editor Chris Thornett chris.thornett@futurenet.com Art editor Efrain Hernandez-Mendoza efrain.hernandez-mendoza@futurenet.com Editorial contributors Neil Bothwick, Jolyon Brown, Matthew Hanson, Paul Hudson, Dave James, Kevin Lee, Nick Peers, Les Pounder, Afnan Rehman, Tom Senior, Mayank Sharma, Shashank Sharma, Alexander Tolstoy, Mihalis Tsoukalos Cover illustration www.magictorch.com Cartoons Shane Collinge

ADVERTISING

Advertising manager Michael Pyatt michael.pyatt@futurenet.com Advertising director Richard Hemmings richard.hemmings@futurenet.com Commercial sales director Clare Dove clare.dove@futurenet.com

MARKETING

Marketing manager Richard Stephens richard.stephens@futurenet.com

PRODUCTION AND DISTRIBUTION

Production controller Marie Quilter Production manager Mark Constance Distributed by Seymour Distribution Ltd, 2 East Poultry Avenue, London EC1A 9PT Tel 020 7429 4000 Overseas distribution by Seymour International

LXF 210 will be on sale Tuesday 12 April 2016

Build your own… Desktop!

We customise because we can: create your ideal desktop environment, from KDE to Gnome.

Build a Linux PC

Now you have the perfect desktop, get the perfect PC – from the components to the how-to, it's all here.

Collaborative editing

Join forces with friends and colleagues to create, update and edit the most powerful documents ever!

The BBC Micro:bit is back!

We take things further with the exciting new education tool from the BBC: create cool projects and code away.

Contents of future issues subject to change – we might be trapped under a pile of desktop bricks.


LICENSING

Senior Licensing & Syndication Manager Matt Ellis matt.ellis@futurenet.com Tel + 44 (0)1225 442244

CIRCULATION

Trade marketing manager Juliette Winyard Tel 07551 150 984

SUBSCRIPTIONS & BACK ISSUES

UK reader order line & enquiries 0844 848 2852 Overseas reader order line & enquiries +44 (0)1604 251045 Online enquiries www.myfavouritemagazines.co.uk Email linuxformat@myfavouritemagazines.co.uk

THE MANAGEMENT

Managing director, Magazines Joe McEvoy Editorial director Paul Newman Group art director Graham Dalzell Editor-in-chief, Technology Graham Barlow LINUX is a trademark of Linus Torvalds, GNU/Linux is abbreviated to Linux throughout for brevity. All other trademarks are the property of their respective owners. Where applicable code printed in this magazine is licensed under the GNU GPL v2 or later. See www.gnu.org/copyleft/gpl.html. Copyright © 2016 Future Publishing Ltd. No part of this publication may be reproduced without written permission from our publisher. We assume all letters sent – by email, fax or post – are for publication unless otherwise stated, and reserve the right to edit contributions. All contributions to Linux Format are submitted and accepted on the basis of non-exclusive worldwide licence to publish or license others to do so unless otherwise agreed in advance in writing. Linux Format recognises all copyrights in this issue. Where possible, we have acknowledged the copyright holder. Contact us if we haven’t credited your copyright and we will always correct any oversight. We cannot be held responsible for mistakes or misprints. All DVD demos and reader submissions are supplied to us on the assumption they can be incorporated into a future covermounted DVD, unless stated to the contrary. Disclaimer All tips in this magazine are used at your own risk. We accept no liability for any loss of data or damage to your computer, peripherals or software through the use of any tips or advice. Printed in the UK by William Gibbons on behalf of Future.

Future is an award-winning international media group and leading digital business. We reach more than 57 million international consumers a month and create world-class content and advertising solutions for passionate consumers online, on tablet & smartphone and in print. Future plc is a public company quoted on the London Stock Exchange (symbol: FUTR). www.futureplc.com

Chief executive officer Zillah Byng-Thorne Non-executive chairman Peter Allen Chief financial officer Penny Ladkin-Brand Managing director, Magazines Joe McEvoy

We are committed to only using magazine paper which is derived from well-managed, certified forestry and chlorine-free manufacture. Future Publishing and its paper suppliers have been independently certified in accordance with the rules of the FSC (Forest Stewardship Council).

Tel +44 (0)1225 442 244


