
Cheat sheet: DARQ


DARQ

How can the latest technologies contribute to your business? Steve Cassidy unpacks the acronym to find out


I’ve been Googling for hours and I can’t find a single product that says it’s DARQ-compatible. Is this a real thing?

Part of the problem is that DARQ shares its name with a fairly recent and quite popular video game, which gets in the way of any attempt at research. Moreover, the DARQ that we’re interested in isn’t itself a product or a company: it’s shorthand for a collection of four technologies, namely distributed ledger, artificial intelligence, extended reality and quantum computing (see the boxout “Don’t be left in the dark” below for a more detailed explanation of these terms). It’s not a coherent toolkit, or a necessary foundation of your projects. But if you can’t at least say something about DARQ then this might be an indication that your line-of-business apps are somewhat behind the times.

So is this like the old Codd & Date stuff for relational databases?

I see you’re not quite as green as you are cabbage-looking. For those who aren’t aware, Codd and Date were pioneers of the relational database in the 1970s; they proposed simple tests and questions that could help determine how “relational” your database was. DARQ can be similarly used as a yardstick or sanity check for your project planning, but it’s really a much broader, more high-level approach. With four immense concepts compressed into a single four-letter acronym, it’s not really amenable to simple box-ticking exercises.

Is DARQ of any practical use to us at all, then?

For sure: you simply need to compare your plans and systems to the broad spread of capabilities available across DARQ’s collected platforms. This should help you understand what you’re missing out on, and where your development and relationship time might be best spent. Your e-commerce system doubtless has a database in it of some type, but could it benefit from a blockchain-type model? Can you scale up a thousand times to accommodate a sudden flood of traffic just after an advert goes live, and then scale down again? What tools should you be using to achieve those desirable levels of stability and scalability? And so forth.

So we can think of DARQ as a problem-solving mnemonic?

It’s a nice thought, but that final letter complicates things. You might find a solution in blockchain technologies that you can start developing today, or discover a pre-rolled AI suite that’s close enough to your problem to be useful. You might even discover some productive trick with QR codes or AR goggles. But quantum computing is more aspirational and forward-looking: right now its applications are limited, and few businesses will see much return from rushing to invest in the emergent technology.

So ticking all the DARQ boxes could actually be a red flag?

You’re right – just because an IT project control thoughtscape uses scoring to concretise your own assessments, that doesn’t mean the scores have any special value, or translate in any way to business benefits. “Perfect” technology scores can easily go hand-in-hand with a “nul points” rating from the people who have to use a system, or the shareholders.

Perhaps the value of DARQ is to remind us to be wary of hype.

You might be onto something there. Thinking about DARQ certainly encourages more actual, well, thinking than relying on narrow web-page service stats or session counts. If DARQ hasn’t been on your radar, your team might welcome the new perspective. A good starting point is Accenture’s report on DARQ, which you can read at pcpro.link/335darq – in this case it’s much easier than relying on Google.

“If you can’t at least say something about DARQ then this might be an indication that your apps are behind the times”

Don’t be left in the dark

Distributed ledger technology basically means blockchain – probably not the specific one that underpins Bitcoin, but a custom chain that works in the same way. The idea is that truly collaborative manufacturing or project control works best when you maintain a secure, public record of who did what, and when.

Artificial intelligence might be about trillions of simple little machines hunting for patterns in all that data you can’t throw away. Or it could be an expert system lurking in the background, taking in the big picture and watching for new opportunities, emerging trends or anything else. Each approach has its place.

Extended reality is a bit of a cheat, acronym-wise (I guess DAEQ isn’t so snappy), but never mind. It’s all about 3D goggles and similar technologies. In order to work with the amounts of data you’re likely to generate, you’ll need some slick visualisation tools, plus red-hot warehousing so your cyberworkers can pick and pack faster.

Quantum computing is where the acronym shifts purpose. The other elements of DARQ can be deployed tomorrow, but with quantum we’re more at the stage of just keeping an eye on things, so you can determine when and whether specific workloads can be shunted onto a cloud-based service to gain that mythical million-fold speedup.

Real world computing

Expert advice from our panel of professionals

JON HONEYBALL “The words I exclaimed next are not appropriate for polite company. My entire network segment had been laid bare”

If you want to know exactly what’s happening on your network, Jon holds the answer: a small box that’s stuffed full of analytical magic

There are few people I will allow onto my network. And even fewer who can plug things into it. It’s not that I am a little paranoid: I am wholly paranoid.

In the lab, we have a core network that’s a mix of trunk-mounted Ethernet ports and Wi-Fi access points. The Ethernet ports go to a patch panel, and then onto a big high-speed switch. Staff know that plugging something randomly into the sockets will result in alarm bells ringing, and I might have to be Rather Polite in their general direction. The Wi-Fi password is not shared.

While this might seem draconian, I should point out there are three other air-gapped networks with their own access points. One is used solely for testing of products; you never know what device might come through the front door preloaded with malware. And there’s a guest network, with its own ADSL line, for visitors and any wholly untrusted devices. And everyone needs a spare, just in case.

Nevertheless, I run the core network as one large LAN that spans five separate physical buildings and locations. I could segregate it into separate sub-networks and bridge between them, but there are reasons for my madness.

There is, of course, a small group of people who have the necessary access in case I shuffle off this mortal coil. One is Geoff Campbell, who I’ve known for three decades. He’s now a technology consultant at Networkology (networkology.com), which specialises in serious networking stuff. Geoff was visiting a few weeks ago, and rather slyly dropped a small box onto my desk. He muttered, “I think you might like to play with that,” and I could tell from the wry grin that this was not just a boring old £20 Ethernet switch.

Before I dive into what it is, let’s go back a decade or two. In the past, we didn’t care too much about network traffic (the “we” being those people who were, and possibly still are, in charge of such things). That’s because we didn’t have much bandwidth on the LAN anyway. External connectivity was desperately slow. We made very sure that only those services that needed external connectivity had access to the outside world, because some of these WAN connection routes could come with excruciating monthly costs if services or users went rogue. But we could use the traffic LEDs on our switches to have a reasonable idea of which devices were chattering away. This was helped, of course, by the simplicity of networking back then.

It’s all very different now. We have so many services on our network, it’s largely impossible to do an eyeball guess of traffic rates. This is especially true of the wide range of autodiscovery chatter that is flooding around the LAN.

Of course, our toolbox hasn’t stayed locked in the realms of “LED flicker”, even though it’s all we tend to get on cheap home/SOHO switches and routers. Managed and semi-managed switches allow us some visibility on the traffic, even if it’s just a basic count of packets per second.

To get a good idea of what’s happening, you have to dig deep. The standard approach is to grab a wodge of the network traffic, usually via a mirrored port on a switch, and to suck the data into a tool such as Wireshark. The problem is that you then have to wade through a flood of traffic looking for patterns, and that requires careful filtering.
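
If you’ve never experienced the firehose, here’s a taste of it in Python using the third-party scapy library rather than Wireshark itself – a sketch, not a recipe, and the capture filename is made up for illustration:

```python
# Tally the top talkers in a capture taken from a mirrored port.
# Assumes scapy is installed (pip install scapy) and that
# "mirror-port-capture.pcap" is a hypothetical capture file.
from collections import Counter

from scapy.all import IP, rdpcap

packets = rdpcap("mirror-port-capture.pcap")

# Count packets per source IP and print the ten chattiest hosts.
talkers = Counter(pkt[IP].src for pkt in packets if IP in pkt)
for src, count in talkers.most_common(10):
    print(f"{src:15}  {count} packets")
```

Even this crude top-ten list means churning through every packet in the capture; anything more useful means layering filter upon filter.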

More advanced switches, and network fabric, allow a better insight. I can see a lot of useful information on the various Ubiquiti networks we have, but it’s providing a sketch rather than giving me the whole picture.

Which brings me to Geoff’s small box. Called the Allegro 500, from Allegro Packets in Leipzig (allegro-packets.com), it sits in the lower mid-range of the company’s family of network multimeters. At the bottom end of the range is the 200, which is good for 2Gbits/sec throughput. At the top end is the 5500, which can handle 150Gbits/sec. The 500 has 4GB or 8GB of internal database memory, 0.5TB or 1TB of ring buffer, and is good for 4Gbits/sec. The monster 5500 unit has up to 4TB of database memory and 576TB of ring buffer, which you can increase by another 704TB should you need.

Jon is the MD of an IT consultancy that specialises in testing and deploying kit @jonhoneyball

“The problem is you have to wade through a flood of traffic trying to look for patterns”

BELOW The tiny little box can unlock all your network secrets

Jon Honeyball Opinion on Windows, Apple and everything in between – p108

Paul Ockenden Unique insight into mobile and wireless tech – p111

Lee Grant Tales from the front line of computer repair – p114

Dr Rois Ni Thuama Our guest columnist on a landmark legal decision – p116

Davey Winder Keeping small businesses safe since 1997 – p118

Steve Cassidy The wider vision on cloud and infrastructure – p122

But this is the kicker: there is essentially no functional difference between the biggest and the smallest unit in the range. It’s just throughput capacity and storage.

So, what is the Allegro 500? The easiest way to understand it is to think of it as a simple transparent switch. Insert it between your LAN and your boundary router. The two Ethernet ports used have no IP addresses, because they are simply passthrough devices. There’s a monitoring Ethernet port, which you connect to your network, and it comes up on an IP address so you can connect to the management interface.

Geoff and I connected it up to a big segment of my LAN, powered it up and decided to go to the pub. We could have set up some mirror ports on the big switch, but that would have delayed the beer, and thus was clearly suboptimal.

Room with a view

After a couple of pints of the finest, we were back in front of a laptop and I logged into the Allegro web interface. The words I exclaimed next are not appropriate for polite company. But rest assured it was along the lines of “golly” and “blimey”.

My entire network segment had been laid bare. Not just by the usual source/destination IP address traffic analysis. No, that would have been too simplistic. The Allegro was providing real-time visibility of the entire seven-layer OSI stack. I could look at straightforward packet traffic, as you would expect. But then dig into Layer 2, the Ethernet layer. Everything was there – by MAC address, QoS, packet size, ARP, VLAN, STP, MPLS, LLDP, PPPoE and so forth. All by device. All with data capture, analysis and real-time graphing.

Let’s move on to Layer 3, the IP layer. Want to see all the DHCP, DNS, NetBIOS, ICMP, multicast and IP traffic, sliced, diced and graphed in real-time? It’s a click of the mouse. Or we can take a look at Layer 4, the transport layer. Connections, TCP statistics, retransmissions, flags, response times, TCP window size and so forth. How about looking at Layer 7, the application layer? Want to see all the SSL statistics, HTTP, SMB, SIP, NTP, PTP?

Even better, every analysis at every layer and protocol has its own PCAP download button, allowing me to create a .PCAP file of just that data for feeding into a third-party tool such as Wireshark.

Here’s an example of just how far down this rabbit hole you can go. I went to the SIP analysis tools in the Layer 7 section. I initiated a VoIP call between two phone extensions, and hit the PCAP button for audio 1 and 2. After a minute or so, I stopped the session and the Allegro dumped an MP3 file of the VoIP call into my downloads folder. I hit Play and was listening to the recorded phone call. And I could have PCAPed a file of all the SIP and RTP/RTCP data for packet analysis.

To look at SMB traffic, I dropped into the SMB statistics page and found a number of devices on the network that were still trying to use SMB v1. Further digging showed them to be the Axis security cameras, which were trying to push their video feed to an internal IP address share on a NAS. Which would be fine, except that this server hasn’t been there for some years, and I now use the excellent Synology Surveillance Station service. Evidently I had forgotten to dig into the cameras to turn this feature off. Of course, in doing this, I found that there was new firmware from Axis, so I spent a thrilling half hour updating that, too.

Think about it for a minute: how would I have found that this was happening if there wasn’t a tool that would sift through the flood of IP traffic and work out that there were a bunch of clients that were trying to make an SMB v1 connection to the IP address of a server that is no longer doing that task?
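
For the curious, the SMBv1 signature itself is easy enough to spot by hand: over TCP port 445 the SMB header follows a four-byte NetBIOS session header, and legacy SMBv1 announces itself with the magic bytes \xffSMB (SMB2 and later use \xfeSMB). Here’s a rough Python/scapy sketch of a DIY detector – emphatically not how the Allegro does it, and it assumes the machine running it can actually see the traffic, via a mirror port again:

```python
# Flag hosts still speaking SMBv1 on the wire. Requires root privileges
# and scapy (pip install scapy); only works where this machine can see
# the traffic, e.g. on a mirrored switch port.
from scapy.all import IP, Raw, sniff

def flag_smb1(pkt):
    payload = bytes(pkt[Raw].load)
    # Skip the 4-byte NetBIOS session header; SMBv1 starts with \xffSMB.
    if payload[4:8] == b"\xffSMB":
        print(f"SMBv1 from {pkt[IP].src} to {pkt[IP].dst}")

sniff(filter="tcp port 445",
      lfilter=lambda p: IP in p and Raw in p,
      prn=flag_smb1, store=False)
```

It works, after a fashion, but that’s one protocol on one port; the Allegro does the equivalent across the whole stack at once, with the history and graphing to go with it.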

A tool like the Allegro isn’t something you just drop into for five minutes, have a look around and then wander off. I’ve been rooting around in this tool for a week now, and I’m still finding a wealth of new things I hadn’t spotted. It really is like going down Alice’s rabbit hole. And I have never seen another tool that gives me this level of data analysis coupled with this accessibility and utility. Now for the bad news: the Allegro 500 costs €6,000 for the higher-capacity unit and €4,500 for the smaller one, which is not particularly pocket change. The Allegro 200 costs less, but I understand the big units run to “oh my gosh” price tags. But look at the context: the sort of tool that you want in a data centre, handling traffic analysis at 150Gbits/sec, isn’t going to be trivial. This sort of workload isn’t going to run as an app on your Samsung smartphone.

ABOVE The Allegro is inserted between your LAN and your boundary router

“I’ve been using this tool for a week now, and I’m still finding a wealth of new things”

BELOW Allegro produces a wealth of accessible data


There’s an ongoing support cost too, as you would expect from a fully supported device. It’s €900 per year for the larger-capacity Allegro 500, and €675 for the smaller Allegro 500, which is par for the course for a properly supported product. And this one certainly is: Geoff and I found a bug, triggered by a misconfiguration on our part. Geoff reported the bug at 9.15am. We had an initial response at 10.02am. We then had a second response confirming that it had been found and fixed at 10.39am. The fix is now in the test build of the firmware awaiting soak testing and then broader release. That is what professional support means.

Is the Allegro 500 for everyone? Absolutely not. However, if you want to know what is going on within your network, or that of a client if you’re a network consultant, then it gives you a depth of insight that is eye-opening. And it does so within minutes of being plugged into the network, usually on a set of mirrored ports from a core switch so the Allegro isn’t inline. Viewed in that light, it could pay for itself very quickly.

Suffice to say, I am still learning what it can do, and am tempted to buy one, just to satisfy the OCD networking nerd in me. If you want to know more, then email geoff.campbell@networkology.com and tell him I sent you.

New games computer

I decided it was time to buy a new PC. Well, to be more accurate, to buy a games PC, because I haven’t had anything that was games-oriented for many years. Any recent interest in gaming has been satisfied by my iPhone, and I already have the games console market covered: purely for research, you understand, I bought a PlayStation 5 and an Xbox Series S, which was then supplanted by an Xbox Series X.

I had looked at buying a games-oriented machine a couple of years ago, having been tempted by the latest 32-core and 64-core AMD processors, plus the then-new Nvidia GeForce RTX 3090 graphics card. However, the lack of availability of the 3090 knocked that idea on the head, and my focus shifted to the consoles.

But it was time to look again, so I wandered over to the Chillblast website (chillblast.com). Chillblast has a good reputation with PC Pro readers, winning numerous awards over the years. You can build up a machine to your specification, but my attention span didn’t run to making all those choices. I decided to go for its top-of-the-range gaming computer that was available for next-day delivery.

The CPU is the Intel Core i9-12900K; I really wanted a modern CPU, and 11th-generation Intel wasn’t going to cut it. The motherboard is the Asus TUF Gaming Z690-Plus WiFi D4, which has a good range of capabilities, including decent wired and wireless networking. The PC ships with 32GB of DDR4 memory rather than the latest DDR5, but that’s fine for the time being. A 1TB boot drive and 2TB storage, both SSDs, would allow for rapid loading even of large games. And a 12GB Nvidia GeForce RTX 3080 Ti graphics card is there to provide 3D acceleration. Water cooling for the CPU seems to be all the rage, and there are more fans inside the box than you would get at an Adele concert. A price tag of £3,580 including VAT was somewhat robust, but I hope to get years of service from it.

For a monitor, I went with the 32in Samsung Odyssey G7, a large curved desktop display with good credentials, including high refresh rate support. A rather silly colour-backlit keyboard and mouse completed the package.

Setup was quite straightforward at first, but I then ran into some roadblocks. The 3080 Ti wouldn’t work at all, and I had to fall back on the CPU’s integrated Intel graphics to get any image at all. There were lights on the GPU, so it was definitely getting power. I phoned Chillblast and almost immediately got through to a knowledgeable support engineer. He suggested pulling the card out and reseating it, just in case it had become dislodged in transit. I tried this, but it didn’t help. So I phoned back to the support line and got a different, but equally helpful, support engineer. He quickly decided that the GPU card might be faulty, and dispatched a replacement by courier for overnight swap-out.

ABOVE The Allegro range covers most networks’ needs

“Performance is really quite mental. Flying around in Flight Simulator is huge fun”

BELOW Chillblast PCs use industry-standard components

When the replacement card arrived the following day, I did the swap-out, with the courier taking away the suspect card. Booting up the machine gave me a working GPU, so I was a very happy chappie.

Performance is really quite mental. Flying around in Microsoft Flight Simulator is huge fun, and I can have all the settings set to maximum. I ran some benchmarks in 3DMark and got 82,558 in Night Raid, 18,934 in Time Spy and 9,739 in Time Spy Extreme, all of which are pretty fine results. Of course, none of this helps when you have a 130GB update to Flight Simulator to download, and I was grateful for my gigabit fibre internet connection.

Well, until I decided to check for BIOS, firmware and other updates. We must remember that Chillblast is using well-known industry-standard components to build your computer. This should be compared to, for example, a Dell computer, where Dell itself builds an updater tool. This would normally allow me to update everything – drivers, firmware and so forth – from one place.

Of course, I needed to go to the Asus site to get its updater tools. At this point, I entered a whole world of pain. It would be hard to create a more miserable, confusing and opaque set of tools, but Asus gets the gold star. Battling with this took a lot of time, and hassle, and my confidence in Asus has taken a battering. This is somewhat disappointing: long-term readers will remember how, in the late 1990s, I imported an Asus motherboard from the USA and fitted it with two Pentium Pro 100 CPUs. If I remember rightly, that cost £800 for the board and £800 each for the CPUs. It was a Windows NT machine that absolutely screamed.

Maybe I was just looking in the wrong place, but I was on the support page for this new motherboard. The management tools are a mess, and Asus really needs to sort this out. Keeping firmware up to date is very important when it covers not just the motherboard, but integrated services such as Wi-Fi, Ethernet, sound and almost everything else. I’m not blaming Chillblast; its service has been excellent. But I expected more from Asus.

jon@jonhoneyball.com

PAUL OCKENDEN

“If you try to upgrade in situ you’ll just end up with a shedload of things that won’t work properly”

Paul helps a reader with a poorly Raspberry Pi, and then starts identifying the birds in his garden using technology

I received a question this month from a reader (who wishes to remain anonymous) who is having lots of problems upgrading his Raspberry Pi 400 to the latest OS version. He’d tried to follow several websites advising on the steps needed, but had ended up with a Pi that kept giving configuration errors when trying to do stuff via the command line, and where the mouse had become very sluggish and kept overshooting when using the Raspberry Pi desktop. He wondered what the best course of action would be in this case.

First, there’s no need to feel bad about being in this situation. I’ve been there myself, including the sluggish mouse thing, which sometimes seems to happen when trying to upgrade from Buster (version 10) to Bullseye (version 11).

Unfortunately, there’s only one sure-fire way to recover from this and that’s to re-image the SD card in your Pi and start again from scratch. I know that’s a real pain, especially if you have lots of software installed on the machine, but it’s the only option when your Raspberry Pi goes haywire like this. Otherwise, if you try to upgrade in situ you’ll just end up with a shedload of things that won’t work properly and broken dependencies. You’ll also have lots of pain when you try to do future updates.

For all that people complain about Windows Update, and to a lesser degree the update process on a Mac, in my experience Unixy-type systems just can’t compete. I always laugh when people say things like, “I wish Windows had apt-get available.” Yes, it’s fine for small regular incremental updates within one major operating system version, but if you leave too big a gap between updates, or try to upgrade to a newer base version, then you’re often lining yourself up for a world of pain. People will tell you to just edit this word in a config file and it will all work perfectly, but I’m here to tell you that so often it won’t.

If you find yourself in this position, I’d very strongly recommend starting over. It’s a bit like how Windows was a decade or two back – it was always best in those days to completely reinstall the operating system every year or two, to keep things running fast and smooth. Windows is a lot better now, and will usually avoid getting itself into a mess, but the various flavours of Unix haven’t quite caught up.

So my advice to our shy reader is to download the latest copy of the Raspberry Pi Imager from raspberrypi.com/software. Please don’t use NOOBS, or New Out Of the Box Software to give it its full name, which was the previous SD card-based installer – NOOBS is no longer supported and the Imager is so much better.

Another top tip is to make sure you’re using a decent microSD card for this. If the one you’re currently using is a no-name brand, or was labelled something good such as SanDisk but was suspiciously cheap when you bought it, or if the card has previously had lots of wear and tear (perhaps sitting inside something like a security camera), then chuck it in the bin and buy a new one. SD cards aren’t expensive, but make sure that you’re buying from a reliable source as there are so many fakes around these days. Even on websites such as Amazon you need to ensure that the seller (which isn’t always Amazon) is reputable.

Paul owns an agency that helps businesses exploit the web, from sales to marketing @PaulOckenden

“Windows is a lot better now, but the various flavours of Unix haven’t quite caught up”

BELOW A Pi update resulted in one reader’s Pi 400 acting very sluggishly

Early Raspberry Pis had limitations on what size cards could be used, but the Pi 400 doesn’t. Still, unless you’ll be using the machine for massive documents, video editing or audio work, a 16GB or 32GB card is probably fine.

From within the Raspberry Pi Imager you get a choice of operating systems that you can download. The default is a 32-bit copy of Debian with the Raspberry Pi Desktop software installed. It’s a good choice for many people, but perhaps not the best one for the reader’s Raspberry Pi 400. If you click on the “Raspberry Pi OS (other)” you’ll see several options. The main 32-bit OS is available in both “lite” and “full” options, and there are also things such as 64-bit versions and legacy versions (which are basically Buster rather than Bullseye).

The 64-bit versions are fairly new, having only come out of beta this year, and they have advantages for some applications (I’ll come on to one in a bit), but for most normal users, right now, I’d recommend sticking with the 32-bit OS versions. That advice will probably change in the future, but you’re less likely to run into problems for the moment.

The Raspberry Pi 400 is the “Pi 4 built into a keyboard” model, and it’s the one that people often use as a lightweight desktop replacement. For this reason I’d recommend ignoring the default OS suggested by the Raspberry Pi Imager and instead selecting “Raspberry Pi OS Full (32-bit)”. The full version includes the various recommended applications, including the LibreOffice suite and Claws Mail, as well as a bigger range of packages that will be useful when installing other software. The full install is much better suited to a Pi that’s going to be used in a desktop environment.

There’s an option within the Raspberry Pi Imager to preset things such as your wireless network details (click on the small cog to access this), but I’ve found that it doesn’t always work so I normally ignore it; you still get the chance to set up the Wi-Fi and other stuff on the initial boot of the machine. At this point, it will also go off and grab the latest updates because the files that the Imager uses aren’t always bang up to date.

Once you’re up and running, it will be like having a new machine. You might also find that Bullseye seems a bit faster than Buster – that’s true with new machines such as the Raspberry Pi 400, which is built on the Pi 4 platform, but you might not notice any speed increase with some older Pis.

My final piece of advice is to keep the OS updated regularly. If you only switch the machine on once every six months, you’ll find you might have missed several updates, and it’s in cases like that where things can start to go horribly wrong again. Regular updating is the key to a happy Pi.

Four and twenty blackbirds…

People are becoming ever more interested in attracting wildlife to their gardens, balconies or window boxes, and identifying those species that do pay a visit. I’ve written before about using wireless security cameras as wildlife cameras when aimed at a nest box or bird feeder – some of them are ideal for this, and much more convenient than a conventional wildlife camera (often known as a trail camera). With the latter you’ll have to regularly open the camera, take the memory card out and download any photos or videos. With a wireless security camera, however, everything can be uploaded in near real-time to the cloud, and you can even get alerts or emails for interesting things happening on your feeders or in your nest boxes. Some wireless security cameras even come with solar panels, so can be mounted in hard-to-reach places, though I once had a squirrel chew through the cable connecting a camera to its solar panel!

Identifying bird visitors is great fun for kids, and both the RSPB and British Trust for Ornithology run separate schemes where people count the birds visiting their garden, one called Birdwatch and the other, somewhat confusingly, called BirdWatch. People’s Front of Judea, anyone?

Wildlife watchers usually try to identify species visually, and that’s great if you have a degree of expertise, but it isn’t always easy: a junior such-and-such might look much the same as an adult female this-and-that. An alternative is to identify the birds from their calls, and this is a job that’s perfectly suited to a bit of automation.

There are quite a few things out there that do this, but one seems to stand head and shoulders above most of the others. The starting point is something called BirdNET, which you’ll find at birdnet.cornell.edu. BirdNET was created and is run by the Cornell Lab of Ornithology in New York, and it uses neural networks and machine learning to identify bird species within sound clips. You can run the full version of BirdNET locally if you want to: the source is available at pcpro.link/335birdnet. But an easier introduction is simply to download the BirdNET app, which is available for both Android and iOS from the appropriate app stores. The app can seem clunky at first, but once you get the hang of the fact that you need to separately record and analyse the clips it all starts to make more sense. There’s actually a much easier-to-use app called Merlin Bird ID, which is also available from the Cornell Lab, but stick with the BirdNET app if you can because it leads on perfectly to something that’s really quite fun.

ABOVE The Pi Imager will write a new OS to a microSD card

“I once had a squirrel chew through a cable connecting the camera to its solar panel”

BELOW The BirdNET Android app isn’t particularly intuitive

…baked in a Pi

That fun thing is BirdNET-Pi and, as you can probably tell from the name, it’s BirdNET running on a Raspberry Pi. I’m using a Raspberry Pi 4B, but it will also run on a 3B+; I’d recommend a Pi 4 if you can. Importantly, it needs the 64-bit version of Bullseye; this is the exception to the 32-bit rule that I wrote about earlier in the column.

Unlike the BirdNET app, BirdNET-Pi runs continuously, so you just leave it running 24/7 and it will tell you which birds have visited. Well, the noisy ones at least.

You’ll need some kind of sound input. This can either be a USB sound card with a microphone plugged in, or just a USB microphone. The quality of the microphone can make a big difference, and if you Google it you’ll find great threads from people arguing about the best microphone to use. I just went for a £14 conference room mic (pcpro.link/335mic), which I’ve fed through a window and left sitting on a windowsill outside. It works reasonably well but it’s by no means perfect: first, it’s very susceptible to wind noise (which, amusingly, BirdNET-Pi identifies as a Whiskered Tern); and second, it’s not waterproof, so I’m sure it’s going to die horribly when we have the next storm.

In time I’ll buy a better microphone and fix it to the underside of something so that it won’t get rained on. I’ll also make sure that the better mic comes with a windsock.

You’ll find a website for BirdNET-Pi at birdnetpi.com. Although this gives you instructions on how to install it, there’s little other information – the website doesn’t even explain how to run the thing. So let’s cover that here.

After the install you’ll find that your Pi will reboot, but when it comes back up there won’t be any sign of BirdNET. If you use the Pi Desktop, there will be no new applications installed or anything like that. Instead, you need to fire up a web browser and point it to birdnetpi.local – you can do this on the Raspberry Pi itself if you have a display attached, or from any machine on your local network. Some parts of the system are protected by basic authentication, so if a login box appears you’ll need to enter the username “birdnet” and a blank password. You can set a password for better security in the website’s settings page.
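
Once it’s up, you can poke the protected pages from a script, too. Here’s a minimal Python sketch using the requests library – note that the path below is an assumption for illustration, as the exact protected URLs vary between BirdNET-Pi versions:

```python
# Fetch a BirdNET-Pi page that sits behind the basic authentication
# described above. Requires requests (pip install requests).
import requests

resp = requests.get(
    "http://birdnetpi.local/stats",   # hypothetical protected path
    auth=("birdnet", ""),             # default username, blank password
    timeout=10,
)
resp.raise_for_status()
print(resp.status_code, len(resp.text), "bytes received")
```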

The first thing you should do is plug in your USB microphone, switch to the spectrogram page, and sing or whistle some high-pitched notes. The spectrogram doesn’t update in real-time: it records chunks of a few seconds and then displays them. But if you can see your notes appearing on the display when it updates, then you can be sure the Pi is seeing the microphone properly.

The next step is to click on Tools, Settings, and put in your latitude and longitude. This is because the AI in BirdNET has an idea of where particular birds can be found, so with the right location set you’re more likely to get accurate results. There’s a link from that page to a helper at latlong.net, which will help you find the values for your location. You just need to be aware that the latlong site shows six decimal places but BirdNET-Pi accepts only four, and won’t allow you to save if you just copy and paste the full six. You’ll need to round off those last two digits.
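
If you’re rounding the values by hand, a Python one-liner saves squinting (the coordinates here are invented for illustration):

```python
# latlong.net gives six decimal places; BirdNET-Pi only accepts four.
lat, lon = 53.801279, -1.548567   # made-up example coordinates

print(round(lat, 4), round(lon, 4))   # 53.8013 -1.5486
```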

You’ll see on the Settings page that you can also set the system to email or tweet you when it detects a new species, but that’s something to set up once you’ve had it running for a few days. For now, just scroll down to the bottom of the page and click Update Settings.

While you’re here, there’s one more thing I’d recommend: head into Advanced Settings and change the recording length from 15 to 30 seconds. When it’s running, BirdNET-Pi continually records chunks of audio and then analyses each one for bird calls. Some birds have quite long sequences of calls, so with 30-second chunks rather than 15 I find you’re more likely to get an accurate identification.

That’s pretty much it. From there you just need to leave it running, doing its thing. What’s great is that for every species it detects it will show the spectrogram, and you can also play back the recording. If you click on the View Log page you can actually see what goes on under the hood. For every recorded sound clip, BirdNET will return a number of possible matches, each with a probability. Many of them will be silly suggestions, but with very small probabilities. By default, BirdNET-Pi will only record matches with scores of 0.7 or better, but you can tweak that on the Advanced Settings page if you want.
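
To make that concrete, here’s the shape of the thing in a few lines of Python – the candidate list is invented and BirdNET-Pi’s real log format differs, but the cut-off logic is just this:

```python
# Mimic BirdNET-Pi's confidence threshold over a made-up candidate list.
candidates = [
    ("Common Starling", 0.91),
    ("Eurasian Blackbird", 0.84),
    ("Whiskered Tern", 0.12),   # wind noise, as mentioned earlier
]

THRESHOLD = 0.7   # BirdNET-Pi's default; adjustable in Advanced Settings

for species, score in candidates:
    if score >= THRESHOLD:
        print(f"Detected {species} (confidence {score:.2f})")
```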

It’s great fun for adults, and both entertaining and educational for kids, too. When you get more into it you can start uploading your data to websites such as app.birdweather.com, but for now just sit back and enjoy letting a Raspberry Pi help identify the birds visiting your garden. As I write this, for the past couple of days our garden has been home to a murmuration of starlings, and so they are dominating the various charts and graphs that BirdNET-Pi produces.

ABOVE BirdNET-Pi shows a spectrogram for each detection

“It’s great fun for adults, and both entertaining and educational for kids”

BELOW BirdNET-Pi gives you lots of data about your garden visitors

LEE GRANT “I possess a passion for manuals and technical documentation that fuels my omniscient façade”

Lee diagnoses a PC that appears to work best on its side, before cursing manufacturers (hello, Dell) for using non-standard components

I suppose there is an expectation that a computer shop should know its onions. Certainly, many of my clients would testify that I have an encyclopaedic store of data that provides instant recall of a multitude of solutions for whatever smouldering pile of broken tech comes through our door. In reality, I can barely remember my own... er, I’m sorry, have we met?

What I possess is a passion for manuals and technical documentation that fuels my omniscient façade and fills the brain gaps where once-known knowledge has vanished. 2012 doesn’t seem that long ago, but the technology of that era was different, and that’s where Nathan’s machine starts.

He’s one of those kids who loves tech and will spend hours on YouTube watching influencers demonstrating their latest “purchases”. Nathan works hard at school and was rewarded with a GTX 1660 GPU and some RAM to perk up the gaming prowess of his PC, a Troughton i7 with MSI motherboard and a couple of SSDs for extra pep.

It was a reliable little unit until Nathan installed his new technology. Dad also had a poke around, making extensive notes that I won’t repeat here as we’ll crash right through the guest column and into Davey Winder’s opening paragraph. In short, they’d pulled the CMOS battery and got one reboot before the battery needed pulling again. They’d worked out that a BIOS flash could help, but someone had installed the final BIOS back in 2013, closing that avenue of diagnosis. RAM sticks (and there were now four) were cycled, but still, nothing. Dad’s last comment was that the machine seemed to respond better when it was on its side, rather than upright, and let me assure you it’s a desperate technician who inverts a PC to see if that solves the problem.

For those who like to play along at home, I’ll step you through the troubleshooting process, as I’m going to ignore technician’s rule #1: recreate the issue. Usually I’d fire up the machine and observe its behaviour, as it’s not uncommon for a PC to behave itself the moment it’s plonked on the workbench, but I’m concerned that the “machine works better on its side” issue may be an electrical short. A visual inspection doesn’t reveal any obvious signs of an electrical fault (blown caps or scorch marks), and the 850W PSU offers more than enough juice to power a GTX 1660 card.

I pull the panels, cut the cable ties and examine the plugs for loose connections. All seems well, so I attack it with a screwdriver until the motherboard is out, checking that nothing was trapped behind it. Nothing was, and the motherboard spacers screwed into the baseplate were also correctly aligned. A visual inspection of the motherboard told me that the GPU, CPU and heatsink were okay, the clump of RAM sticks was firmly seated and no fragments dropped from the PSU when I gave it a rattle.

With no obvious short presenting, I put the case and SSDs to one side, connected the bare components on my workbench, and turned it on. It didn’t go bang, but it didn’t go beep either. A few fans shrugged as we waited for a display image, which wouldn’t come. A useful tool at this juncture is a piezo speaker, which used to be included with PC cases and was attached to the motherboard to sound a beep at POST (the power-on self-test). They’ve fallen out of fashion but can be picked up for a few pounds and, at the very least, will give an indication of whether the motherboard has died. I wired one into Nathan’s machine, pushed the power button and was deafened by silence. The next step was to shock the motherboard by doing something unexpected. On the proviso that it’s not dead, firing up a motherboard without RAM will outrage it into beeping loudly in protest. Nathan’s did, which was a positive sign that the motherboard had some functionality and that my attention needed to focus on what was stopping the POST process.

Lee Grant and his wife have run a repair shop in West Yorkshire for over 15 years @userfriendlypc

“It’s a desperate technician who inverts a PC to see if that solves the problem”

RIGHT Here’s a clue to the source of Nathan’s woes (or at least, one of them)

Swap shop

I replaced Nathan’s new GPU and random RAM collection (is that RRAM?) with a test GPU (an Nvidia GT 710) and a basic stick of DDR3. The machine displayed a POST screen, and I performed the secret happy-dance known only to PC technicians. But you may wonder why I didn’t pull these components earlier.

There were a few routes to arrive at this point but, when trying to recreate a fault, it’s important not to change too many things at once, so I swapped out my test GPU for Nathan’s shiny GTX 1660 and watched in joy as the POST screen repeatedly appeared. This left me with a pile of mismatched RAM sticks to work through: Nathan’s old RAM and his two new sticks of Patriot Viper Xtreme DDR3. Oddly, they were marked as a quad-channel kit, which means they’re sold as a matched set of four DIMMs, but someone is obviously selling split sticks.

Installing this RAM back into the machine stopped the POST process and, to resurrect it, required a CMOS reset, just as Nathan’s dad had said. So, the fault was leaning towards RAM compatibility, and a dig around in the product specs gave me a clue. The new memory was designed to be overclocked and, to facilitate this, the manufacturers designed the DIMMs to run at 1.65V. Nathan’s motherboard, having endured countless resets to default settings, only pumped 1.5V to the RAM, and that missing 0.15V made the difference. For the voltage curious, new-fangled DDR5 requires between 1.1V and 1.25V, so much more computational power for less energy.

Using my test RAM, I adjusted the RAM voltages in the BIOS, saved, shut down, swapped in Nathan’s RAM and then pressed the on button. Voilà! Due to speed differences, I decided not to use Nathan’s old RAM and left him on 8GB. Interestingly, upping the RAM speeds to their 1,866MHz levels via manual tuning or by invoking XMP brought the machine back to its knees, so 1,333MHz was as good as it was going to get.

I applaud Nathan and his dad for giving it a whirl, a fabulous effort, but matching new RAM to a motherboard that first fired up a decade earlier was always going to have compatibility risks. Remember, sometimes getting hardware working is the best achievable result; getting something working perfectly may be impossible.

Not a fan

It’s not unusual for the most obvious faults to be extremely well hidden. Jane’s office machine kept shutting down at random points during the day. She works in media branding and although she spends a lot of time slicing and splicing video for TikTok, it’s nothing that would give an iPad much to sweat about. Using isolation techniques, I stabilised her PC using the Ryzen’s on-board graphics instead of the dedicated GPU, but as her graphics card wasn’t dead, I dug further.

There’s something hypnotic about watching a cooling fan spin, especially Jane’s GPU cooler as it rotated with cool aloofness and no real dedication to the task. I removed a few screws, and the fan fell to pieces: the rotational bearings had shattered. Sometimes it’s possible to cobble a replacement together from spares and the remnants of the original, but the Zotac fan was a sealed unit, taking repair off the menu. I rummaged in my tub full of GPU fans and none of them matched the Zotac in terms of fixings alignment, so I had to ship something from China to repair the graphics card.

This gave me plenty of time to wonder why GPU cooling fans can’t use standard mounting points like desktop case fans. Although the fan’s design and layout could be unique to that card, the holes to thread the mounting screws through could easily be standardised, irrespective of rotational speed and rates of air displacement. I’ve been doing this job too long to be surprised by wonky screw holes, but as manufacturers (hopefully) begin to design products with repair in mind, we may see standardisations that don’t impact on functionality. One manufacturer that I hope learns this lesson well is Dell.

We don’t have enough space in this magazine for me to explain the contempt I have for proprietary components and fittings, so here’s a simple example. A few months ago, a Dell Optiplex 9020 arrived that seemed to suffer from a lack of power. Once some basic visual checks had been made, the first fence to clear was testing the PSU, but this Dell had hidden a trap inside Becher’s Brook.

The motherboard had a modified 8-pin power socket, rather than the usual 20+4-pin ATX standard. The Optiplex 9020 has several form factors within its range, and this bizarre socket allows Dell to fit non-standard PSUs inside bespoke shaped cases. The Dell machine doing nothing on my bench actually used a standard-sized midi tower, but a technical decision had been made to save as much cash as possible in the manufacturing process by utilising non-standard connectors.

The PSU was rated at 290W, and a quick glance at the spec showed that the +12V rail was only rated for 14A (14A × 12V = 168W), which is about 50% of what a standard PSU delivers. It’s an energy-efficient design that delivers a gut punch for the customer, as the bespoke socket removes the owner’s options to add a more power-hungry GPU or to throw in some additional storage drives. Even a fundamental PSU failure requires this exact Dell part, which runs to £75 for a new unit (compared to less than £25 for a 400W Kolink).

To frustrate the customer a little more, my suppliers had a 30-day wait on delivery. To allow me to move forward with this repair, I ordered a converter cable, the Acme 3000 Fire-Hazard Pro exclusive to eBay, which allowed me to turn the Dell’s eight pins into 24 and eliminate the PSU from my enquiries. We found the motherboard guilty of all charges.

This sort of converter cable is fine for testing, but not ideal for the long term. In our industry, fire is rare, but when it happens, the source of the conflagration is usually the cheap-as-chips Molex-to-SATA power adapter used to hook in drives when the PSU has too few SATA ports. Only a few weeks ago, a smouldering PC desktop arrived with a burnt optical drive caused by one of these adapters. The fire had melted the cables on the PSU, killing the machine, and if this tale of using cheap converters to get power into the components of your PC rings any bells, then please think about getting a PSU with the correct number of native ports. PC Pro readers are extremely valuable commodities, and we want to look after you.

ABOVE A shattered fan can be hard to replace as mounting points aren’t standard

“I’ve been doing this job too long to be surprised by wonky screw holes”

BELOW A cheap power adapter can be a fire hazard

Guest columnist

DR ROIS NI THUAMA “The days of escaping legal liability as a result of poorly configured code are gone”

Vulnerabilities in Boeing’s software led to two fatal crashes and a landmark decision that will affect anyone involved in software development

In case law there are a few times when a single landmark decision reshapes or reframes the legal landscape. At the tail end of last year, that’s exactly what happened – and anyone involved in software development should take note.

The event was a memorandum decision from the Court of Chancery of the State of Delaware, which framed its response to a matter relating to software vulnerability. I believe this decision, which caused the defendant firm to settle for an eye-watering quarter of a billion dollars, will change the commercial landscape for good.

Up to this point, there have been numerous instances where vulnerabilities (vulns) in IT systems have been left unaddressed, yet directors have managed a lucky escape. But times are changing.

There are two key lessons for anyone involved with software development. First, shareholders, investors and their lawyers are now equipped with a better understanding of exploits and vulns and the steps that should be taken to address known significant threats. Second, they are no longer prepared to stomach losses when management has failed to exercise reasonable care, skill and diligence.

This combination of a better understanding coupled with investors no longer prepared to weather poor decision-making means the days of escaping legal liability as a result of poorly configured code, failing to address reasonably foreseeable vulns or overlooking known threats are gone. Shareholders are awake and class actions will follow. In other words, management’s luck has run out.

This decision will usher in a new sense of urgency for businesses to adopt global industry standards as a minimum, to address the reasonably foreseeable vulns and exploits. From now on, you must avoid the avoidable or face the consequences.

Rois Ni Thuama PhD is an expert in risk mitigation and head of cyber governance at Red Sift. @rois_cyberstuff

“Shareholders are awake and class actions will follow. Management’s luck has run out”

RIGHT A design flaw in Boeing’s 737 Max caused the aircraft’s nose to rise upwards

Cases that changed the world

In 1897, a boot and shoe manufacturer called Mr Salomon restructured his business to address his four sons’ concerns that they be considered more than mere servants. The new structure would expand their interest in the business to give them part ownership.

To facilitate this new arrangement, the business was incorporated as a limited company. When the company ran into financial trouble, the debt, it was decided, belonged to the company alone. Whatever debt could not be satisfied by the firm remained unpaid.

In other words, the loss remained with the lender. The lenders could not recover their debt by pursuing the members of the business. The concept of the company as a legal person separate and distinct from its members was considered good law and would be followed not just in England and Wales but beyond into other common law countries where it was considered persuasive precedent (Salomon vs Salomon, 1897).

Thirty-five years later, a seminal case that originally began under Scots Law – Donoghue vs Stevenson – found its way into the House of Lords. Mrs Donoghue joined her friend in a cafe in Paisley one Sunday, was treated to a drink, and discovered to her horror the remains of a slug in her ginger float. As the claimant was not party to the contract the Lords considered whether the woman who fell ill after drinking this cocktail was owed a duty of care by the manufacturer.

In their decision, the Lords reasoned that manufacturers owe a duty of care to the consumers they intend to use their products, not merely to those who purchase them. The case launched a thousand lawsuits. More accurately, it probably launched hundreds of thousands of lawsuits, if not millions. And a whole new area of law, the tort of negligence, was born.

These are only two cases where legal decisions have had wide-ranging ramifications for the commercial world. It’s no exaggeration to say that the recent Boeing case, which we will come to shortly, will have a profound effect on directors, investors and insurance companies, and we will see the rapid acceleration of industry-standard practices around the world.

Lucky escape for Sony

In November 2014, Sony Pictures Entertainment (SPE) fell foul of a cyberattack launched by a nation-state actor. Shortly after the attack began, SPE ground to a standstill. Half of its personnel couldn’t access their computers, while half of its servers had been wiped of all data.

Sensitive information relating to contracts was released into the wild and unflattering comments contained in emails found their way into the media. Five films that were set to be released were made available on the internet. The business disruption, financial losses and reputational damage are difficult to assess but are likely to be astronomical. It would be difficult to calculate revenue lost because of the early release of five films, and impossible to calculate losses arising from any work in development or pre-production that was destroyed or deleted.

The prevailing theory is that North Korea used a phishing email to gain entry, though it has been suggested that it would have been as easy to have someone enter the site and launch the attack in person. According to experts, the physical security brought in to assist post-attack was woefully inadequate. Once they had gained access, poor security and the absence of basic cyber hygiene allowed the bad actors to run amok in SPE’s systems.

It wasn’t all bad luck for SPE: its corporate structure meant investors were a degree of separation away from the targeted entity. SPE is part of Sony Group and it is Sony Group that is listed on the New York Stock Exchange. Whether investors would have been more active if they had felt losses directly is difficult to say.

SPE also benefited from the timing. In 2014, investors were easier to hoodwink. Attributing the attack to a nation-state actor gave the whole debacle an air of inevitability. What can you do if attacked by a nation state?

Well, for one, you could start by taking reasonable care and doing the minimum, such as implementing a sound physical security policy and making sure that you address known significant cyber threats such as phishing emails, right?

Right. This is where the rubber meets the runway.

Shareholder class actions

The facts surrounding the Boeing case are desperately sad. For the purposes of corporate law and shareholder class action, that nearly 400 people were killed in two separate incidents does not form part of the submission or the reasoning. It is mentioned here because it seems callous to overlook the human cost of this corporate error.

The opinion itself runs to over 100 pages. This is intended as a snapshot of what happened, why the claimant shareholders succeeded in their claim and finally what directors should do to avoid liability.

A piece of software, the Maneuvering Characteristics Augmentation System (MCAS), was designed as a workaround to solve an engineering problem. This problem had been baked in when Boeing, in its haste to keep pace with a competitor, rushed the technical drawing phase. Boeing’s new plane, the 737 Max, would have a larger engine. But that shifted the plane’s centre of gravity, causing the plane to send its nose skywards.

Rather than return to the drawing board, Boeing created MCAS, which would lift the tail and push the nose down. The software was triggered by a single sensor, and Boeing knew this sensor was vulnerable to false readings.

Astute readers will be very worried. You’ve just read that the sensor is a single point of failure and that it was known not to work properly. On both occasions, minutes after take-off, after experiencing difficulty the pilots searched the handbook, followed best practice but could not regain control of the plane. No-one had explained this problem to pilots or regulators.

As a result of these air disasters, Boeing suffered significant business disruption: the entire 737 Max fleet was grounded, resulting in financial losses and reputational harm to the company. In reaching its decision, the court reasoned that the directors demonstrated a complete failure of oversight, in that they neither established a reporting system nor addressed known, significant problems.

Any corporation that fails to implement either or both of these elements listed above – and suffers business disruption which leads to financial losses or reputational harm that adversely affects the firm or its share value – is fast-tracking its way to shareholder class actions. With respect to bad coding, unaddressed software vulns or cyber threats, this is the matter that lawyers will look to.

Lessons for us all

Protecting your business from myriad threats may seem daunting, but there are good general rules to follow. To quote a report from the Marchioness Disaster in 1989, “the purpose of risk assessment is to try to assess relevant risks in advance so that appropriate steps can be taken to put measures in place to eliminate or minimise them”. It is elementary risk management to address known vulns or threats, whether it’s in a widget, policy, protocols or code.

So what sources might IT directors, engineers or consultants rely on to assist them in avoiding liability? Building in defensibility is about taking reasonable care.

That means any information that is revealed by a reasonable search ought to be addressed – or there must be extensive contemporaneous notes justifying the decision not to implement and providing details of the decision maker(s).

For example, IT directors would do well to implement a well-regarded framework, such as that provided by the National Institute of Standards and Technology. Moreover, they should ensure they read reports directed at their sector. For example, Interpol wrote a report for healthcare agencies across Europe warning about ransomware and advising on solutions and mitigation techniques 12 months before the Irish Health Service Executive was targeted in the Conti ransomware attack. Similarly, the National Cyber Security Centre published Cyber Threats to the UK Legal Sector. Whatever sector your business is in, directors can’t afford to ignore this accessible information.

The single takeaway is this. It’s not the unknown or unknowable events that could represent an existential threat to your business: it’s what you do know but don’t act on that could be the most painful for all.

ABOVE Sony Pictures avoided a class action lawsuit after its systems were hacked

“It’s what you do know but don’t act on that could be the most painful”

BELOW The entire 737 Max fleet was grounded following two fatal crashes

DAVEY WINDER

A passwordless future…

“Why are the stupid rules there in the first place? Because someone had to tick a compliance box”

Passwords aren’t going anywhere, Davey is saddened to report, but that hasn’t stopped him dreaming of a passwordless future again

I hate passwords with a vengeance, in the main because they are so badly abused, from a security perspective, by so many people. I’m not just talking about the person on the Clapham omnibus who keeps their passwords simple and shares them between multiple accounts and services, but service providers as well.

In 2022, I would have liked to think that the days of stupidly short character limits, along with rules forbidding special characters, would be long gone, but that’s not the case. Yes, Virgin Media, I’m looking firmly in your “email passwords can be no longer than ten characters and contain no special characters” direction.

Of course, Virgin Media isn’t the only outfit in strong-password denial. It’s still possible to find those who see password creation as some kind of Krypton Factor challenge, where you have to use at least one number, one uppercase letter and one special character – except for certain banned special characters, of course – oh, and no repetition, all within a given maximum password length.

Not only is this daft, it’s also insecure: it makes life easier for those trying to crack your password. If I know the maximum length of a string and the formatting rules applied to it, it becomes a lot less time-consuming for my password-cracking tools to discover it.
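To put rough numbers on that, here’s a back-of-envelope sketch; the rule sets are illustrative assumptions rather than any particular provider’s:

    import math

    # Keyspace comparison: a capped, rule-bound password versus a longer,
    # unconstrained one. Character-set sizes are illustrative assumptions.
    letters_and_digits = 26 + 26 + 10            # no special characters allowed
    cramped = letters_and_digits ** 10           # ten-character maximum

    full_printable = letters_and_digits + 32     # specials back on the menu
    roomy = full_printable ** 25                 # a 25-character baseline

    print(f"10 chars, no specials: ~2^{math.log2(cramped):.0f} guesses")
    print(f"25 chars, full set:    ~2^{math.log2(roomy):.0f} guesses")
    # Roughly 2^60 versus 2^164: every rule that shrinks the character set
    # or caps the length hands the cracking rig a smaller space to search.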

And why are these stupid rules there in the first place? Because someone, at some point before login security hygiene realised the error of its ways, had to tick a compliance box and that legacy has never gone away. This gets even more bizarre when, in the case of Virgin Media email accounts, you look at its own recommendations for creating a strong password, which includes things it won’t let its own customers do. Things such as using more than ten characters (“your password will be more secure and harder to crack, the longer it is”) or special characters (“strong passwords include... symbols (? / !) or special characters (@ + ~)”), for example.

That particular password advice page (pcpro.link/335virgin) gets it wrong when it says you should aim for “8 – 12 characters” of password length. The days when such a short string was considered secure are long since gone; I use 25 characters as my secure baseline now, and certain high-value accounts get ramped up to 50. Where the Virgin Media advice gets it right is in saying that a password manager makes this much easier to accomplish, not only for creating a random, long and secure password in the first place, but for using such passwords without being some kind of memory savant. Well, not using them if you’re one of Virgin Media’s customers, obviously. Another bit of correct advice – to use two-factor authentication as a double-lock – is blunted somewhat by the fact that Virgin doesn’t support that either.
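For what it’s worth, generating such a password is almost a one-liner. Here’s a minimal sketch of what a password manager is doing under the hood, using nothing but Python’s standard library:

    import secrets
    import string

    def generate_password(length: int = 25) -> str:
        """Build a random password from the full printable set, drawn
        from a cryptographically secure source of randomness."""
        alphabet = string.ascii_letters + string.digits + string.punctuation
        return "".join(secrets.choice(alphabet) for _ in range(length))

    print(generate_password())    # everyday accounts: the 25-character baseline
    print(generate_password(50))  # high-value accounts get ramped up to 50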

Davey is a journalist and consultant specialising in privacy and security issues @happygeek

BELOW It’s virtually impossible to create a strong password for a Virgin Media account

This is where Apple, Google and Microsoft step forward in an unlikely alliance against password insecurity. The basis of the announcement, which can be found at pcpro.link/335alliance and was made by the three tech giants simultaneously, is to get rid of “password friction” by moving closer, more quickly, to a passwordless future.

As I’ve said time and time again, password managers are your friend; your very secure friend. Unfortunately, while password manager usage has taken off among tech-minded users, the general public considers these apps a step too far. Why so? Friction. It’s much easier, and takes less time, simply to use that weak password everywhere – until the inevitable day arrives when doing so leads to a data breach, or worse, and things come tumbling down around them.

The conclusion: better security and stronger password hygiene will only become something approaching any kind of norm if it comes with as little friction as possible. Hence the move by these three software and services behemoths to commit to a joint effort that extends support for a common passwordless sign-in standard.

That standard comes from the Fast IDentity Online Alliance (the FIDO Alliance), and it uses mobile devices to authenticate to apps and websites instead of passwords. The most important part of this “passwordless pact” is that it will happen cross-platform rather than behind a proprietary lock. The idea is that you will be able to, for example, log into an account on your laptop using your phone (assuming it’s in range) by tapping an automatic notification asking if that’s you trying to sign in – or, at worst, by scanning your fingerprint, using Face ID or entering a PIN. I’m all in favour of this move towards less friction – note the distinction between less friction and frictionless – in a cross-platform methodology that provides stronger authentication for people who don’t understand what good security is, let alone care. Using your phone as a passkey store makes perfect sense from the something you have, something you are, something you know perspective: smartphone | biometric | PIN. An iPhone user is already used to using Face ID, as are Android users with fingerprint scanning and laptop users with Windows Hello.
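The plumbing underneath is classic public-key challenge-response. As a toy sketch of that principle – deliberately ignoring the real FIDO2/WebAuthn message formats, attestation and ceremony details – here’s the idea expressed with Python’s cryptography package:

    import secrets
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    # Enrolment: the phone mints a keypair and only the PUBLIC key goes to
    # the website. The private key stays in the handset's secure hardware,
    # unlocked by biometric or PIN ("something you are / something you know").
    device_private_key = ec.generate_private_key(ec.SECP256R1())
    server_stored_public_key = device_private_key.public_key()

    # Login: the site sends a fresh random challenge...
    challenge = secrets.token_bytes(32)

    # ...the unlocked phone signs it (the "tap the notification" moment)...
    signature = device_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

    # ...and the site verifies against the stored public key. No shared
    # secret crosses the wire, so there is nothing for a phisher to harvest.
    server_stored_public_key.verify(signature, challenge,
                                    ec.ECDSA(hashes.SHA256()))
    print("challenge signed and verified: user authenticated")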

Sure, it’s not perfect. Nothing is ever perfect, and that is truer in security terms than most things. However, if a threat actor needs to have physical access to your smartphone and your login username and your face (or fingerprints or PIN), then that’s a pretty secure scenario for the vast majority of users and use cases. If you’re an outlier in terms of risk then the chances are you’ll already be using strengthened authentication measures anyway.

As my friend, Jake Moore, a former digital forensics police officer and current global cyber security advisor at ESET, said: “It is encouraging that Microsoft, Google and Apple are attempting to pave the way to make account access secure as well as convenient. This isn’t something that can be achieved overnight, but it highlights that more needs to be done when it comes to password security. Cybercriminals will inevitably attempt to circumnavigate by looking for ways to exploit this method as nothing remains hackproof, but like with any early adoption of new technology, this is a great start and we are likely to see a decent version of this in the near future.”

ABOVE Microsoft is among the tech giants leading the way to a passwordless future

Threatscape evolution

Verizon’s Data Breach Investigations Report (pcpro.link/335DBIR) is a big thing in the security world, and for good reason: it brings perspective to cyber reporting. The 100-page 2022 edition, marking the report’s 15th year, did not disappoint.

My main takeaway is that ransomware has continued on its upward trajectory, with the 13% increase “as big as the last five years combined”. I recommend you read the report yourself, as there’s far too much data to cover here unless I devote the entire column this month and next to it, which I’m not going to do.

One more key finding is that error remains a constant cause of breaches, with misconfigured cloud storage sitting atop the pile. Another is that supply chain breaches loom large, with more than 60% of system intrusion incidents last year involving the supply chain.

That’s my short take. Here are some other voices giving their responses in terms of what it all means to business.

Saryu Nayyar is the CEO at Gurucul and says that while most of the DBIR report talks about threatscape evolution, it also confirms that just shoring up defences isn’t enough to prevent a breach. “The research points to the fact that based on human behaviours and poor supply chain visibility, a compromise is all but inevitable, especially if the target of a persistent and organised threat actor,” she said.

“This shifts the investment by CISO/CSOs and security teams to be in products and resources focused on security operations centre (SOC) transformation for monitoring and threat detection of both insider and external threats that are already inside the castle walls.”

This isn’t too much of a stretch, seeing as many existing security information and event management (SIEM) and extended detection and response (XDR) solutions don’t appear to have done much to stop determined threat actors, truth be told.

So what specifically needs to be done? “In order to achieve a successful SOC transformation, what is required is a more complete set of telemetry, advanced analytics, and trained, not rule-based, machine-learning models that adapt to both the organisation and variations in tools and techniques by threat actor groups,” said Nayyar. While I don’t see AI as a silver bullet, machine learning does have a role to play, especially (or perhaps solely) where automation is concerned. Automating manual tasks, prioritising and optimising resources as well as increasing detection and response speed, along with meaningful context and risk understanding, makes a lot of sense, does it not?
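To give a flavour of what “trained, not rule-based” can mean in practice, here’s a minimal sketch using scikit-learn; the login telemetry is invented and the model deliberately simple:

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Toy telemetry: one row per login event ->
    # [hour of day, MB transferred, failed attempts beforehand].
    # Real SOC pipelines would use far richer features than this.
    rng = np.random.default_rng(0)
    normal_logins = np.column_stack([
        rng.normal(10, 2, 500),    # mostly office hours
        rng.normal(50, 15, 500),   # typical transfer volumes
        rng.poisson(0.2, 500),     # the odd fat-fingered password
    ])
    model = IsolationForest(contamination=0.01, random_state=0)
    model.fit(normal_logins)

    # A 3am login, an exfiltration-sized transfer and a burst of failures:
    suspicious = np.array([[3.0, 900.0, 8.0]])
    print(model.predict(suspicious))  # -1 flags an outlier worth triaging

Nothing here needs a hand-written rule saying “flag 3am logins”: the model learns what normal looks like from the organisation’s own telemetry and scores departures from it.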

Mark Lamb, CEO of HighGround, isn’t at all surprised by the findings of the 2022 DBIR. “I think most people would agree we are seeing a huge rise in ransomware,” he said, “and that phishing, stolen credentials, misconfigurations and insiders remain the primary cause of breaches.”

As for the important lessons for businesses to learn from this, at the risk of stating the obvious, Lamb said prioritising defences against those attacks is essential. And he’s right, as none of them looks like vanishing over the horizon any time soon. But he also argues, rightly in my opinion, that practising good “cyber hygiene” and employing robust security tools alone are simply not enough.

“One of the biggest challenges that often leaves businesses weakest is they don’t fully understand their actual cybersecurity posture,” he said. “They deploy security tools and carry out training, but they don’t have an easy and accessible way to understand how they are helping reduce their risk, or if weaknesses still exist within their infrastructure that could be exploited maliciously.” Now we’re getting somewhere meaningful: a robust security posture requires the ability to understand not only how your security programmes (both products and people) handle incident response, but also how to discover the weaknesses that could lead to those incidents in the first place.

These are weaknesses I’ve already touched on, by fortunate coincidence,

ABOVE Ransomware has risen by 13% in the past year, according to Verizon

“Good cyber hygiene and robust security tools alone are simply not enough”

BELOW The Data Breach Investigations Report is essential reading


when talking about the password problem earlier. The report provides “further evidence around the dangers credentials present to organisations,” said Mike Newman, CEO of My1Login. “Not only are they the root cause of most data breaches, but they are also a top target for cybercriminals to steal when carrying out attacks.”

This is crystal clear in its simplicity: if attackers hold valid access credentials, they have access – and with access they can monetise any number of threats against the business. Newman suggests better password practices, of course, but warns that these won’t remove the problem completely. “Eliminating potential attack vectors through passwordless security and removing passwords from the hands of users, where they are still required, is a great way to combat this risk,” he added.

I will leave the last word to Ben Jones, CEO of Searchlight Security, who says the report has rightly put emphasis on four key paths that threat actors use to get into networks: credentials, phishing, exploiting vulnerabilities and botnets.

“Defending against these has become especially important as cybercrime has professionalised,” he said, “with cybercriminals selling these access points online for others to exploit.” Indeed, I’ve written plenty about such initial access brokers (IABs), and you can search for my online content about this at my Authory article archive (authory.com/DaveyWinder).

“One way organisations can look to combat the cybercriminals that are selling access to their systems and facilitating attacks is to find them where they operate: on the deep and dark web,” Jones advised. “By monitoring marketplaces and forums for company credentials and vulnerabilities, or those of organisations in their supply chain, businesses can identify when and where they are at risk of attack.”
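The mechanics of the monitoring Jones describes are mundane once you have a feed to watch. Here’s a hedged sketch – the feed file and its email:password format are hypothetical stand-ins for whatever a commercial dark web monitoring service actually supplies:

    # Scan a (hypothetical) credential-dump feed for domains you care about,
    # including your supply chain's. Both domains below are made-up examples.
    WATCHED_DOMAINS = {"example.co.uk", "supplier-example.com"}

    def leaked_accounts(feed_path: str):
        """Yield any watched-domain email found in an email:password dump."""
        with open(feed_path, encoding="utf-8", errors="ignore") as feed:
            for line in feed:
                email, _, _ = line.strip().partition(":")
                domain = email.rpartition("@")[2].lower()
                if domain in WATCHED_DOMAINS:
                    yield email  # trigger a forced reset and an investigation

    for account in leaked_accounts("dump.txt"):
        print("exposed credential:", account)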

Certainly, it’s hard to argue with Jones when he says identifying early signs of risk to your business is “more effective at stopping attacks like a ransomware attack than waiting for when criminals have already gained or bought access to your systems”.

davey@happygeek.com

STEVE CASSIDY

“Quite a lot of work went into fibre fit to be buried, tortured, boiled, frozen and stretched”

When it comes to implementing fibre broadband in a town or city, the sky’s the limit – and digging up trenches every five years is the absolute pits

At first, the range of topics on offer for this particular interview seemed downbeat. A leading engineer in a Chinese telecoms firm, itself suffering through some challenging PR issues, would talk about life in the infrastructure sector since the 1950s, and about some possible futures in infrastructure.

I was only too pleased to set up this call, because Michael’s CV showed that he had started work in the telecoms sector, but not for a Chinese firm: Michael joined the Post Office, that being the main player in telephones in those far-off days. In this respect, Michael was similar to my late uncle, without whose early influence I would almost certainly not have had a chance of penetrating the technology business at all.

So I was in a well-behaved frame of mind for this chat, which almost immediately headed in a direction I wasn’t at all expecting. I was going to ask about the job market in fibre-optic telecoms, assuming that his road from the Post Office to global telecoms names based in China was the product of a mobile, rewards-driven job market: no, not a bit of it, he said. He’d stayed still, while the ownership of the company employing him moved from the public sector to the private sector and then to an overseas-domiciled owner.

The business he has just retired from is Huawei; the short version of the ownership history is that the group came out of Post Office ownership into a kind of stewardship scheme in which the owner was the East of England Development Agency. It was an exceptionally strange arrangement, even for the late 20th century, because the product lifecycle and development of a new optical fibre standard isn’t something one can polish off in a single financial year.

Some method to this madness slowly emerged as we talked about those early years of the World Wide Wait and how the fibre developers envisaged their inventions would be put to use. Some very far-sighted fellow realised that the kind of fibre-to-the-house technology we see these days would need lots of work on the presentation of fibres common at the time – and it’s not all about speed. Quite a lot of work went into fibre fit to be buried, tortured, strung between chimneypots (and that comes from one of my fibre deployment projects!), boiled, frozen, stretched and so on.

Work on more traditional nerdy concerns, such as maximum transmission distance, overall link speed and fault-tolerance was driven

Arthur’s legends

Interested in the history of ocean-crossing communication cables? Seek out Arthur C Clarke’s How the World Was One: Beyond the Global Village, an intimate memoir of great adventures by great technologists going back several centuries. I found my copy second-hand on Amazon many years ago, faintly stamped “Provo public library”. Sorry, Provo.

Steve is a consultant who specialises in networks, cloud, HR and upsetting the corporate apple cart @stardotpro

BELOW Huawei played a big part in revamping old Post Office technology

by the marine cable sector: the assumption was that such things would matter only to gargantuan telecoms companies, whose planning and expectations ran to multi-year plans and budgets. As such, why not have the fibre R&D group run by a quango, at least financially?

The answer to that question emerged when, some time in the late 1990s, Huawei turned up in East Anglia with a clear appetite for buying and revamping many rather forlorn and low-key parts of the former Post Office technological empire. Things such as a motive for profit and a sense of urgent need on the part of undeveloped regions of the globe re-emerged as vital parts of the business.

A steady process of Chinese researchers and staff coming to Ipswich on secondment became a regular feature of life. The rebranding, and the business of no longer being part of the British government, was handled softly-softly, slowly-slowly. Huawei’s interests, driven by a domestic market consisting of a quarter of all humanity, were practical and obvious rather than hidden and Machiavellian.

This gave me a confidence booster on a matter that’s been bugging me for some time: the whole noisy business of which countries “ban Huawei”, with the usual mainstream paranoid nonsense about how the firm could monitor lumps of traffic traversing its equipment or networks. I realise that this is a fixed and formalised piece of reporting, a temptingly easy and complete statement that any general purpose journalist could hand in without much chance of a negative reaction or poor readership stats, but it always seems to me to sit in isolation, with no understanding of the history or the times through which the internet’s spread and accessibility have evolved.

When the politicians and media are telling us about a remote and unfriendly Chinese business being excluded from a contract, we certainly don’t think that this incorporates decades of R&D done in the West, or indeed workers and inventors whose professional history dates back to before the introduction of colour television.

Michael’s final comment kept me busy for quite a while: he said that the development that most interested him right now is that of hollow optical fibres. He used phrases such as “thousands of times faster”, which always get my attention. Thousands of times faster than where we are now? We already have 10Gbit network fibre in pretty widespread use, with a few brands and cities making rash statements about 10Gbit from subscriber to internet backbone.

In a smoke-and-mirrors predictive marketplace like this, it becomes vital to retain a sense of the wider system you’re connecting to. Your town might be like Aberdeen, with a good 30-mile gap to the next big city and a very rich and thriving local economy, which attracts a pure fibre access provider whose main slogan is how fast it is – at least within the parts of the network it can control. The 30-plus miles of fibre you can see dangling from the telegraph poles alongside the road to Aberdeen are, of course, entirely mark 1 hardware, its replacement cost being far too high (until a very much faster connection architecture is invented).

Fiddling about with speed test sites in pursuit of some customer service upgrade isn’t the best route to massive leaps in access performance. For that, you need a step change in the entire hardware kit list, the sort of thing that comes along only once in a couple of decades. I have a funny feeling that Michael’s prophecy about hollow optical fibre is one of those steps.

Opinions differ

However, as the pace of industry announcements about fibre and customer premises increases with the end of quarantine, I have to say that there are other voices on the whole matter of ultimate fibre speeds, and what you have to go through to get them – some of them unexpectedly close to home.

May I introduce you to Mr Lee Grant? One of my irregular press contacts sent me the usual massively upbeat promotional guff about how amazed residents of a certain northern city were by their new all-fibre infrastructure. A moment’s confusion and I realised: this northern city was home to PC Pro’s repair expert. What a great opportunity for a close-up look at how these rollouts look from the trench dug in the pavement, I thought, so I just threw the news release straight up to Lee for his comment and, maybe, a bit of street pictorial to go with the gritty tales of dropping fibre in the hole.

I’m afraid I can’t pass on his comments; most of them are not reproducible in polite company. While the cable contractor was happy enough with its work, the locals definitely were not. The rules for cable laying, and for access to housing and premises built during the Industrial Revolution, involve making a surprisingly large hole in the ground, then mostly putting the dirt back and finishing with pretty much any old top cover the contractors happen to have hanging around. Complete like-for-like restitution of the built environment isn’t in the contract, even though quite a lot of the motivation behind the “dig it up” solution is to keep unsightly wires out of the line of sight of easily offended residents.

It seems to me that there are a lot of potential fixes for this level of disgruntlement, but first we must acknowledge something much more fundamental: gigabit fibre is looking distinctly middle-of-the-range at this point. It wasn’t that long ago that 10Mbits/sec or 100Mbits/sec kit was being stuck in the same unwelcome trenches all across the country, at least in those places both densely populated and rich enough to warrant direct fibre links.

ABOVE Coming soon to a road near you. Again

“While the cable contractor was happy enough with its work, the locals definitely were not”

BELOW Cable layers have no obligation to repair surfaces on a like-for-like basis

It looks like 10Gbits/sec is an appealing upgrade, if only from a marketing perspective – but here’s the point from my conversation with the Post Office veteran: the next fibre speed jump is “thousands of times” faster, and is at least in sight. How many times will Lee’s driveway get dug up between now and the arrival of some real 2020s-spec connections? Couldn’t the regulators stop thinking about telephone wires and burglar alarm auto-diallers, and match the rules on wayleaves and access to the reality of the tech? Then, think about a distribution system that has just one terabit feed to each row of terraced houses, breaking it down into anything up to a hundred 1Gbit/sec short feeds that run not down the public pavement but along the roofline, requiring no burying or dodgy restorative work afterwards?

This, I admit, is blue-sky thinking. For one thing, I realise that fibre out in the elements is a stretch for most companies and homeowners, and furthermore that having your fibre hung on the façade of your neighbour’s house might require the occasional replacement thanks to house fires, DIY misadventures and so on. But here’s the thing: it’s right there where you can see it, not buried down under the corpses of your finest rhubarb.

This makes replacement startlingly easy, even if it might be a long time coming: my fibres ringing Gray’s Inn are still visible today, some 15 years after the project that put them in. Although, to be fair, I don’t know if they’re still carrying any data. The main thing is that they have outlasted their heyday. Nobody had to do a thing with them. We put those wires in the roof because the normal route through basements was ruled out, because one basement on the route had been the workplace of Charles Dickens and wasn’t going to be dug up or defaced by any uppity nerds.

Making Lee’s neighbours happier is a fraught undertaking, because there’s a double-whammy lurking here. Fibre is thought of as a long-term investment: people’s imaginations have been fed innumerable images of cable-laying ships and great groaning steel machines, none of them especially cheap. Yet the developing track record here is that a fibre standard gets superseded in about half a decade, and is obsolete about another decade or so later.

This timescale is enough to thoroughly confuse regulators and lawmakers. They want to see the fibre being provided using legal rules that quite literally have not been updated since the time of the horse and cart. Lots of the angst generated by the Huddersfield rollout was caused by sticking to rules that were written before man had invented the aeroplane.

Clearly this is a situation that needs an update from the regulators. But not the internet regulators – the building regulators. Running a 5mm-diameter jacketed fibre along the guttering doesn’t need the same heavy defensive penalty regulations as when the cable being regulated is carrying mains power. Getting mad at hard-working fibre installation blokes and their drills and augers is missing the point: they’re just doing what the rules require them to do.

From Russia, with love

Everybody has their favourite collection of utilities. Not all jobs require much tweaking and fiddling with a well-configured PC these days, but it pays to recall that there are some special cases: virtualisation is one, even on a simple setup like a laptop, just for carrying the ghosts of all your previous versions. You want a spread of information that tells you everything about your machine and its storage of your VMs.

Which is why I like Parallels Toolbox. This is a sizeable collection of little apps to do stuff, a few of which help with virtual machine management. There’s only one small problem: I was first introduced to Parallels as a company over 15 years ago, when it was somewhat closer to Acronis than it is now.

It’s a funny relationship. The two firms have a revolving door for executives and even the occasional CEO, despite having diverged quite a lot over the past few years. I was told all about the skyscraper they have in Novosibirsk, with 400 developers in there doubtless grateful for the warmth. Novosibirsk, you see, is in Siberia.

What does that actually mean, though, in the context of a decent utility package being offered for a few quid? Clicking through as if I were a buyer, I’m presented with a UK-specific shopping cart/checkout page. Most people wouldn’t realise that this is a Russian business – it doesn’t have a Russian name like Kaspersky, and the whole purchase transaction is denominated in pounds – but, in theory at least, buying from this site is currently breaching the sanctions against Russia.

That’s a nitpicker’s summation, I know. Nobody has thought for a moment about the implementation of a shopping cart on behalf of a corporation that pays tax in a country you are either at war with, or about to be. There’s no international agreement on where money must “rest” in order to avoid seizure, nor yet is there a clear set of tests you can apply to reassure

yourself that you (and they) are in the clear. For me, the real trouble isn’t with whoever is at the far end of my web purchase. The most bother comes from the grand conspiracy theorists looking over my shoulders back here at home. If you buy that, they say, you’re leaving an open door to your entire hard disk, ready for the Russian Security Service to come and look over everything you’ve got. There’s a war on, they say, you have to be careful!

This whole outlook is a pastiche of the Churchillian era, when most government pronouncements on information security appeared on the sides of buses as posters. A bit like a more colourful Twitter, for all you millennials out there. These days we have thousands of pages of advice on keeping information from prying eyes, but most of that was written well before hostilities and the whole sanctions issue even arose.

The problem we have now in 2022, with the whole concept of intrusive behaviour by aggressor nation states, is nothing new and certainly nothing special to a state of wartime. What you were doing before the tanks squeaked over the border – in terms of regular scans for newly infected files, or unexpected bursts of 100% CPU usage when you didn’t ask anything of the machines, or peculiar network traffic – hasn’t magically stopped working since the war turned hot. Nor, to be even more sensible about it, has anything changed in the files downloaded from e-commerce websites or download libraries. No sudden bloating of the Zip file, or change in checksums, has been found as an authoritative bit of evidence for a “cyberwar campaign” getting ready to roll. Quite a lot of the hype is just that – and that’s why I’m mentioning Parallels, whose employees, in all the years I’ve been talking to them, have been polite, delightful and intelligent.

ABOVE Parallels is great, but are you violating sanctions if you buy it?

BELOW Laying cables in the sea is an expensive business
