F5 Networks TMOS Administration Study Guide



Disclaimers

This book is in no way affiliated with, associated with, authorized or endorsed by F5 Networks, Inc. or any of its subsidiaries or affiliates. The official F5 Networks web site is available at www.f5.com. F5, Traffix, Signaling Delivery Controller, and SDC are trademarks or service marks of F5 Networks, Inc., in the U.S. and other countries. A full list of F5 Networks' marks can be found at https://f5.com/about-us/policies/trademarks. Trademarks are used with permission of F5 Networks, Inc. This book refers to various F5 marks. The use in this book of F5 trademarks and images is strictly for editorial purposes, and no commercial claim to their use, or suggestion of sponsorship or endorsement, is made by the authors or publisher.

Permission Notice

The F5 Certified logo used on the front cover of this book is a registered trademark of, and is copyright, F5 Networks, Inc. F5 Networks, Inc. has granted this book's authors permission to use the logo in this manner.

www.f5books.eu

Copyright © 2018 by F5 Books - Philip Jönsson & Steven Iveson. All rights reserved. This book or any portion thereof may not be reproduced or used in any manner whatsoever without the express written permission of the authors, except for the use of brief quotations in a book review or scholarly journal.

First Printing: 2018
ISBN: ISSU_v2
Revision: 2018.v2



TABLE OF CONTENTS

Preface
    About the Authors
    Dedications
    Acknowledgements
    Feedback

1. Introduction
    Who is This Book for?
    How This Book is Organised
    F5 Networks the Company
    F5 Terminology
    What is BIG-IP?
    BIG-IP Hardware
    BIG-IP Software – TMOS
    TMOS Components in Detail
    TMOS Planes
    BIG-IP Hardware Platforms
    Appliances
    VIPRION
    Herculon
    BIG-IP Virtual Edition (VE)
    The Different F5 Modules, Products & Services
    Overview
    Access Policy Manager (APM) Module
    Advanced Firewall Manager (AFM) Module
    Application Acceleration Manager (AAM) Core Module
    Application Acceleration Manager (AAM) Full Module
    Application Security Manager (ASM) Module
    Application Visibility and Reporting (AVR)
    BIG-IQ Centralised Management Product
    BIG-IQ Cloud & Orchestration Product
    Carrier Grade NAT (CGNAT) Module
    Edge Gateway Product
    Enterprise Manager (EM) Product
    DNS (formerly Global Traffic Manager (GTM)) Module
    IP Intelligence Service
    Link Controller Product (& Module)
    MobileSafe Product & Service
    Policy Enforcement Manager (PEM) Module
    Secure Web Gateway (SWG) Module & Websense Cloud-based Service
    Silverline Cloud-based Service
    WebSafe Service & Module
    DDoS Hybrid Defender (Herculon)
    SSL Orchestrator (Herculon)
    Free and/or Open Source Products
    Bigsuds
    iControl REST Software Development Kit (F5-SDK)
    Ansible
    Containers
    OpenStack
    Cloud - AWS
    Cloud - Azure
    Cloud - GCP
    The Full Application Proxy
    The Packet Based FastL4 Proxy
    OneConnect

2. The TMOS Administrator Exam
    The F5 Professional Certification Program
    Why Become Certified?
    Choosing a Certification
    Getting Started
    Taking Exams
    Additional Resources
    Practice Exams
    Additional Study Material
    AskF5
    DevCentral
    F5 University
    Exam Blueprints
    BIG-IP LTM Virtual Edition (VE) Trial
    BIG-IP VE Lab Edition
    BIG-IP VE on Amazon Web Services (AWS)
    Other Clouds

3. Building Your Own Lab Environment
    Obtaining the Different Components to Build Your Lab
    VMware Workstation Player™
    BIG-IP VE Trial Evaluation Key
    Downloading the BIG-IP VE Machine
    BIG-IP VE Lab Edition
    The Lab Architecture
    Lab Exercises: Setting up Your Lab Environment

4. Introduction to LTM - Initial Access and Installation
    The BIG-IP LTM Module
    Initial Setup
    Configuring the Management Port IP Address
    Configuration via the LCD Panel
    Configuring the Management IP address Using the Touch LCD Panel (iSeries platforms)
    Configuration Using the Config Command
    Configuration Using TMSH
    Configuration Using the WebGUI
    Licensing the BIG-IP System
    Automatic License Activation
    Manual License Activation
    Provisioning
    The Setup Utility
    Self-IP Addresses
    Lab Exercises: Initial Access and Installation
    Chapter Summary
    Chapter Review
    Chapter Review: Answers

5. Local Traffic Objects
    Nodes
    Pool Members
    Pools
    Virtual Servers
    Wildcard Virtual Servers
    Local Traffic Objects Dependencies
    The Different Types of Virtual Servers
    Standard Virtual Server
    Connection Setup with a Standard Virtual Server Using Only a Layer 4 Profile
    Connection Setup with a Standard Virtual Server Using a Layer 7 Profile
    Performance Layer 4 Virtual Server
    Connection Setup with a Performance Layer 4 Virtual Server
    Performance HTTP Virtual Server
    The Fast HTTP Profile
    Connection Setup with a Performance HTTP Virtual Server
    Performance HTTP Virtual Server With an Existing Idle Server-Side Connection
    Forwarding IP Virtual Server
    Connection Setup with a Forwarding IP Virtual Server
    Forwarding Layer 2 Virtual Server
    Connection Setup with a Forwarding Layer 2 Virtual Server
    Reject Virtual Server
    DHCP Relay Virtual Server
    Stateless Virtual Server
    Internal Virtual Server
    Message Routing Virtual Server
    Chapter Summary
    Chapter Review
    Chapter Review: Answers

6. Load Balancing Methods
    Member vs. Node
    Static Load-Balancing
    Round Robin
    Ratio
    Dynamic Load-Balancing
    Least Connections
    Fastest
    Least Sessions
    Ratio Sessions
    Ratio Least Connections
    Weighted Least Connections
    Observed
    Predictive
    Dynamic Ratio
    Priority Group Activation
    FallBack Host
    Lab Exercises: Load Balancing
    Chapter Summary
    Chapter Review
    Chapter Review: Answers

7. Monitors
    Overview
    Health Monitors
    Performance Monitors
    Intervals & Timeouts
    Temporarily Failed Monitors
    Where Can You Apply Health Monitors?
    Monitoring Methods
    Simple Monitoring
    Active Monitoring
    Passive Monitoring
    Benefits and Drawbacks With Passive and Active Monitoring
    Active Monitoring
    Passive Monitoring
    Types of Monitors
    Address Check Monitor
    Application Check Monitors
    Content Check Monitors
    Performance Check Monitors
    Path Check Monitors
    Service Check Monitors
    Monitors - Advanced Options
    Slow Ramp Time
    Multiple Monitors & the Availability Requirement
    Manual Resume
    Monitor Reverse Option
    Monitor Instances
    Administrative Partitions
    Firewalls
    Testing
    Monitors - Logging
    Enable Monitor Logging on Node Level
    Enable Monitor Logging on Pool Member Level
    Enabling Monitor Logging for SNMP DCA/DCA Base
    Disabling Monitor Logging for SNMP DCA/DCA Base
    Object Status
    The Different Object Status Icons
    Object State
    Understanding Object Status Hierarchy
    When Will the BIG-IP System Send Traffic to a Node/Pool Member?
    Local Traffic Summary
    Local Traffic Network Map
    Filtering Results
    Verifying Object Status
    Using the CLI (tmsh) to Verify Object Status
    Monitor Status Logging
    Enabling Monitor Status Logging
    Disabling Monitor Status Logging
    Monitor Status Changes in the BIG-IP LTM Log
    Lab Exercises: Monitors
    Chapter Summary
    Chapter Review
    Chapter Review: Answers

8. Profiles
    Why Use Them?
    Profile Types
    Protocol Profiles
    Persistence Profiles
    SSL Profiles
    Application (Services) Profiles
    Remote Server Authentication Profiles
    Analytics Profile
    Other Profiles
    Profile Dependencies
    Default and Custom Profiles
    Creating a Custom Profile
    Deleting a Custom Profile
    Assigning Profiles to a Virtual Server
    Lab Exercises: Profiles
    Chapter Summary
    Chapter Review
    Chapter Review: Answers

9. Persistence
    Concept of Stateless and Stateful Applications
    Sessions
    Stateful Communication With Load Balancing
    What is Persistence?
    Persistence Methods
    Source Address (aka Simple) Persistence
    Cookie Persistence
    Destination Address Persistence
    Hash Persistence
    Universal Persistence
    Other Persistence Profiles
    Single Node Persistence
    Configuration Verification
    Primary & Fallback Methods
    Match Across
    Match Across Services
    Match Across Virtual Servers
    Match Across Pools
    Persistence Mirroring
    Lab Exercises: Persistence
    Chapter Summary
    Chapter Review
    Chapter Review: Answers

10. SSL Traffic
    Terminology of SSL
    Certificate Authority (CA)
    Certificate Signing Request (CSR)
    Personal Information Exchange Syntax #12 (PKCS#12)
    Managing SSL Certificates for the BIG-IP System Using the WebGUI
    Procedures
    Creating a Self-Signed SSL Certificate
    Creating a Certificate using a CSR
    Importing an SSL Certificate
    Importing an SSL Private Key
    Importing a PKCS#12 File
    Renewing a SSL Certificate Using a CSR
    SSL/TLS Offloading
    The Client SSL Profile
    Creating a Custom Client SSL Profile
    SSL Bridging
    Creating a Custom Server SSL Profile
    SSL Passthrough
    Certificate Authorities
    Intermediate CAs and the Certificate Chain
    Importing Certificates & Constructing the Certificate Chain in the BIG-IP System
    Importing the CA Certificates
    Creating the Client SSL Profile With a Certificate Chain
    Lab Exercises: SSL Traffic
    Chapter Summary
    Chapter Review
    Chapter Review: Answers

11. NAT and SNAT
    Network Address Translation – NAT
    Traffic Flow When Using a Virtual Server on Inbound Connections
    Traffic Flow When Using NAT on Inbound Connections
    Traffic Flow When Using NAT on Outbound Connections
    Disadvantages of Using NAT
    NAT Traffic Statistics
    Source Network Address Translation – SNAT
    Why We Need SNAT
    Typical Uses of SNAT
    Pool Member's Default Gateway is Not the BIG-IP system
    Both Client and Pool Member Reside on the Same Network
    Internal Nodes in a Private Subnet Need to Share One External IP Address
    How to Configure SNATs
    SNAT Listener
    SNAT Translation List
    SNAT With a Virtual Server
    SNAT Pool
    SNAT Auto Map
    How to Enable SNAT Auto Map on a Virtual Server
    Potential Issues for Server Applications When SNAT Translation is Used
    Port Exhaustion
    How to Change the Source Port Preservation for Virtual Servers
    Socket Pairs
    Port Exhaustion on a Virtual Server
    Monitoring Port Exhaustion
    Lab Exercises: NAT and SNAT
    Chapter Summary
    Chapter Review
    Chapter Review: Answers

12. High Availability
    Configuring a Sync-Failover Pair
    Device Trust
    The Different Types of Trust Authorities
    The Importance of the BIG-IP Device Certificates
    Device Identity
    The Device Discovery Process in a Local Trust Domain
    Important When Configuring a Device Trust
    Adding a Device to a Local Trust Domain
    Resetting the Device Trust
    Device Groups
    Sync-Only Device Group
    Sync-Failover Device Group
    Administrative Folders
    Floating Self-IP Addresses
    MAC Masquerading
    Synchronising the Configuration
    The CMI Communication Channel in Detail
    ConfigSync Operation in Detail
    Determine the State of a System
    Force to Standby Mode
    WebGUI – Method 1
    WebGUI – Method 2
    WebGUI – Method 3
    CLI - tmsh
    Traffic Groups
    The Default Traffic Groups on a BIG-IP System
    Traffic Group Failover Methods
    Load Aware Failover
    How to Specify the HA Capacity
    How to Specify the HA Load Factor
    Calculation Example
    HA Order
    HA Groups
    Auto-Failback
    Auto-Failback Feature is Not Compatible With HA Group
    Force to Standby Feature is Not Compatible with HA Group
    Active-Active Redundancy
    Failover Options
    HA Table
    VLAN Failsafe
    Using the High-Availability Screen
    Using the VLANs Screen
    Gateway Failsafe
    Failover Detection
    Device Group Communication
    Hardware Failover
    Network Failover
    Network Communication
    Stateful Failover
    Connection Mirroring
    Persistence Mirroring
    SNAT Mirroring
    Considerations Regarding Stateful Failover
    How to Configure Stateful Failover
    Specifying an IP Address for Connection Mirroring
    Enabling Connection Mirroring on a Virtual Server
    Enabling Connection Mirroring for SNAT Connections
    Enabling Mirroring of Persistence Records
    Lab Exercises: High Availability
    Chapter Summary
    Chapter Review
    Chapter Review: Answers

13. The Traffic Management Shell (tmsh)
    Accessing the Traffic Management Shell (tmsh)
    Understanding the Hierarchical Structure of tmsh
    The tmsh Prompt
    Navigating the tmsh Hierarchy
    Command Completion Feature
    Perform Wildcard Searches in tmsh
    Context-Sensitive Help
    Manual Pages
    Command History Feature
    The tmsh Keyboard Map Feature
    Managing BIG-IP Configuration State and Files
    Introduction to BIG-IP Configuration Files and Structure
    Text Configuration Files
    Binary Configuration Files
    Loading and Saving the System Configuration
    Administrative Partitions
    How Do Administrative Partitions Work?
    Referencing Object in Different Partitions
    Limitations With Administrative Partitions
    Navigating Between Partition
    How to Create Administrative Partitions
    Effect of Load/Save on Administrative Partitions
    User Roles
    Creating Local User Accounts
    Modifying the Properties of a Local User Account
    Shutting Down and Restarting the BIG-IP System
    Using Advanced Shell (bash)
    Viewing the BIG-IP Connection Table in tmsh
    About the Connection Table
    Connection Reaping
    Viewing the Connection Table
    Filtering Using awk and grep
    Additional Help
    Tmsh on DevCentral
    Lab Exercises: tmsh
    Chapter Summary
    Chapter Review
    Chapter Review: Answers

14. File Transfer
    Linux Client - Sending Files - SCP
    Linux Client - Retrieving Files - SCP
    Common SCP Errors
    Linux Client - Connecting - SFTP
    Linux Client - Sending Files - SFTP
    Linux Client - Retrieving Files - SFTP
    Key Based Authentication
    Windows Clients

15. Selected Topics
    Always On Management (AOM)
    Accessing AOM Through the Serial Console
    Accessing AOM Through the HMS Via SSH
    Directly Connecting to the AOM Via SSH
    The Command Menu
    iRules
    When Should You Use an iRule?
    When Should You Not Use an iRule?
    iRule Components
    Event Declarations
    Operators
    Rule Commands
    iRule Events
    HTTP Events
    Data Group Lists
    What Are the Benefits of a Data Group?
    How Do I Use Data Group Lists?
    Creating Your iRule
    The iRule Editor
    Learn more
    iRule Wiki
    CodeShare
    Additional Literature
    iApps
    iApps Framework
    Templates
    Application Services
    Strict Updates
    Disabling Strict Updates
    What is a Route Domain?
    Benefits of Using Route Domains
    Route Domain IDs
    Parent ID
    About VLANs and Tunnels for a Route Domain
    About Default Route Domains for Administrative Partitions
    Creating a Route Domain
    Lab Exercises: iRules
    Chapter Summary
    Chapter Review
    Chapter Review: Answers

16. Troubleshooting Hardware
    Introduction
    End User Diagnostics (EUD)
    Obtaining the Latest EUD Software
    Installing EUD on the BIG-IP Device
    Creating an EUD Bootable CD-ROM
    Creating an EUD Bootable USB Storage Device
    Launching EUD
    Running Tests
    Viewing Output
    LCD Warning Messages
    LED Indicators
    The Power LED Indicator
    The Status LED Indicator
    The Activity LED Indicator
    The Alarm LED Indicator
    Modifying alert.conf
    Backing up the Original alert.conf
    Clearing Alerts
    Clearing the LCD Warnings and Alarm LED Remotely (Using the CLI)
    Clearing the LCD Panel
    Clearing the Alarm LED
    Log Files
    Priorities
    Facilities
    Perform a Failover
    Consequences of Performing a Failover
    How to Perform a Failover
    WebGUI
    CLI - tmsh
    Troubleshooting System Interfaces
    The Network Components Hierarchy
    The System Interfaces
    Link Layer Discovery Protocol (LLDP)
    The Interface Properties
    The Interface Naming Convention
    Viewing Interface Information
    Interface State
    Flow Control
    VLANs
    Assigning Interfaces to VLANs
    Port-based Access Method
    Tag-based Access Method
    Creating and Managing VLANs
    VLAN Groups
    Transparency Mode
    Bridge All Traffic
    Bridge in Standby
    Creating a VLAN Group
    Associating a VLAN/VLAN Group With a Self-IP address
    Creating a Self-IP address
    Trunks
    How Trunks Work
    Link Aggregation Control Protocol (LACP)
    Creating a Trunk
    Troubleshooting Network Issues
    Network Statistics
    Troubleshooting Packet Drops
    Troubleshooting Interface Packet Drops
    Troubleshooting TMM Packet Drops
    Known Issues
    Chapter Summary
    Chapter Review
    Chapter Review: Answers

17. Troubleshooting Device Management Connectivity
    Get to Know Your Environment
    Verify the Configuration
    Tools Available for Troubleshooting
    Ping
    Traceroute
    Telnet
    cURL
    Verifying the Processes on the BIG-IP device
    Verifying That the sshd Process is Running Using the WebGUI
    Verifying That the Web Processes is Running Using SSH
    Port Lockdown
    Port Lockdown Exceptions
    Configuring Port Lockdown
    Restricting Access to the Management Port
    Packet Filters
    Exemptions
    Creating Packet Filter Rules
    Reordering Packet Filter Rules
    Logging of Packet Filter Rules
    Troubleshooting DNS Settings
    Verify the DNS Configuration
    Tools Available for Troubleshooting DNS
    nslookup
    Common Error Messages
    dig
    Changing Resource Types
    Limiting the Output
    Perform Reverse Lookups
    Query Another DNS Server
    Performing Multiple Lookups
    dig Parameters
    Remote Authentication Introduction
    The LDAP Authentication Module
    The RADIUS Authentication Module
    The TACACS+ Authentication Module
    The SSL Client Certificate LDAP Authentication Module
    The SSL OCSP Authentication Module
    The CRLDP Authentication Module
    The Kerberos Delegation Authentication Module
    The Network Time Protocol (NTP)
    Configuring an NTP Server
    Troubleshooting NTP
    Verifying the NTP daemon service
    Verifying the Communication Between the BIG-IP System and the NTP Peer Server
    Verifying the Network Connectivity to the NTP Peer Server
    Chapter Summary
    Chapter Review
    Chapter Review: Answers

18. Troubleshooting and Managing Local Traffic
    Traffic Processing Order
    Control Plane Functions
    Packet Processing Order
    Listener Processing Order
    Managing & Troubleshooting Virtual Servers & Pools
    Managing Virtual Servers
    What Protocols Does the Application Use?
    On What VLAN Will the Client Access the Application?
    How Should the BIG-IP System Handle SSL Connections?
    SSL Cipher Suites
    SSL Cipher Mismatch
    Managing Pool Members
    Monitoring
    Troubleshooting Virtual Servers
    DNS record
    Is the Traffic Reaching the BIG-IP System?
    Check the Status of the Virtual Server
    What Error Are You Getting When Accessing the Virtual Server?
    Troubleshooting Pool Members
    Impact When Modifying the Configuration
    Changes Not Taking Effect Immediately
    Taking a Pool Member/Node Offline
    Disabled
    Forced Offline
    Deleting Existing Connections to a Pool Member
    Deleting Existing Connections to a Node
    RST Logging
    Persistence Issues
    OneConnect
    Pool Member Failure
    Troubleshooting Persistence Issues
    Chapter Summary
    Chapter Review
    Chapter Review: Answers

19. Troubleshooting Performance
    Packet Captures
    Why Should We Capture Packets?
    When Should We Capture Packets?
    Where Should We Capture?
    What Are We Looking For?
    Expected TCP/IP Behaviours
    Using tcpdump
    Limitations
    Usage Syntax
    Specifying an Interface
    Capturing Additional TMM Information
    Default Output
    Writing to a File
    Restricting the Number of Packets Captured
    Quick Mode
    Verbose Mode
    Capturing Link Level (Layer 2 – Data Link) Headers
    Capturing Packet Contents – Format
    Capturing Packet Contents – How Much?
    Disabling DNS Lookups
    Also Disabling Service Name Lookups
    Reading from a File
    tcpdump Expressions
    Logical Operators
    Grouping
    Single Host
    Multiple Hosts
    Single Network
    Multiple Networks
    Specific Protocol Port(s) & Direction
    Address Resolution Protocol (ARP)
    ICMP
    Refining That First Example Further
    A Common Example
    tcpdump Output
    Generic TCP
    Generic UDP
    Notes on the Protocol Field
    Notes on Service Ports
    Protocol Formatting
    Fragmented Packets
    Using Wireshark
    Opening Capture Files
    Getting Around
    The F5 Wireshark Plugin
    Decodes & Non-standard Ports
    Display Filters
    Red Herrings
    Further Reading
    Other BIG-IP Tools
    Monitors
    The Performance Dashboard
    Performance Statistics in the GUI
    Performance Statistics at the CLI
    AVR
    iHealth
    SNMP
    Chapter Summary
    Chapter Review
    Chapter Review: Answers

20. Opening a Support Case with F5
    Information Required When Opening a Support Case With F5
    Full Description of the Issue
    Severity Levels
    QKview
    Generating a QKview file
    Generating a QKview on a High Load BIG-IP System
    iHealth
    Log Files
    Packet Traces (tcpdump)
    SSL Dump
    UCS Archives
    Core Files
    Assembling an Accurate Problem Description
    Quantitative Vs. Qualitative Observations
    Relevant Vs. Irrelevant Information
    How to Open a Support Case with F5 Support
    Escalation Methods
    Chapter Summary
    Chapter Review
    Chapter Review: Answers

21. Identify and Report Current Device Status
    The Dashboard
    Interpreting Log Files
    Health Monitor Failure
    High Availability Communication Failure
    VLAN Failsafe
    Configuration Sync
    TMM Core Dump
    Analytics
    Analytics Profiles
    How to Configure Analytics to Collect Data
    Reviewing and Examining the Application Statistics
    Investigating Server Latency
    Investigating Page Load Times
    Capturing Traffic using Analytics
    Reviewing Captured Traffic
    Chapter Summary
    Chapter Review
    Chapter Review: Answers

22. Device Maintenance
    Archive Files
    The Single Config File (SCF)
    Example of Data Contained in a SCF file
    The User Configuration Set (UCS) Archive
    Generating a UCS Archive - WebGUI
    Loading a UCS Archive – WebGUI
    Generating a UCS Archive – tmsh
    Loading a UCS Archive – tmsh
    Customising What Files Are Included in the UCS Archive
    The Differences Between UCS and SCF
    Restoring a BIG-IP System From a UCS Archive
    Licensing Considerations When Restoring From a UCS Archive
    Other Considerations When Restoring From a UCS Archive
    Preventing Synchronisation When Installing a UCS Archive on a BIG-IP DNS (GTM) system
    Delayed Load on BIG-IP ASM Module
    vCMP Considerations When Restoring From a UCS Archive
    Preventing Service Interruptions When Replacing a BIG-IP System in a Redundant Pair
    Managing Software Images and Upgrades
    Legacy Version Numbering Schema
    Major Software Versions
    Minor Software Versions
    Maintenance Software Versions
    Cumulative Hotfixes
    The Tick Tock Release Cycle
    Release Notes
    Overview of the Disk Management Process
    The BIG-IP Hard Disk and Boot Locations
    Software Images
    How to Install a New Software Image
    Determine the Software Image to Install
    Downloading the Software Images/Hotfixes
    How to Import the Software Images/Hotfixes to the BIG-IP system
    Checking the MD5 Checksum of an Image File
    Re-activate the License Prior to the Upgrade
    Installing the Software Image
    Installation Using the WebGUI
    Installation Using tmsh
    When Installing a Software Image
    When Installing a Hotfix
    Booting the BIG-IP System Into the New Volume
    Rolling Back to a Previous Version
    Handling the Configuration Between Volumes
    Best Practices When Upgrading a BIG-IP System in a HA-pair
    Potential Problems When Upgrading Your BIG-IP system
    Enterprise Manager (EM)
    Performing Basic Device Management
    Adding Devices to Enterprise Manager
    The Discovery Process
    Discovering BIG-IP devices
    Discovering non-BIG-IP Devices
    Performing Basic Tasks on Managed Devices
    Verifying and Testing Device Communication
    Verifying the Enterprise Manager IP Address on a Device
    Verifying Device Connection to Enterprise Manager
    Rebooting Managed Devices
    To Reboot a Device Into a Different Boot Location
    Managing Licenses
    Starting a Device Licensing Task
    Accepting the EULA for Devices
    Configuring Task Options and Running the Task
    Collecting Information for F5 Support
    Starting a Support Information Gathering Task
    Managing UCS Archives
    Maintaining Rotating UCS Archives
    Increasing the Maximum Rotating Archives
    Changing the Default Archive Options
    Creating Rotating Archive Schedules
    Modifying Rotating UCS Archive Schedules
    Maintaining Specific Configuration Archives
    Creating a New Pinned Archive
    Pin an Already Existing Archive
    Restoring UCS Archives for Managed Devices
    Performing a UCS Restoration for a Managed Device
    Deleting UCS archives
    Comparing Multiple Versions of UCS Archives
    Creating an Archive Comparison Task
    Searching for Specific Configuration Elements
    Managing Software Images
    Reviewing Available Software Downloads
    Adding and Removing Software Images/Hotfixes on the Enterprise Manager
    Adding an Image/Hotfix to the Software Repository
    Removing an Image/Hotfix to the Software Repository
    Copying and Installing Software to Managed Devices
    Copying Software to Be Installed at a Later Date
    Installing a Software Image
    Monitoring and Alerts
    Managing the Task List
    Overview of Alerts
    Setting Alert Default Options
    Creating Alerts for Enterprise Manager
    Creating, Modifying, and Deleting Alerts for Devices
    Creating a Device Alert
    Modifying a Device Alert
    Deleting a Device Alert
    Monitoring Certificates
    Disabling Certificate Monitoring
    Enabling Certificate Monitoring
    Viewing Certificate Information
    Accessing the Certificate Screen
    The Certificate Status Flag
    Creating a Device Certificate Alert
    BIG-IQ
    The BIG-IQ Panels
    The BIG-IQ Device/System Management Panels
    The BIG-IQ Application Delivery Controller (ADC) Panel
    The BIG-IQ Web Application Security Panel
    The BIG-IQ Network Security Panel
    The BIG-IQ Access Panel
    BIG-IQ Device and System Management
    Installing Required BIG-IQ System Components – Updating the REST Framework
    Device Discovery
    License Management
    BIG-IP System Software Upgrades
    Uploading Software Images
    Performing a Managed Device Install
    Rebooting Managed Devices
    UCS File Backup and Restoration
    Creating an Instant Backup
    Creating Scheduled Backups
    Restoring a UCS File Backup
    Monitoring and Alerts
    Configuring BIG-IQ to Work With SNMP
    Configuring SNMP Agent for Sending Alerts
    Configuring SNMP Access for Version 1 and 2C
    Configuring SNMP Access for Version 3
    Configuring SNMP Traps
    SSL Certificate Monitoring
    Chapter Summary
    Chapter Review
    Chapter Review: Answers

Index


Preface

About the Authors

Philip

Philip Jönsson was born in Malmö, Sweden in 1988, where he still lives with his family. He gained an interest in technology at an early age; when he was eight years old the family got a home PC, which was the first step in his career. Since Philip had a big interest in technology, choosing his education was easy. His IT studies started at The Nordic Technical Institute (NTI), where he studied the basics of computer technology and eventually focused on networking. Later on he studied IT security at Academedia Masters.

Philip's first job in the IT business was at a home electronics company in Sweden. He worked in the IT department and was responsible for managing and troubleshooting the sales equipment in the stores and managing the IT infrastructure within the organisation. This is where Philip first encountered a BIG-IP controller. Philip eventually started working in a Technical Assistance Center (TAC) department at an IT security company. Now Philip works as a consultant focused on F5 products at one of the largest IT security companies in Europe, handling major projects and solving problems for Sweden's most well-known companies.

Steve

Steven Iveson, the last of four children of the seventies, was born in London and was never too far from a shooting, bombing or riot. He's now grateful to live in a small town in East Yorkshire in the north east of England with his wife Sam and their four children. He first encountered a BIG-IP Controller in 2004 and has been working with TMOS and LTM since 2005. Steve's iRules have been featured in four DevCentral articles and he's made over 3,000 posts on the DevCentral forums. He's been awarded F5 DevCentral MVP status four times: in 2014, 2016, 2017 and 2018. Steve has worked in the IT industry for over twenty years in a variety of roles, predominantly in data centre environments. In the last few years he's widened his skill set to embrace DevOps, Linux, Docker, automation, orchestration and more. He also blogs on subjects including Linux, programming, application delivery and careers at Packet Pushers, a community of bloggers that contribute technical, work life and opinion articles from the customer's perspective.

Dedications

Philip

I would like to dedicate this book to my wife Helena and my family for their support throughout the writing of this book. Thank you for your patience throughout the making of this book!

Steve

For Mark. You made it.



Acknowledgements

We would like to thank everyone who participated in the beta program for this book. The great feedback has helped us make this the best book possible. Special thanks to these outstanding contributors (in no particular order):

▪ Scott Campbell, Canada
▪ Hannes Rapp, Portugal
▪ Thomas Domingo Dahlmann, Denmark

Philip

First off, I would like to thank Holger Yström for promoting my first book. With his help, the first and original study guide was acknowledged by many F5 representatives and made it all the way to the corporate headquarters in Seattle. Without his help the original Study Guide would not have become this big.

A big thanks to my mentor, colleague and great friend Thomas Domingo Dahlmann, who has been an invaluable asset throughout the making of this book. Thomas has assisted with proofreading our material and providing swift and excellent feedback, solely in his spare time. Both Steven and I are forever grateful!

During the beta program for this book, I came in contact with Scott Campbell, whom I also want to thank. The work you put into the proofreading is simply astonishing, and seeing that kind of enthusiasm is truly inspiring. You have really helped us raise the quality of this book and we are truly grateful for that.

I would also like to thank my employer SecureLink for giving me the opportunity to widen my knowledge and experience of F5 products, and my department for the encouragement and support throughout the writing of this book. Thanks to the Designerz who created the cover and the design of the book, you did a great job!

Thanks to F5 for making this possible and for all the help we've had in making this book. Honourable mentions go to Kenneth Salchow, Julio Hevia Posada and James Dean. You have all been great to work with and have always provided us with great input and assistance.

Finally, I would like to thank Steven Iveson for wanting to participate in this collaboration. Your contribution to this book has truly raised its value and it has been a pleasure working with you.

Steve

We all stand on 'the shoulders of giants'. We've both put a huge amount of time and effort into this book and every sentence requires research, reading, testing and time to understand and contextualise. None of that would be possible without the incredible information and tools we now have at our disposal. The contributions of countless people and entire generations, programs, movements, ideas and even cultures have all played a part. From the Internet to Ethernet to the road network and back to the Magna Carta; this book wouldn't have been possible without them.



Thanks to the many who’ve taken the time to contribute to DevCentral (DC) to inform, educate and assist others, myself included. A special mention to Colin Walker (now with Extrahop) and these F5 staff members and DC contributors: Joe Pruitt (username: Joe) who created DevCentral, Aaron Hooley (username: hoolio) who’s made over twelve thousand posts on DC, Nitass Sutaveephamochanon (username: nitass) and Kevin Stewart. Again, thanks to Philip for making this book happen in the first place.

Feedback

If you have any comments, corrections or feedback regarding this book, feel free to send an email to feedback@f5books.eu.

Philip

You are very welcome to connect on LinkedIn. You can find my public profile at: https://www.linkedin.com/pub/philipj%C3%B6nsson/3a/680/810.

Steve

You can follow me on Twitter: @sjiveson, read my blogs at http://packetpushers.net/author/steven-iveson/ and you're welcome to connect on LinkedIn. You can also follow my work on GitHub: sjiveson and Docker Hub: itsthenetwork. You can also join this book's LinkedIn group by searching LinkedIn for: 'All Things F5'. This is an independent group that is not associated with F5.



1. Introduction

Who is This Book for?

This book is designed to provide the reader and student with everything they need to know and understand in order to pass the F5 TMOS Administration 201 exam and become an F5 Certified BIG-IP Administrator. All generic networking, application, protocol and F5-specific topics and elements found in the exam blueprint are covered in full and in detail. No prior knowledge is assumed and the book includes review summaries, over 350 diagrams, over 90 test questions and a number of lab exercises to aid understanding and assist in preparing for the exam. Even those attending official F5 training courses will find this book of benefit, as those courses only cover the F5-specific elements of the curriculum.

How This Book is Organised

Most readers should read and study this book from start to finish, front to back. As with the official F5 blueprint, things move from the simple and abstract to the more complex and detailed, and each topic builds upon the knowledge gained in earlier ones. We've ordered the book's chapters and sections to mostly reflect the order of that exam blueprint, although in a few cases where we've felt it's more appropriate we've ignored it. Each chapter starts with a brief overview of the topics that will be covered and many end with a useful review summary as well as some simple questions to test your understanding. The chapters of the book and their contents are as follows:

▪ This chapter, Chapter 1 – Introduction, provides the background on F5 Networks the company and its history, and overviews of F5 terminology, technologies, hardware and software products.

▪ Chapter 2 – The TMOS Administrator Exam describes the wider technical certification program and the exam, and offers a list of useful additional study resources.

▪ Chapter 3 – Building Your Own Lab Environment gives you everything you need in order to set up your own BIG-IP lab environment.

▪ Chapter 4 – Introduction to LTM - Initial Access and Installation introduces you to the BIG-IP system and describes how to perform an initial setup.

▪ Chapter 5 – Local Traffic Objects introduces you to the different local traffic objects such as nodes, pool members, pools and virtual servers. It also describes the different virtual server types.

▪ Chapter 6 – Load Balancing Methods covers all of the different load balancing algorithms and the concept of Member vs. Node.

▪ Chapter 7 – Monitors describes, in detail, all of the different monitors, along with the many object statuses and states.

▪ Chapter 8 – Profiles covers the profiles you can assign to virtual servers. We discuss the different profile types and also detail some of the more common ones.

▪ Chapter 9 – Persistence describes what a stateless vs. stateful application is. It also covers all existing persistence profiles and the benefits and drawbacks of each.

▪ Chapter 10 – SSL Traffic introduces you to the different SSL modes that the BIG-IP system supports, along with some SSL certificate management.

▪ Chapter 11 – NAT and SNAT discusses how the BIG-IP system handles address translation and the differences between NAT and SNAT.

▪ Chapter 12 – High Availability describes what is needed to configure your BIG-IP environment in a High Availability setup and explains, in detail, how the HA communication works.

▪ Chapter 13 – The Traffic Management Shell (tmsh) covers the BIG-IP command line interface and how it is structured.

▪ Chapter 14 – File Transfer teaches you how to transfer files to and from the BIG-IP system.

▪ Chapter 15 – Selected Topics contains assorted subjects such as iRules, AOM and iApps, describing what each is and what it can be used for.

▪ Chapter 16 – Troubleshooting Hardware covers hardware troubleshooting tools such as EUD and log files in depth and explores instigating HA failover.

▪ Chapter 17 – Troubleshooting Device Management Connectivity provides an in-depth review of areas related to remote management, covering features and subjects such as DNS, packet filtering, Port Lockdown and many more. The ping and traceroute tools are introduced.

▪ Chapter 18 – Troubleshooting and Managing Local Traffic steps through the process of identifying and resolving issues with local traffic and provides detail on the traffic processing order of operations.

▪ Chapter 19 – Troubleshooting Performance moves on to observing and determining performance related issues and using related tools such as the packet capture program tcpdump.

▪ Chapter 20 – Opening a Support Case With F5 explores how to best gather relevant information prior to raising a call, how to provide it to F5, selecting a suitable severity level and escalating cases.

▪ Chapter 21 – Identify and Report Current Device Status covers general operational monitoring through, amongst others, the network map, dashboard, log files and iApps Analytics.

▪ Chapter 22 – Device Maintenance offers information on local configuration backup and restoration, automated remote configuration archiving and dealing with TMOS software image upgrades in a HA environment. It also covers the F5 products BIG-IQ and Enterprise Manager.

The book also contains numerous notifications divided into five categories, as follows:

▪ Warning – You will see this icon and text whenever you should proceed with caution. We'll use this when an instruction might have an impact on the system. Ensure you read this notice before proceeding.

▪ Note – Used whenever additional information is provided to benefit your overall understanding of a topic.

▪ Important – When we need to provide clarity and avoid misunderstanding we'll use this icon and text.

▪ Exam Tip – This icon and text highlight information that is essential or important in order to pass the exam.

▪ Recommendation – Used to indicate a personal recommendation based on our experience managing BIG-IP over many years.

F5 Networks the Company

Created as F5 Labs in 1996* by Michael D. Almquist** (aka Mad Bomber and Squish), a technical entrepreneur and programmer, and Jeffrey S. Hussey, an investment banker, F5 released its first HTTP web server load balancing device, the BIG-IP Controller, in 1997. The company, headquartered in Seattle, Washington since its inception, has grown rapidly to date (barring a lull during the dot.com collapse between 1999 and 2001) and has expanded its product offerings significantly. They now produce a wide range of dedicated hardware and virtualised appliance application delivery controllers (ADCs). As well as load balancing, these can provide SSL offload, WAN acceleration, low and high level security functions, application acceleration, firewalling, SSL VPN, remote access and much more.

Michael Almquist left the company in May 1998, over a year before the company went public on NASDAQ (symbol: FFIV) in June 1999 and was renamed F5 Networks. By mid-2005, industry analyst firm Gartner reported F5 had captured the highest share of the overall ADC market, and by late 2016*** the company earned almost $2 billion in annual revenue and employed over 4,500 people in 59 locations around the world, 1,200 of them in R&D. Refreshingly, they paid tax of $184m for their financial year 2016, in stark contrast to the likes of Google (who have paid £200m on profits (not revenue) of apparently over £7b since 2000 in the UK), Cisco and Starbucks. The company has no long term debt and assets of over $2.3 billion. Services earned just over 52% of revenues compared to products, with the largest sales market being the Americas, followed by EMEA, APAC and Japan. Research and development expenses for the financial year were $334m.

According to Netcraft®, in May 2009, 4.26% of all websites and around 3.8% of the top million sites were being served through F5 BIG-IP devices. A look at this Netcraft page: http://uptime.netcraft.com/up/reports/performance/Fortune_100, shows that on 7th February 2014, 20% of the US Fortune 100's public websites were served through F5 BIG-IP ADCs, including those of Bank of America, Dell, Disney, Lehman Brothers, Lockheed Martin, Wachovia and Wells Fargo.

The company's longest-serving President and CEO was John McAdam, who held these roles for fifteen years until he was briefly replaced by Manny Rivelo. Manny took the reins in July 2015 for six months until John McAdam returned on an interim basis. He was finally replaced by François Locoh-Donou in April 2016.



The company name was inspired by the 1996 movie Twister, in which reference is made to the fastest and most powerful tornado on the Fujita Scale: F5. Significant technical milestones and business events in F5 Networks' history include:

▪ 1895 – Nortel® is founded (as Northern Telecom Limited)
▪ 1995 – Brocade® is founded
▪ 1996 – F5 is incorporated (February)
▪ 1996 – Cisco® launches LocalDirector; technology based on its acquisition of Network Translation Incorporated that same year (the PIX® firewall platform also sprung from this acquisition)
▪ 1996 – Foundry Networks® is founded (originally called Perennium Networks and then StarRidge Networks, renamed Foundry in 1997) (later to be acquired by Brocade in 2008)
▪ 1996 – Alteon Networks® is founded (later to be acquired by Nortel in 2000)
▪ 1997 – F5 launches its first BIG-IP Controller (July)
▪ 1997 – ArrowPoint Communications® is founded by Chin-Cheng Wu (later to be acquired by Cisco in 2000)
▪ 1998 – F5 launches the 3DNS Controller (September)
▪ 1998 – Reactivity is founded
▪ 1998 – NetScaler is founded
▪ 1999 – F5 goes public on NASDAQ (June)
▪ 2000 – Cisco acquires ArrowPoint Communications (at a cost of $5.7b) for their content switching technology, which they release as the Content Services Switch (CSS) range the same year but fail to develop the product further
▪ 2000 – Redline Networks® is founded (later to be acquired by Juniper in 2005)
▪ 2000 – FineGround Networks® founded (later to be acquired by Cisco in 2005)
▪ 2000 – MagniFire Websystems® founded (later to be acquired by F5 in 2004)
▪ 2000 – Peribit Networks® (WAN optimisation) founded (later to be acquired by Juniper® in 2005)
▪ 2000 – Nortel acquires Alteon Networks (at a cost of $6b in stock) (the Alteon application delivery assets later to be acquired by Radware® in 2009)
▪ 2001 – The iControl XML-based open API is introduced by F5 with v4
▪ 2002 – v4.5 released, which includes the UIE and iRules
▪ 2002 – Acopia Networks® founded by Chin-Cheng Wu (who also founded ArrowPoint Communications in 1997) (later to be acquired by F5 in 2007)
▪ 2002 – Crescendo Networks® founded (later to have its IP acquired by F5 in 2011)
▪ 2003 – F5's DevCentral community and technical reference website launched
▪ 2003 – F5 acquires uRoam (at a cost of $25m) for its FirePass technology (SSL VPN, application and user security)
▪ 2004 – F5 acquires MagniFire Websystems (at a cost of $29m) for its web application firewall (WAF) technology TrafficShield, which forms the basis of the ASM product
▪ 2004 – F5 releases TMOS v9 and TCL-based iRules
▪ 2004 – Zeus Technology® releases Zeus Traffic Manager
▪ 2005 – F5 acquires Swan Labs® (at a cost of $43m) for its WAN optimisation technology (WANJet)
▪ 2005 – Juniper Networks purchases Peribit Networks (WAN optimisation) and Redline Networks (ADCs) at a cost of $337m and $132m respectively
▪ 2005 – Cisco acquires FineGround Networks (at a cost of $70m) and integrates its technology with the Catalyst switch line to create the ACE product
▪ 2005 – Cisco launches numerous Application-Oriented Networking (AON) products to support the convergence of 'intelligent networks' with application infrastructure
▪ 2005 – Citrix acquires NetScaler (at a cost of $300m)
▪ 2006 – Lori MacVittie joins F5
▪ 2007 – Don MacVittie joins F5
▪ 2007 – A10 Networks® launches its AX Series family of ADC appliances
▪ 2007 – F5 acquires Acopia Networks (at a cost of $210m) for its file virtualisation technology, which is later rebranded as its ARX range
▪ 2007 – Cisco acquires Reactivity (at a cost of $135m) for its XML gateway technology, which they launch as the ACE XML Gateway product the same year
▪ 2008 – F5's VIPRION modular, blade based hardware is released
▪ 2008 – Juniper discontinues its DX line of load balancers based on the Redline Networks technology acquired in 2005
▪ 2008 – LineRate Systems® is founded
▪ 2008 – Foundry Networks is acquired by Brocade (at a cost of $2.6b (Brocade originally offered $3b))
▪ 2009 – Nortel ceases operations
▪ 2009 – Radware acquires Nortel's Alteon application delivery assets (at a cost of $18m)
▪ 2009 – F5 releases TMOS and LTM v10
▪ 2010 – Cisco ACE XML Gateway sales end
▪ 2010 – Cisco Application-Oriented Networking (AON) products sales end
▪ 2011 – F5 releases TMOS and LTM v11
▪ 2011 – F5 acquires Crescendo Networks intellectual property (at a cost of $5.6m) for its application acceleration technology
▪ 2011 – Riverbed® acquires Zeus Technology (at a cost of $110m) for its software based ADC product Zeus Traffic Manager and rebrands it as Stingray (rebranded again as SteelApp™ in 2014)
▪ 2011 – Cisco CSS sales end
▪ 2012 – F5 acquires Traffix Systems® (at a cost of $140m) for its mobile/cellular 4G/LTE and Diameter signalling protocol switching technology
▪ 2012 – Riverbed and Juniper form a partnership in WAN optimisation and application delivery products, with Juniper licensing the Riverbed Stingray (later renamed SteelApp™) software ADC and Riverbed integrating Steelhead Mobile technology into Juniper's JunOS Pulse client
▪ 2012 – Cisco ends development of their ACE load balancing products and partners with Citrix to recommend NetScaler as their preferred product
▪ 2013 – F5 acquires LineRate Systems (at a cost of $125m) for its layer seven and application delivery software defined networking technology
▪ 2013 – F5 acquires Versafe® (at an unknown cost) for its mobile and browser security and monitoring products (the TotALL suite)
▪ 2013 – The iControl REST open API is introduced by F5 with TMOS v11.4
▪ 2013 – F5 becomes an OpenStack corporate sponsor
▪ 2013 – F5 launches the Synthesis framework and introduces SDAS: Software-Defined Application Services™
▪ 2013 – F5 reduces the price of the 10Mb limited Lab Edition of BIG-IP VE (including LTM, GTM, AFM, ASM, AVR, PSM, WAM and WOM) from around $2000 to just $95, in a gutsy move to capture market share
▪ 2014 – Riverbed renames Stingray (formerly Zeus Traffic Manager) to SteelApp™
▪ 2014 – F5 acquires Defense.Net® (at an unknown cost) for its cloud-based DDoS mitigation technology and services
▪ 2014 – F5 launches its Silverline cloud-based security service in the US, powered by its earlier Defense.Net acquisition
▪ 2015 – F5 launches the LineRate Point Load Balancer
▪ 2015 – F5 launches Silverline in EMEA
▪ 2015 – Manny Rivelo becomes President and CEO as John McAdam steps down after fifteen years
▪ 2015 – Manny Rivelo leaves and John McAdam resumes his roles as President and CEO
▪ 2016 – François Locoh-Donou becomes President and CEO
▪ 2016 – F5 is named a leader in the Gartner Magic Quadrant for application delivery controllers for the 10th year running
▪ 2017 – F5 launches Herculon security appliances and the DDoS Hybrid Defender and SSL Orchestrator products that run upon them. The Silverline WAF Express service and Container Connector are also launched

Having gained a leading market share in the load balancing and local traffic management enterprise market for some time, F5 is now targeting and looking for growth in additional markets, supported and evidenced by their ever expanding product range. These markets include: security (AFM, ASM and APM), cloud (AWS etc.), mobile signalling (Traffix) and acceleration, virtualisation and SSL VPN and RAS.

*This article suggests it was actually late 1995: http://www.udel.edu/PR/Messenger/98/1/cyber.html although it was indeed early 1996 when the company was incorporated.
**You'll find in many sources that Michael Almquist has effectively been written out of the company's history.
***Data taken from the company's September 2016 financial year end 10K annual report found here.



F5 Terminology

Before we get into the exam specifics we think it's worthwhile exploring the terminology surrounding F5 Networks' products (again). This isn't tested on the exam in any way, but without an understanding of the terms you'll find in this book and elsewhere, and particularly how they relate to F5's hardware and software, things will be harder for you than they need to be. To that end, the next three sections will explore the primary marketing term for the overall product range and then move on to the terms used in relation to the hardware and software (some of which are the same!).

What is BIG-IP?

So, just what is BIG-IP? It's confusing; back in the day, BIG-IP was the single name for everything and all you had was the BIG-IP Controller. Now, things are a bit different and you have the application switch hardware, Virtual Edition, TMOS, TMM, LTM, APM and all the rest. To add to the confusion, BIG-IP is quite often used interchangeably with TMOS or even just F5. As specific and, well, simply pedantic as I can be, I still catch myself saying things like "check your F5's logs…" or "what's the CPU load on this BIG-IP."

So, back to the question: what is BIG-IP? Well, simply put, it's all of the things I've mentioned so far; it's an all-encompassing term for the hardware, the Virtual Edition container, TMOS (the software components), TMM (a component of TMOS), LTM (which runs within TMM), APM and all the other modules.

BIG-IP Hardware

When discussing BIG-IP hardware, things become rather more specific, but keep in mind that for many hardware components there will be a related software component that runs on top of it, which has the same name. The primary hardware elements and their purpose are as follows:

▪ Traffic Management Microkernel (TMM); traffic processing hardware components as follows:
  o A L2 switch module (possibly using network processing NICs)
  o Packet Velocity ASIC(s) (PVAs) or embedded PVA (ePVA) using field-programmable gate arrays (FPGAs)
  o FPGAs providing ePVA, SYN check and other functions in hardware
  o Dedicated SSL encryption or FIPS hardware
  o Dedicated compression hardware (in some models)
  o TMM uses all CPUs (although one is shared with the HMS) and almost all system RAM, a small amount being provisioned for the HMS
▪ TurboFlex™; available on iSeries appliances only, provides FPGA-driven, user-selectable, pre-packaged optimisations that tightly integrate with other hardware and software components and free CPU resources for other tasks. Examples of supported optimisation profiles include layer 4 offload, denial-of-service (DoS) functions and tunneling encapsulation.
▪ Host Management Subsystem (HMS); responsible for system management and administration functions and runs a version of CentOS (Community enterprise Operating System) Linux (which includes the SELinux feature). The HMS uses a single CPU (shared with TMM) and is assigned a dedicated provision of the overall system RAM, the rest being assigned to TMM.
▪ Always On Management (AOM); provides additional 'lights out' management of the HMS via a dedicated management processor, as well as layer 2 switch management and other supporting functions for TMM.
▪ Baseboard Management Controller (BMC); another subsystem with a dedicated controller that is independent of the primary TMM and HMS components, which provides for out-of-band (or so-called 'sideband') management and monitoring. The BMC is the primary constituent of the Intelligent Platform Management Interface (IPMI) computer interface specifications and protocol, which we'll cover in the BIG-IP Software – TMOS section.
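As a quick, hedged illustration only (a minimal sketch; the exact fields reported vary considerably by platform and TMOS version), you can get a summary of the hardware components a given appliance reports from the command line:

tmsh show sys hardware    # lists chassis, platform and hardware component details for the appliance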

BIG-IP Software – TMOS

F5 Networks' Traffic Management Operating System (TMOS) is, first and foremost and for the sake of clarity, NOT an individual operating system. It is the software foundation for all of F5's network or traffic (not data) products, physical or virtual. TMOS almost seems to be a concept rather than a concrete thing when you first try to understand it. I've struggled to find a truly definitive definition of TMOS in any manual or on any website.

So, what is TMOS? It's not too tough after all, really; TMOS encompasses a collection of operating systems and firmware, all of which run on BIG-IP hardware appliances or within the BIG-IP Virtual Edition. BIG-IP and TMOS (and even TMM) are often used interchangeably where features, system and feature modules are concerned. This can be confusing; for instance, although LTM is a TMOS system module running within TMM, it's commonly referred to as BIG-IP LTM. I suspect we have the F5 marketing team to thank for this muddled state of affairs. TMOS and F5's so-called 'full application proxy' architecture were introduced in 2004 with the release of v9.0.



This is essentially where the BIG-IP software and hardware diverged; previously the hardware and software were simply both referred to as BIG-IP (or BIG-IP Controller). Now, the hardware or ‘platform’ is BIG-IP, and the software TMOS. Anything capable of running TMOS and supporting its full proxy counts as a BIG-IP, so the virtualised version of TMOS is called BIG-IP Virtual Edition (VE) rather than TMOS VE. Where the VE editions are concerned, just the TMM and HMS software components of TMOS are present (more details soon). The primary software elements of BIG-IP, collectively known as TMOS, encompass all of these things;

▪ TMM;
o Software in the form of an operating system, system and feature modules (such as LTM), other modules (such as iRules) and multiple network ‘stacks’ and proxies; FastL4, FastHTTP, Fast Application Proxy, TCPExpress, IPv4, IPv6 and SCTP
o Software in the form of the interface to and the firmware that operates the dedicated SSL and other cards and hardware
o A ‘native’ SSL stack
o Interfaces to the HMS
o TurboFlex FPGA firmware
▪ HMS; this runs a modified version of the CentOS Linux operating system and provides the various interfaces and tools used to manage the system such as the WebGUI, tmsh CLI, DNS client, SNMP and NTP. The HMS also contains an SSL stack (known as the COMPAT stack): OpenSSL, which can also be used by TMM where necessary.
▪ Local Traffic Manager (LTM); this and other ‘feature’ modules such as APM, ASM and DNS (formerly GTM) expose specific parts of TMM functionality when licensed. They are typically focussed on a particular type of service (load balancing, authentication and so on).
▪ AOM; lights out system management accessible through the management network interface and serial console.
▪ Intelligent Platform Management Interface (IPMI); IPMI is a hardware-level interface specification and protocol supported on BIG-IP iSeries hardware. It allows for out of band monitoring and management of a system independently of (or without) an operating system and when the system is ‘off’. Like AOM, IPMI functions are accessible through the management network interface and serial console.
▪ Maintenance Operating System (MOS); disk management, file system mounting and maintenance.
▪ End User Diagnostics (EUD); performs BIG-IP hardware tests.


TMOS Components in Detail Let’s explore some of the TMOS components in a little more detail. Traffic Management Microkernel (TMM) TMM is the core component of TMOS as it handles all network activities and communicates directly with the network switch hardware (or vNICs for VE). TMM also controls communications to and from the HMS. Local Traffic Manager (LTM) and other modules run within the TMM. TMM is single threaded until TMOS v11.3; on multi-processor or multi-core systems, Clustered Multi-Processing (CMP) is used to run multiple TMM instances/processes, one per core. From v11.3 two TMM processes are run per core, greatly increasing potential performance and throughput. TMM shares hardware resources with the HMS (discussed next) but has access to all CPUs and the majority of RAM.



Host Management Subsystem (HMS) The Host Management Subsystem runs a modified version of the CentOS Linux operating system and provides the various interfaces and tools used to manage the system such as the WebGUI, Advanced (Bash) Shell, tmsh CLI, DNS client, SNMP and NTP client and/or server. The HMS can be accessed through the dedicated management network interface, TMM switch interfaces or the serial console (either directly or via AOM). The HMS shares hardware resources with TMM but only runs on a single CPU and is assigned a limited amount of RAM. Always On Management (AOM) The AOM (another dedicated hardware subsystem) allows for ‘lights out’ power management of and console access to the HMS via the serial console or using SSH via the management network interface. AOM is available on nearly all BIG-IP hardware platforms including the Enterprise Manager 4000 product, but not on VIPRION. Note AOM ‘shares’ the management network interface with the HMS. Maintenance Operating System (MOS) MOS is installed in an additional boot location that is automatically created when TMOS version 10 or above is installed. MOS, which runs in RAM, is used for disk and file system maintenance purposes such as drive reformatting, volume mounting, system re-imaging and file retrieval. MOS also supports network access and file transfer. MOS is entered by interrupting the standard boot process via the serial console (by selecting TMOS maintenance at the GRUB boot menu) or booting from USB media. The grub_default -d command can be used to display the MOS version currently installed. Only one copy of MOS is installed on the system (taken from the latest TMOS image file installed) regardless of the number of volumes present. End User Diagnostics (EUD) EUD is a software program used to perform a series of BIG-IP hardware tests – accessible via the serial console only on system boot. EUD is run from the boot menu or via supported USB media.



TMOS Planes The following diagram provides an overview of the operational planes within TMOS and where each function and element resides;

BIG-IP Hardware Platforms BIG-IP Application switch hardware comes in a wide range of fixed and modular models. Both the physical hardware and the Virtual Edition are considered a form of application delivery platform; in other words, they run TMOS. Hardware provides superior performance and throughput using Field-Programmable Gate Array (FPGA) circuitry, specialised high performance network interfaces and optimised data paths. Further benefits are gained from the inclusion of additional dedicated hardware for SSL processing (all models) and compression processing (higher end models only) which provide much higher performance than commodity processors. Due to this higher performance the number of TMOS modules you can install on an appliance is also typically quite high, which lends itself well to functional consolidation. Clearly more suited to high workloads, hardware appliances are therefore typically placed in a logically central position in the network to maximise their benefits and ensure the maximum amount of traffic is easily processed through them.



The built-in AOM and BMC subsystems (covered in detail in the earlier F5 Terminology section) are a useful inclusion and vendor support is also simplified as both the hardware and software are supported and designed by the same vendor. Of course, for all these benefits there are some downsides, the primary ones being cost and a lack of flexibility. The hardware represents a significant upfront cost; however, make good use of its high performance and capacity and, over time, that cost is low compared to its true value. This is a primary design consideration: the higher the throughput (within suitable limits), the greater the potential return on investment (ROI). Moving to the second and related drawback, with the exception of VIPRION, hardware appliances in general simply don’t scale well. If you need to do more than your current device has capacity for, you have to (rip and) replace it with a larger device (known as vertical scaling). Equally, future (estimated) capacity requirements must be incorporated in the original purchase, which may mean the hardware is not used to anything like its full capacity for a significant time. These issues can be mitigated to some extent through the use of tiered designs, horizontal scaling made possible through device groups and related HA features and/or segmentation and multi-tenancy with vCMP, route domains and the like.

Appliances You don’t need to know this for the exam but it’s still useful to have an understanding of the physical BIG-IP platforms. They all (with the exception of VIPRION systems detailed in the next section) have a minimum specification of;

▪ LCD Panel & Physical Controls (some models now have a colour touch-panel)
▪ Intel dual core CPU
▪ Dual power supply capable (AC and DC)
▪ Gigabit Ethernet copper and fibre interfaces
▪ Front mounted LCD panel
▪ Dedicated management network interface
▪ Serial console interface
▪ Failover/HA serial interface
▪ Front to back airflow
▪ Software HTTP compression
▪ Hardware SSL encryption via ‘Cryogen’ card
▪ 8GB RAM
▪ 500GB HDD
▪ Up to 4,000 2K SSL transactions per second
▪ 5Gbps Layer four and layer seven throughput
▪ 4Gbps Bulk encryption
▪ 425,000 Layer seven requests per second
▪ 150,000 Layer four connections per second



Specifications increase up to the following for the higher end models (excluding the VIPRION platforms discussed shortly);

▪ Intel 12 core CPUs
▪ 40GbE Fibre interfaces
▪ Hardware compression (up to 40Gbps)
▪ 128GB RAM
▪ Dual 10,000RPM 1TB HDDs with RAID (SSDs are an option)
▪ Up to 240,000 2K SSL transactions per second (TPS)
▪ 84Gbps Layer four throughput
▪ 40Gbps Layer seven throughput
▪ 40Gbps Bulk encryption
▪ 4,000,000 Layer seven requests per second
▪ 1,500,000 Layer four connections per second

The only hot swappable components are the power supplies (assuming two are installed), SFP network interfaces and fan tray (in some models only). Hard disks are not hot swappable even on models that support RAID. FIPS Compliant and Turbo SSL versions of some models are also available. Here’s a quick rundown of the models available at the time of publication, from most powerful to least;

12250v L7 Requests Per Second: 4M L4 Connections Per Second: 1.5M Throughput L4/L7: 84/40Gb Bulk Encryption: 40Gb vCMP Capable: Yes TurboFlex: No Hardware Compression: 40Gb Processors/Cores: 1/12 Memory: 128GB Hard Drive(s): 1x 800GB SSD 10GB Interfaces: Yes 40Gb Interfaces: Yes


i10800 L7 Requests Per Second: 3.5M L4 Connections Per Second: 1.5M Throughput L4/L7: 160/80Gb Bulk Encryption: 40Gb vCMP Capable: Yes TurboFlex: Yes - Tier 3 Hardware Compression: 40Gb Processors/Cores: 1/8 Memory: 128GB Hard Drive(s): 1x 480GB SSD 10GB Interfaces: Yes 40Gb Interfaces: Yes


i10600 L7 Requests Per Second: 2.1M L4 Connections Per Second: 1M Throughput L4/L7: 160/80Gb Bulk Encryption: 40Gb vCMP Capable: No TurboFlex: No Hardware Compression: No Processors/Cores: 1/8 Memory: 128GB Hard Drive(s): 1x 480GB SSD 10GB Interfaces: Yes 40Gb Interfaces: Yes

10350v/-N/-F L7 Requests Per Second: 3M L4 Connections Per Second: 1.2M Throughput L4/L7: 84/40Gb Bulk Encryption: 24Gb FIPS Option: Yes for 10350v-F vCMP Capable: Yes TurboFlex: No Hardware Compression: 24Gb Processors/Cores: 1/10 Memory: 128GB Hard Drive(s): 1x 800GB SSD 10GB Interfaces: Yes 40Gb Interfaces: Yes

10255v/10250v/10200v-SSL L7 Requests Per Second: 2M L4 Connections Per Second: 1M Throughput L4/L7: 80/40Gb Bulk Encryption: 22Gb/22Gb/33Gb FIPS Option: Yes for 10200v vCMP Capable: Yes TurboFlex: No Hardware Compression: 24Gb Processors/Cores: 1/6 Memory: 48GB Hard Drive(s): 2x 400GB/1x 400GB SSD/2x 1TB 10GB Interfaces: Yes 40Gb Interfaces: Yes

10055s/10050s/10000s L7 Requests Per Second: 1M L4 Connections Per Second: 0.5M Throughput L4/L7: 80/40Gb Bulk Encryption: 22Gb vCMP Capable: No TurboFlex: No Processors/Cores: 1/6 Memory: 48GB Hard Drive(s): 2x 400GB/1x 400GB SSD/2x 1TB 10GB Interfaces: Yes 40Gb Interfaces: Yes



i7800 L7 Requests Per Second: 3M L4 Connections Per Second: 1.1M Throughput L4/L7: 80/40Gb Bulk Encryption: 20Gb vCMP Capable: Yes TurboFlex: Tier 3 Hardware Compression: 20Gb Processors/Cores: 1/6 Memory: 96GB Hard Drive(s): 1x 480GB SSD 10GB Interfaces: Yes 40Gb Interfaces: Yes

i7600 L7 Requests Per Second: 1.8M L4 Connections Per Second: 750K Throughput L4/L7: 80/40Gb Bulk Encryption: 20Gb vCMP Capable: No TurboFlex: No Processors/Cores: 1/6 Memory: 96GB Hard Drive(s): 1x 480GB SSD 10GB Interfaces: Yes 40Gb Interfaces: Yes

7255v/7250v/7200v-SSL L7 Requests Per Second: 1.6M L4 Connections Per Second: 775K Throughput L4/L7: 40/20Gb Bulk Encryption: 18/18/19Gb FIPS Option: Yes for 7200v vCMP Capable: Yes TurboFlex: No Hardware Compression: 18Gb Processors/Cores: 1/4 Memory: 32GB Hard Drive(s): 2x 1TB/1x 400GB SSD/2x 400GB SSD 10GB Interfaces: Yes 40Gb Interfaces: No

7055s/7050s/7000s L7 Requests Per Second: 800K L4 Connections Per Second: 390K Throughput L4/L7: 40/20Gb Bulk Encryption: 18Gb vCMP Capable: No TurboFlex: No Processors/Cores: 1/4 Memory: 32GB Hard Drive(s): 2x 1TB/1x 400GB SSD/2x 400GB SSD 10GB Interfaces: Yes 40Gb Interfaces: No



i5800 L7 Requests Per Second: 1.8M L4 Connections Per Second: 800K Throughput L4/L7: 60/35Gb Bulk Encryption: 20Gb vCMP Capable: Yes TurboFlex: Tier 3 Hardware Compression: 20Gb Processors/Cores: 1/4 Memory: 48GB Hard Drive(s): 1x 480GB SSD 10GB Interfaces: Yes 40Gb Interfaces: Yes

i5600 L7 Requests Per Second: 1.1M L4 Connections Per Second: 500K Throughput L4/L7: 60/35Gb Bulk Encryption: 15Gb vCMP Capable: No TurboFlex: No Processors/Cores: 1/4 Memory: 48GB Hard Drive(s): 1x 480GB SSD 10GB Interfaces: Yes 40Gb Interfaces: Yes

5250v/5200v L7 Requests Per Second: 1.5M L4 Connections Per Second: 700K Throughput L4/L7: 30/15Gb Bulk Encryption: 12Gb FIPS Option: Yes for 5250v vCMP Capable: Yes TurboFlex: No Hardware Compression: 12Gb Processors/Cores: 1/4 Memory: 32GB Hard Drive(s): 1x 1TB/400GB SSD 10GB Interfaces: Yes 40Gb Interfaces: No

5050s/5000s L7 Requests Per Second: 750K L4 Connections Per Second: 350K Throughput L4/L7: 30/15Gb Bulk Encryption: 12Gb vCMP Capable: No TurboFlex: No Processors/Cores: 1/4 Memory: 32GB Hard Drive(s): 1x 1TB/400GB SSD 10GB Interfaces: Yes 40Gb Interfaces: No



i4800 L7 Requests Per Second: 1.1M L4 Connections Per Second: 450K Throughput L4/L7: 20/20Gb Bulk Encryption: 15Gb vCMP Capable: No TurboFlex: Tier 2 Hardware Compression: 10Gb Processors/Cores: 1/4 Memory: 32GB Hard Drive(s): 1x 500GB 10GB Interfaces: Yes 40Gb Interfaces: No

i4600 L7 Requests Per Second: 650K L4 Connections Per Second: 250K Throughput L4/L7: 20/20Gb Bulk Encryption: 10Gb vCMP Capable: No TurboFlex: No Processors/Cores: 1/4 Memory: 32GB Hard Drive(s): 1x 500GB 10GB Interfaces: Yes 40Gb Interfaces: No

4200v L7 Requests Per Second: 850K L4 Connections Per Second: 300K Throughput L4/L7: 10/10Gb Bulk Encryption: 8Gb vCMP Capable: No TurboFlex: No Hardware Compression: 8Gb Processors/Cores: 1/4 Memory: 16GB Hard Drive(s): 1x 500GB 10GB Interfaces: Yes 40Gb Interfaces: No

4000s L7 Requests Per Second: 425K L4 Connections Per Second: 150K Throughput L4/L7: 10/10Gb Bulk Encryption: 8Gb vCMP Capable: No TurboFlex: No Processors/Cores: 1/4 Memory: 16GB Hard Drive(s): 1x 500GB 10GB Interfaces: Yes 40Gb Interfaces: No



i2800 L7 Requests Per Second: 650K L4 Connections Per Second: 250K Throughput L4/L7: 10/10Gb Bulk Encryption: 8Gb vCMP Capable: No TurboFlex: Tier 1 Hardware Compression: 5Gb Processors/Cores: 1/2 Memory: 16GB Hard Drive(s): 1x 500GB 10GB Interfaces: Yes 40Gb Interfaces: No

i2600 L7 Requests Per Second: 350K L4 Connections Per Second: 125K Throughput L4/L7: 10/10Gb Bulk Encryption: 5Gb vCMP Capable: No TurboFlex: No Processors/Cores: 1/2 Memory: 16GB Hard Drive(s): 1x 500GB 10GB Interfaces: Yes 40Gb Interfaces: No

2200s L7 Requests Per Second: 425K L4 Connections Per Second: 150K Throughput L4/L7: 5/5Gb Bulk Encryption: 4Gb vCMP Capable: No TurboFlex: No Hardware Compression: 4Gb Processors/Cores: 1/2 Memory: 8GB Hard Drive(s): 1x 500GB 10GB Interfaces: Yes 40Gb Interfaces: No

2000s L7 Requests Per Second: 212K L4 Connections Per Second: 75K Throughput L4/L7: 5/5Gb Bulk Encryption: 4Gb vCMP Capable: No TurboFlex: No Processors/Cores: 1/2 Memory: 8GB Hard Drive(s): 1x 500GB 10GB Interfaces: Yes 40Gb Interfaces: No

You’ll find further technical details here: https://www.f5.com/pdf/products/big-ip-platforms-datasheet.pdf.



VIPRION VIPRION is F5 Networks’ high density hardware consolidation platform; the Cisco Catalyst 6500 of the BIG-IP range if you will. The four VIPRION models are modular chassis with capacity for up to eight hot-swappable blade modules, all featuring hardware compression. The larger 16 rack unit (RU) high 4800 can accommodate dual duodecad (12) core CPU full-width blades, while the smaller 4RU 2400 holds single quad core CPU half-width blades. The features and benefits of these chassis are similar to those of other modular, expandable network devices;

▪ Hot-swappable blades, multiple power supplies and field replaceable components increase uptime and provide a high level of redundancy
▪ Consolidation of multiple devices in a high density form factor reduces and/or fixes hardware, environmental, operational and management costs
▪ High interface density and capacity
▪ Non-disruptive capacity scaling
▪ Easy expansion capabilities (aka vertical scaling or scale up)

You don’t need to know this for the exam but, if you’re interested, the technical highlights of the VIPRION platforms include;

▪ Load is dynamically shared across all available blades
▪ All physical interfaces on all blades are fully meshed using high-speed bridge Field Programmable Gate Arrays (FPGAs)
▪ The entire system is managed through a single interface
▪ Everything from firmware, software and configuration settings is automatically duplicated from the primary blade to every other blade
▪ The SuperVIP feature allows a VIP to span multiple blades
▪ Up to 256GB RAM per blade
▪ 100Gb Ethernet interfaces
▪ Up to 160,000 2K RSA SSL transactions per second (TPS)
▪ Up to 140Gbps layer four and seven throughput per blade
▪ Up to 80Gbps bulk encryption per blade
▪ Up to 5M Layer seven requests per second, per blade
▪ Up to 2.9M Layer four connections per second, per blade
▪ Up to 80Gb hardware compression per blade

Here’s a quick rundown of the VIPRION chassis and blade models available at the time of publication, from most powerful to least;




4800 Chassis Rack Units: 16 Slots: 8 Power Supplies: 4 Fan Trays: 2 Supported Blades: 4450, 4340N & 4300

4480 Chassis Rack Units: 7 Slots: 4 Power Supplies: 4 Fan Trays: 1 Supported Blades: 4450, 4340N & 4300

2400 Chassis Rack Units: 4 Slots: 4 Power Supplies: 2 Fan Trays: 1 Supported Blades: 2250 & 2150

2200 Chassis Rack Units: 2 Slots: 2 Power Supplies: 2 Fan Trays: 1 Supported Blades: 2250 & 2150

4450 Blade L7 Requests Per Second: 5M L4 Connections Per Second: 2.9M Throughput L4/L7: 140/140Gb Bulk Encryption: 80Gb vCMP Capable: Yes Processors/Cores: 2/12 Memory: 256GB Hard Drive(s): 1x 1.2TB SSD 10/40/100GB Interfaces: Yes/Yes/Yes

4340N Blade L7 Requests Per Second: 2M L4 Connections Per Second: 1.1M Throughput L4/L7: 80/40Gb Bulk Encryption: 20Gb vCMP Capable: Yes Processors/Cores: 2/6 Memory: 96GB Hard Drive(s): 1x 600GB 10/40/100GB Interfaces: Yes/Yes/No

4300 Blade L7 Requests Per Second: 2.5M L4 Connections Per Second: 1.4M Throughput L4/L7: 80/40Gb Bulk Encryption: 20Gb vCMP Capable: Yes Processors/Cores: 2/6 Memory: 48GB Hard Drive(s): 1x 600GB 10/40/100GB Interfaces: Yes/Yes/No

2250 Blade L7 Requests Per Second: 2M L4 Connections Per Second: 1M Throughput L4/L7: 155/80Gb Bulk Encryption: 36Gb vCMP Capable: Yes Processors/Cores: 1/10 Memory: 64GB Hard Drive(s): 1x 800GB SSD 10/40/100GB Interfaces: Yes/Yes/No

2150 Blade L7 Requests Per Second: 1M L4 Connections Per Second: 400K Throughput L4/L7: 40/18Gb Bulk Encryption: 9Gb vCMP Capable: Yes Processors/Cores: 1/4 Memory: 32GB Hard Drive(s): 1x 400GB SSD 10/40/100GB Interfaces: Yes/No/No

You’ll find further technical details here: https://www.f5.com/pdf/products/viprion-overview-ds.pdf.



Herculon The Herculon range was released in 2017 with the DDoS Hybrid Defender and SSL Orchestrator products. Despite being marketed as purpose-built, dedicated security appliance products, the hardware platforms at least are the standard i10800, i5800 and i2800 products. All these appliances support and rely upon TurboFlex for FPGA driven packet processing optimisations focused on the tasks they are designed to handle. The genuinely purpose-built element of these products is the simplified visual user interface and highly focused functionality. They also feature significant integration with dynamic external services such as IP Intelligence, F5’s Security Operations Center (SOC), Platform Security Team, Security Incident Response Team (SIRT) and 24x7 customer support.

BIG-IP Virtual Edition (VE) BIG-IP Virtual Edition (VE) provides a modern and lightweight alternative to purchasing hardware appliances. VE has been available since TMOS v10.1 and supports all but one feature module, as well as Enterprise Manager, BIG-IQ and Edge Gateway. It is available at a lower cost than hardware, with a wide variety of throughput levels (up to 40G now), providing licensing flexibility and the ability to use a ‘pay as you grow’ model. Information on which products are supported on which hypervisors can be found here: https://support.f5.com/kb/en-us/products/big-ip_ltm/manuals/product/vesupported-hypervisor-matrix.html. You also benefit from the various advantages of using virtualisation in general and can take advantage of the various methodologies, features and efficiencies of orchestration, cloud services and micro-services. Availability on Amazon Web Services (AWS) and other cloud providers allows for yet more (potential) cost control and flexibility. Of course, you lose the performance of hardware acceleration (particularly for SSL/TLS) but you don’t have to initially over-specify hardware to accommodate future growth or peaks in demand.

Keep in mind VE performance is highly dependent on the host hardware and hypervisor software used.

Potentially poor SSL/TLS performance is slowly being eliminated with recent advances and contemporary features now available with commodity Intel processors. It’s argued that network performance is a bottleneck introduced by most hypervisors, and that’s probably true at present, but we don’t see this being an issue for much longer as the vendors focus on it; even now it is only an issue if your traffic profile includes a large number of short-lived connections. These hypervisors are supported;

▪ Citrix XenServer (v5.6 sp2 and 6.0)
▪ Microsoft Hyper-V on Windows® 2008 R2 (Fully supported in TMOS v11.3.0)
▪ VMWare vCloud Director v1.5 onwards
▪ VMWare ESX/ESXi/vSphere v4.0 onwards
▪ Linux KVM (From TMOS v11.3)
▪ Community Xen (From TMOS v11.3)
▪ OpenStack (From TMOS v12.1.1)
▪ Amazon Web Services (AWS) (From TMOS v11.4.1)
▪ Microsoft Azure (From TMOS v12.0.0)
▪ Google Cloud Platform (From TMOS v13.0.0)



BIG-IP Features not available in the Virtual Edition include;

▪ CMP (until TMOS v11.3)
▪ Spanning Tree Protocols (vSwitches don’t run STP)
▪ Link Aggregation Control Protocol (LACP) – but Trunking is still available
▪ The hard-wired fail-over functionality and interface
▪ Federal Information Processing Standards (FIPS) 140-2 compliance (specific hardware is required)
▪ Interface mirroring
▪ The Serial console interface
▪ Always On Management (AOM)
▪ Baseboard Management Controller (BMC) and Intelligent Platform Management Interface (IPMI)
▪ TurboFlex
▪ Use of more than 4GB of memory (until TMOS v11.3)
▪ Use of more than 16 vCPUs
▪ Throughput of more than 1Gb (until TMOS v11.4)
▪ The Link Controller (LC) module
▪ Advanced SSL functions
▪ Advanced TCP profile settings

A free trial is available here: https://www.f5.com/trial/big-ip-trial.php.

The Different F5 Modules, Products & Services F5 have an ever-increasing and diverse set of products, modules and services. Local Traffic Manager (LTM) remains the ‘core’ product, with many other modules requiring it in order to work. However, F5’s expansion into the security market in particular means there is now significant diversity in the product line, and services unrelated to BIG-IP (such as cloud-based DDoS protection) are now a prominent part of the mix. This section provides a brief overview of nearly all of the software and services currently available; we’ve already covered the hardware. You’ll note that LTM is not listed as it is discussed in considerable detail in the BIG-IP Administration chapter.

Overview These are the modules available;

▪ Application Acceleration Manager (AAM) - web acceleration and WAN optimisation
▪ Access Policy Manager (APM) - access security including VPN, SSO and AAA
▪ Advanced Firewall Manager (AFM) - high performance firewall
▪ Application Visibility and Reporting (AVR)/Analytics - historical and near real-time statistics and metrics
▪ Application Security Manager (ASM) - web application firewall
▪ BIG-IQ - BIG-IP device, license, configuration, cloud and security management and orchestration
▪ Carrier Grade NAT (CGNAT) - highly optimised network address translation
▪ Edge Gateway - remote access including SSL VPN
▪ Enterprise Manager (EM) - BIG-IP device management
▪ DDoS Hybrid Defender - dynamic, high performance traffic analysis, DDoS identification and mitigation supported by various F5 services
▪ DNS - global server load balancing (GSLB)
▪ Link Controller (LC) - management, aggregation and monitoring of multiple internet connections (links)
▪ Policy Enforcement Manager (PEM) - mobile network subscriber and traffic reporting, management and control
▪ Secure Web Gateway (SWG) - forward proxy and web access gateway used in combination with the Websense service and APM
▪ SSL Orchestrator - high-performance decryption and encryption of outbound SSL/TLS traffic
▪ DDoS Hybrid Defender - multi-layered detection of and defense against network and application layer attacks

Services

▪ IP Intelligence Service - constantly updated database of IP addresses known to be used for malicious activities
▪ MobileSafe - corporate mobile device protection and security
▪ Silverline - DoS/DDoS protection and web application firewalling
▪ WebSafe - website analysis and malicious traffic detection by the F5 security operations centre (SOC)
▪ Websense - URL categorisation and internet risk protection used in combination with the SWG module

The following modules and products are end of life (EoL):

▪ ARX (file system load balancing)
▪ WebAccelerator (WAM)
▪ WAN Optimization Manager (WOM)
▪ Message Security Manager (MSM)
▪ Protocol Security Manager (PSM)
▪ FirePass

Access Policy Manager (APM) Module APM offers a unified, centralised access security solution for applications and networks, at typical TMM scale and performance; up to 3000 logins per second and 1m concurrent users. The module provides an increasing number of features and benefits;

▪ Dynamic, policy-based, context-aware access control
▪ Central control for diverse users and locations (remote, mobile, LAN and WLAN)
▪ Centralised, repeatable and consistent policy application
▪ Support for the CRLDP and OCSP dynamic certificate revocation protocols
▪ SSL VPN
▪ Authentication offload with support for RADIUS, LDAP, MS AD Kerberos, HTTP, RSA SecurID, OAM and TACACS+ authentication methods
▪ Single Sign On (SSO) features
▪ Java applet rewriting
▪ SAML support (from v11.3)
▪ Multi-vendor VDI support including VMware View, Citrix XenApp & XenDesktop, Microsoft RDP and Java RDP clients
▪ Enterprise Manager and BIG-IQ management
▪ High speed logging (HSL)
▪ Secure Web Gateway (SWG) integration



Access Policy Manager is available as an LTM or ASM add-on module for physical and Virtual Editions and VIPRION chassis platforms. It is also available as part of the BIG-IP Edge Gateway remote access product. APM (in particular as part of the Edge Gateway product) is the successor to the FirePass product. APM and LTM or ASM are now the successor to the Edge Gateway product itself. APM also supersedes and vastly improves upon the ‘legacy’ Advanced Client Authentication (ACA) Module although it is still available.

Advanced Firewall Manager (AFM) Module Introduced in early 2013 and available with TMOS v11.3 onwards, AFM simplifies and unifies the configuration and management of the Application Delivery Firewall (ADF) related features of TMOS, TMM and LTM. All relevant features are fully integrated into TMM and therefore provide very high performance; the figures are impressive. The ADF is defined as a combination of the AFM and LTM modules. Other common TMOS, TMM and LTM features and benefits apply and are possibly even more relevant in a security context;

▪ Comprehensive DDoS mitigation features as described in the TMM and LTM chapters (and also including those previously available with the PSM)
▪ The full proxy architecture
▪ Flexible scaling options and ScaleN
▪ Full standard HA feature support
▪ Very high throughput and performance
▪ TCP Optimisations, reducing response times
▪ iRules and data and protocol manipulation
▪ Application awareness and context
▪ Function consolidation and further integration benefits when used with other modules (particularly ASM, APM and GTM) and features (such as IP Intelligence and Geolocation)
▪ AVR/Analytics integration
▪ ICSA Network Firewall Certification
▪ High speed logging (HSL)
▪ SSL Termination
▪ VPN Termination

This module is available for physical and virtual editions and VIPRION chassis platforms. This LTM add-on Module is dependent on and can only be used in conjunction with LTM.

Application Acceleration Manager (AAM) Core Module The AAM Core module is available for physical and virtual editions and VIPRION chassis platforms and is included with the base LTM license. AAM Core is a subset of the combination of features previously available in the WA and WOM Modules. The Full version, detailed next, provides the full suite of features. Core includes;

▪ Symmetric Compression
▪ Dynamic Compression
▪ The SPDY Gateway Feature
▪ Bandwidth Controllers
▪ HTTP Caching
▪ HTTP Compression
▪ TCP Express
▪ OneConnect
▪ iSessions

This module is available for physical and virtual editions and VIPRION chassis platforms. This Module is dependent on and can only be used in conjunction with LTM.

Application Acceleration Manager (AAM) Full Module The full AAM module is available for physical and virtual editions and VIPRION chassis platforms. A combination of the previously separately available WA and WOM Modules, AAM provides the full set of features from those products. Features over and above the Core product include;

▪ Intelligent Browser Referencing (IBR) – increasing browser cache expiration dates (and other features) to reduce conditional GET requests
▪ Image Optimisation – reducing image size to something appropriate to the requesting device
▪ Content Reordering – modifying the order of served content to optimise page load times
▪ Dynamic caching/deduplication
▪ Multi-protocol optimisations (HTTP, FTP, MAPI, UDP)
▪ Forward Error Correction (FEC) – provides recovery of lost packets to avoid retransmission and increase throughput on poor networks or links
▪ Parking Lot – GET request queuing for expired cache objects
▪ MultiConnect – performs client-side link modifications, which, along with additional DNS entries, ‘force’ browsers to open additional connections to a site
▪ PDF Dynamic Linearisation
▪ A Performance Dashboard
▪ Symmetric and Asymmetric deployment options
▪ BIG-IP APM, ASM, and AAM layering
▪ iApps support
▪ Enterprise Manager and BIG-IQ management

This module is available for physical and virtual editions and VIPRION chassis platforms. This Module is dependent on and can only be used in conjunction with LTM.

Application Security Manager (ASM) Module ASM (initially based on technology gained through the 2004 acquisition of MagniFire Websystems) provides advanced web application aware ‘firewall’ (WAF) functionality. Unlike most modules it does not run within TMM but the HMS instead and therefore doesn’t benefit directly from typical TMM performance and scale. It provides protection against a wide range of attacks and attack vectors including;



▪ Web scraping (the automatic (mass) extraction of data from a website or sites)
▪ SQL Injection (execution of SQL code, ‘injected’ via a website or service’s user input methods (such as a form field), on the database backend used by that site’s web servers)
▪ Layer seven (aka Application Layer) DoS and DDoS ((distributed) denial of service attacks aimed at application functions)
▪ Cross-site scripting (aka XSS) (malicious browser code injection and trusted site permission hijacking)
▪ JSON payload attacks
▪ FTP Application attacks
▪ SMTP Application attacks
▪ XML Application attacks

Other features include;

▪ Vulnerability assessment and mitigation
▪ Integration with vulnerability scanners from Cenzic Hailstorm, IBM Rational AppScan, QualysGuard Web Application Scanning and WhiteHat Sentinel
▪ Session awareness
▪ White and black listing
▪ Regulatory compliance reporting (PCI for example)
▪ An automatic policy-building engine
▪ Enterprise Manager and BIG-IQ management
▪ WebSockets support

Application Security Manager is available on a selection of BIG-IP application switches, as a Virtual Edition and as an LTM add-on module for physical and virtual editions and VIPRION chassis platforms.

Application Visibility and Reporting (AVR) Commonly referred to as simply Analytics or BIG-IP Analytics, this Module provides detailed historical and near real-time HTTP and TCP/IP related statistics for iApps applications, Virtual Servers, Pool Members, URLs and even specific countries, allowing for in-depth traffic analysis. The available metrics and counters include transactions per second, server latency, page load time, request and response throughput, sessions, response codes, user agents, HTTP methods, countries, and IP addresses. Fine grained filters can be used to limit what is recorded, full transaction and data capture is possible and alerts (via SNMP trap, email or syslog) can be configured based on user defined thresholds. Remote logging of statistics data is also supported but unfortunately data cannot be collected via SNMP polling or iControl. IPv6 is fully supported from v11.1. Enterprise Manager can be used as a centralised Analytics reporting tool if required. Analytics is available as an LTM add-on feature for physical and virtual editions and VIPRION chassis platforms and is included with the base LTM license. This wasn’t always the case. This module is dependent on and can only be used in conjunction with LTM and needs to be provisioned as Nominal.



BIG-IQ Centralised Management Product Planned as the eventual successor to Enterprise Manager, BIG-IQ is a management and orchestration platform with considerable scope. As with any centralised management system, the main goal is to reduce operational costs, reduce administrative overheads and improve scalability. Currently BIG-IQ has four main components, each focused on specific functional areas; Access, Devices, Traffic and Security. The following modules and services are supported;

▪ AFM
▪ APM
▪ ASM
▪ LTM
▪ MobileSafe
▪ WebSafe

General features include;

▪ A comprehensive set of RESTful APIs
▪ So-called ‘single pane of glass’ management
▪ Centralised audit and control
▪ License management of BIG-IP Virtual Editions
▪ Role based access control (RBAC)

Here’s a brief overview of each component;

Access Management of up to 100 APM devices including;

▪ Policy verification, staging, auditing and monitoring
▪ Multi-device policy push
▪ Extensive reporting

Devices Centralised management of up to 200 physical, virtual or vCMP BIG-IP appliances, including;

▪ TMOS Software deployment
▪ Remote deployment of appliances hosted within VMware NSX, Cisco APIC, OpenStack or AWS
▪ Centralised license management of up to 5000 unmanaged devices for highly flexible provisioning
▪ Status and usage reporting including SSL certificate status
▪ Device discovery and monitoring
▪ Configuration backup and restore



Traffic Management and real-time monitoring of LTM configurations and objects including;

▪ RBAC for pool member and virtual server control
▪ Centralised logging and audit trails
▪ Configuration templating, staging and scheduling
▪ Virtual server cloning
▪ Health and statistics monitoring

Security Centralised AFM and ASM management including;

▪ RBAC for security instances
▪ Policy verification, staging, auditing and monitoring
▪ Multi-device policy push
▪ Rule monitoring, reporting and prioritisation
▪ Configuration snapshots
▪ Reporting and security alerts, including for WebSafe and MobileSafe

BIG-IQ is available as a standalone appliance and a virtual edition. It supports and can manage all hardware and virtual appliances running TMOS v11.4 and above including VIPRION.

BIG-IQ Cloud & Orchestration Product Orchestration of BIG-IP deployments in public and private clouds, with integration support for;

▪ Cisco APIC
▪ Amazon Web Services (AWS)
▪ OpenStack
▪ VMware environments including NSX

Additional features include;

▪ Automatic provisioning
▪ Dynamic application server ‘bursting’
▪ Tenant awareness and service catalogue provision
▪ iApps management, provision and templating
▪ Health and performance monitoring

Carrier Grade NAT (CGNAT) Module Introduced with v11.3 this Service Provider focused module provides highly optimised, available and scalable IPv4 and IPv6 Network Address Translation (NAT) and related features such as NAT44, NAT64, DNS64, DS-Lite, endpoint independent mapping, endpoint independent filtering and deterministic NAT. A number of the Module’s features rely on existing TMOS or LTM features such as HA, High-speed Logging (HSL), the full proxy architecture for translating or migrating between IPv4 and IPv6 objects and TCP Express. CGNAT is available as an LTM add-on module for physical and virtual editions and VIPRION chassis platforms.



Edge Gateway Product Edge Gateway was available as a virtual edition and on a selection of BIG-IP application switches but not on VIPRION chassis platforms. It is a combination of the APM, WA and WOM modules, providing secure remote access (RAS) gateway features such as;

▪ ICSA Certified SSL VPN
▪ Clientless access
▪ End point validation and security and access policy enforcement
▪ Single Sign On (SSO) and credential caching
▪ Multi-factor authentication
▪ Symmetric acceleration (if the client is using the Edge Client software)
▪ Wide AAA protocol support
▪ Wide remote access protocol support (Citrix, RDP, ActiveSync etc.)
▪ IPv6 Support
▪ Enterprise Manager Management

Enterprise Manager (EM) Product I have to admit that large scale management and monitoring bore me rigid; I blame this on the incumbent vendors happy to milk the cash cow rather than innovate and please their customers. I’ve actually used Enterprise Manager (v2.x) and whilst I’m unlikely to describe it as exciting it’s certainly an improvement over other so-called solutions I’ve seen and it is very focused. Enterprise Manager has numerous features and benefits;

▪ Aids with scaling up
▪ Improves device, application and service visibility and therefore troubleshooting capabilities and capacity planning and forecasting accuracy, as with other centralised management solutions
▪ Reduces cost and complexity
▪ Automates common tasks including device configuration backups, ASM policy deployments and reporting
▪ Custom alerts and thresholds
▪ Manages and eases;
o Device inventory tasks
o Service contract monitoring
o SSL TPS monitoring and certificate management
▪ Centralised configuration management including comprehensive search
▪ Allows for the use of configuration templates
▪ Granular (distributed) configuration management
▪ Uses a local or remote MySQL database allowing enterprise integration and high compatibility with various DB management and reporting tools
▪ Physical and virtual edition support for LTM, GTM, ASM, LC, AAM, APM and Edge Gateway

EM is available as a standalone appliance and a virtual edition. It supports and can manage all hardware appliances including VIPRION and Virtual Editions.

EM is very likely to be phased out and replaced by the BIG-IQ Device product.

DNS (formerly Global Traffic Manager (GTM)) Module Global Traffic Manager is a TMOS Module and is part of the core, long standing F5 product set. GTM primarily provides DNS based ‘global’ server load balancing (GSLB) for IPv4 and IPv6, intended for inter-Data Centre operation (rather than LTM’s intended intra-Data Centre operation). In order to make this Module a more attractive proposition, its feature set has been significantly expanded since 2012 and it now runs natively in TMM, rather than within the HMS. The considerable list of features and benefits includes;

▪ Global server load balancing (using DNS to direct traffic between multiple DCs)
▪ Dynamic ratio load balancing (load balancing based on weights derived from Node metrics such as CPU and memory usage)
▪ Wide area persistence (DNS response persistence; the same client will get the same response and load balancing will be ignored unless/until a timeout is reached)
▪ Geographic load balancing (load balancing a client to its geographically closest DC)
▪ Advanced health monitoring
▪ QoS Awareness
▪ DNS Security Extensions (DNSSEC) support (including rate limiting and centralised key management)
▪ Up to 10 million DNS responses per second using the VIPRION platform
▪ DNS Caching
▪ DNS Server consolidation and offload
▪ DNS DDoS and Local DNS (LDNS) cache poisoning protection
▪ DNS server load balancing (similar to LTM server load balancing)
▪ Not BIND based and therefore not subject to BIND security vulnerabilities
▪ Protocol inspection and validation
▪ DNS record type ACLs
▪ IP Anycast support
▪ IPv6 support

GTM is available as a standalone appliance, a virtual edition and an LTM add-on module for physical and Virtual Editions and on VIPRION chassis platforms. DNS Services are also available as an LTM add-on Feature Set.

IP Intelligence Service This subscription-based service is designed to be used in conjunction with ASM or LTM to block malicious traffic at the very edge of your network, thus increasing efficiency by avoiding processing overheads further within your infrastructure. The service provides a constantly updated database of IP addresses known to be used in relation to activities such as;

▪ Phishing sites and other fraudulent activity
▪ DoS, DDoS, SYN flood and other anomalous traffic attacks
▪ Botnet command and control servers and infected zombie machines
▪ Proxy and anonymisation services
▪ Probes, host scans, domain scans and password brute force attacks

This database can then be referenced by iRules to allow for automated blocking, allowing for context aware policy decisions.



Link Controller Product (& Module) LC provides features to manage, aggregate and monitor multiple ISP internet connections (links) and controls the traffic flow across them, based on multiple dynamic factors and user specified criteria. Traffic optimisation and prioritisation features are also available to improve application performance. TCPExpress, IPv6, iRules and SNAT are fully supported and there is an optional compression feature. BIG-IP Link Controller is available as a standalone version and as an LTM add-on module for BIG-IP application switches.

MobileSafe Product & Service This enterprise level product aims to protect and secure corporate mobile devices from various threats and ensure the company, its networks and its data are protected. The software is available for iOS and Android devices, with management achieved through a web portal run by the F5 Security Operations Center (SOC). Features include;

▪ Mitigates against various mobile device threats including; application tampering, unpatched operating systems, keyloggers, certificate forging and DNS spoofing
▪ Strong validation of SSL certificates
▪ Application-level encryption
▪ Malware detection
▪ Rooted and jail-broken device detection

Policy Enforcement Manager (PEM) Module Available from TMOS v11.3, PEM provides mobile network subscriber and traffic reporting, management and control. The module provides a host of features and benefits, presumably based on the assets of the Traffix Systems acquisition;

▪ Comprehensive analytics including per session and per application statistics
▪ L7 Intelligent traffic steering (to appropriate caches, CDNs, proxies) and bandwidth control to reduce network congestion and increase performance
▪ Traffic classification (p2p, VoIP, Web, streaming)
▪ Deep packet inspection
▪ Rate limiting, QoS, CoS and fair usage policy enforcement
▪ Charging system integration (PCRF, OCS)
▪ 3GPP standards based
▪ Subscriber awareness (IP address, IMSI, RADIUS data, Gx and/or mobile tower) and application context
▪ Function consolidation and further integration benefits when used with other modules (particularly CGNAT and AFM)
▪ Very high throughput and performance
▪ TCP Optimisations, reducing response times
▪ iRules and data and protocol manipulation
▪ Flexible scaling options and ScaleN
▪ Full standard HA feature support
▪ High speed logging (HSL)

Policy Enforcement Manager is available only as a standalone product, on high-end physical appliances, as a virtual edition and on VIPRION chassis platforms.



Secure Web Gateway (SWG) Module & Websense Cloud-based Service SWG provides control, security and management of inbound and outbound user driven web traffic; it’s effectively a secure internet proxy, or web access gateway as F5 like to call it. The module itself provides integration between Access Policy Manager and cloud-based Websense security services and updates. Combined, these components offer;

▪ URL categorisation and filtering
▪ User tracking
▪ Malware protection
▪ Endpoint integrity checking
▪ Policy-based blocking
▪ Real-time threat intelligence
▪ Detailed logging
▪ Splunk reporting

Silverline Cloud-based Service The Silverline service (Software as a Service or SaaS) delivers two core internet related security functions; DDoS protection and web application firewalling. Rather than implement these yourself on-site, you can simply transparently route your inbound traffic through the F5 SOC and let them do the hard work for you. The services are as follows;

▪ F5 Silverline DDoS Protection - typical TMOS supported DDoS protection and features, along with the resources and bandwidth required to sustain a high volume attack.
▪ F5 Silverline Web Application Firewall - ASM features (see the earlier section), along with the processing resources and bandwidth required to mitigate attacks.

WebSafe Service & Module The WebSafe service provides protection for the users and customers using your website properties, as well as the sites themselves. Traffic is transparently passed through the F5 SOC where it is analysed and malicious traffic dropped before it reaches your site. Additionally, the BIG-IP module component of this service, the Fraud Protection Service (FPS), provides additional features and protections at the local, Virtual Server level. This is fully integrated into the GUI from TMOS v11.6. The protection and features provided by this combination include;

▪ Malware prevention
▪ Phishing and pharming attack mitigation
▪ Fraud detection and prevention
▪ Application-level encryption
▪ Transaction monitoring, analysis and integrity checking
▪ Device and behaviour analysis
▪ Integration with MobileSafe
▪ Incident reporting
▪ Real-time alerts dashboard



DDoS Hybrid Defender (Herculon) A sophisticated, high performance and throughput security appliance, software and service bundle designed to defend against multi-vector network and application attacks. Only available in physical form on i10800, i5800 and i2800 appliances, which provide TurboFlex FPGA driven performance enhancements and acceleration. Herculon products have a unique, simplified interface and configuration requirements. Features include;

▪ Backed by F5’s Security Operations Center (SOC), Platform Security Team, Security Incident Response Team (SIRT) and 24x7 customer support
▪ Full SSL decryption
▪ Anti-bot capabilities
▪ Advanced detection methods
▪ Line rate capabilities
▪ Cloud-based volumetric attack prevention
▪ Traffic baselining and automatic configuration
▪ Multiple attack mitigation mechanisms
▪ Threat intelligence
▪ Granular reporting and visibility

SSL Orchestrator (Herculon) A security appliance designed to provide security devices with visibility of SSL/TLS traffic. Since more organisations and companies are transitioning over to SSL/TLS, this will render existing security devices (such as IPS, Anti-Virus etc.) useless if they cannot decrypt and review the payload of the packets. If you install the SSL Orchestrator in front of your existing security devices, you can decrypt the traffic, create policy-based flows and steer unencrypted traffic to your security devices, making them useful again. Only available in physical form on i10800, i5800 and i2800 appliances, which provide TurboFlex FPGA driven performance enhancements and acceleration resulting in very high performance and throughput. Herculon products have a unique, simplified interface and configuration requirements. Features include;

▪ Backed by F5’s Security Operations Center (SOC), Platform Security Team, Security Incident Response Team (SIRT) and 24x7 customer support
▪ Inline layer 3, inline layer 2, ICAP services and receive only modes
▪ Reverse and forward proxy operation
▪ Dynamic service chaining, monitoring and load balancing
▪ Context and identification services including geo-location, IP reputation and URL categorisation

Free and/or Open Source Products F5 publish and maintain a number of open source software packages and resources to help their customers manage their appliances (virtual or physical), modules and application services and integrate them into their wider environment as well as automate their deployment and administration. For anything involving source code you’ll probably find a repository for it on the main F5 GitHub company page here: https://github.com/F5Networks, or, for more informal work on the DevCentral Github page here: https://github.com/f5devcentral. Here’s a quick overview of the most popular or significant projects you’ll find there and also what’s available elsewhere.



Bigsuds Bigsuds is a Python library designed to make it easy to create programs and automate operations on F5 devices utilising the ‘legacy’ iControl SOAP API. https://github.com/F5Networks/bigsuds.
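As a rough illustration of how bigsuds is typically used (the management address, credentials and pool name below are placeholders, not part of the library), here is a short Python sketch:

import bigsuds

# Connect to the BIG-IP's management address using the 'legacy' iControl SOAP API
b = bigsuds.BIGIP(hostname='192.0.2.10', username='admin', password='admin')

# List the LTM pools configured on the device, e.g. ['/Common/web_pool', ...]
print(b.LocalLB.Pool.get_list())

# Read the members of one pool (get_member_v2 takes a list of pool names)
print(b.LocalLB.Pool.get_member_v2(['/Common/web_pool']))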

iControl REST Software Development Kit (F5-SDK) This is the equivalent Python library designed to make it easy to create programs and automate operations on F5 devices utilising the more recent iControl REST API. https://github.com/F5Networks/f5-common-python
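A comparable sketch using the F5-SDK's ManagementRoot object follows; again, the address, credentials and object names are placeholders, and the exact options supported depend on your SDK and TMOS versions:

from f5.bigip import ManagementRoot

# Connect to the BIG-IP's iControl REST interface
mgmt = ManagementRoot('192.0.2.10', 'admin', 'admin')

# List existing LTM pools
for pool in mgmt.tm.ltm.pools.get_collection():
    print(pool.name)

# Create a new pool in the Common partition (hypothetical name)
mgmt.tm.ltm.pools.pool.create(name='web_pool', partition='Common')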

Ansible Ansible is a popular and simple Python-based IT automation engine. Over 20 LTM and GTM related modules are available to support automated configuration of device settings, from NTP to SNAT Pools. You can find out more here: https://www.ansible.com/ansible-f5 and the latest list of stable modules (for download if required) is here: https://github.com/ansible/ansible/tree/devel/lib/ansible/modules/network/f5. Most of the modules require either the F5-SDK or bigsuds to be installed on the host running Ansible.

Containers F5 have released a number of container based applications to allow for BIG-IP product integration with container orchestration systems. These include the F5 BIG-IP Controller for Kubernetes, Cloud Foundry and Marathon. F5 have also developed the container-based Application Services Proxy (ASP) which acts as a proxy and load balancer for distributed applications running in containerised environments. You’ll find all of these container images on Docker Hub here: https://hub.docker.com/u/f5networks/ and can find out more about the ASP here: http://clouddocs.f5.com/products/asp/v1.0/.

OpenStack OpenStack is a set of free and open-source software tools that provide a (cloud) computing platform typically deployed as infrastructure-as-a-service (IaaS). It is used to build, operate and manage pools of compute, storage, and networking resources upon which applications and services are run. F5 provide a fair number of drivers, plugins and agents to allow for BIG-IP product integration with various OpenStack components and to enable orchestration abilities.



Cloud - AWS F5 offer supported and experimental CloudFormation templates to ease deployment of Virtual Edition EC2 instances in AWS in a declarative and repeatable manner. CloudFormation is AWS proprietary and specific, unlike Terraform, which is detailed next. You can find the templates here: https://github.com/F5Networks/f5-aws-cloudformation. Terraform provides the same services as CloudFormation but can be used with the top three cloud platforms and many other platforms, known as providers. You can find configurations for AWS, Azure and Google Cloud Platform (GCP) providers and additional resources here: https://github.com/f5devcentral/f5-terraform.

Cloud - Azure You can find F5 formulated Azure ARM templates here: https://github.com/F5Networks/f5-azure-arm-templates. You’ll find Terraform Azure provider configurations and many others as well as additional resources here: https://github.com/f5devcentral/f5-terraform.

Cloud - GCP You can find F5 formulated GCP GDM templates here: https://github.com/F5Networks/f5-google-gdm-templates. You’ll find Terraform GCP provider configurations and many others as well as additional resources here: https://github.com/f5devcentral/f5-terraform.

The Full Application Proxy The first release of TMOS, v9 in 2004, introduced the Full Application Proxy, providing a significant improvement in functionality over the prior Packet Based Proxy architecture used in previous products. The Packet Based Proxy is still available and can still be the most desirable, high performance solution where only L2-L4 functions are required. The Full Application Proxy architecture is just that; it functions as a proxy that fully and completely separates the client and server sides of a connection. There are in fact two connections; the client side connection is terminated on the proxy (the load balancer) and a new, separate connection is established to the server. The proxy acts in the role of server to the client and client to the real server. There are two related connection table entries too; one for client side, one for server side. Each can have independent parameters applied, such as idle timeouts, buffers, MTU, window size and so on. The Application Delivery Controller offers many different functions including:

▪ Host Monitoring - The BIG-IP system is constantly monitoring the status of each ‘real’ server host. In the upcoming diagram, the offline server (server 2) will not be used or considered in a load balancing decision seeing as its monitor is failing.
▪ Load Balancing - The BIG-IP system will load balance the traffic and make a decision on which real server will receive each request. In our upcoming diagram we are using Round Robin as the load balancing algorithm (a toy sketch of this decision follows below).
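To make those two functions concrete, here is a purely illustrative Python sketch (not F5 code; the member addresses are invented) of a Round Robin decision that skips a pool member whose monitor is failing:

from itertools import cycle

# Toy pool: on a real BIG-IP the 'monitor_up' state is maintained by health monitors
pool = [
    {'member': '10.0.0.1:80', 'monitor_up': True},
    {'member': '10.0.0.2:80', 'monitor_up': False},  # the offline 'Server 2'
    {'member': '10.0.0.3:80', 'monitor_up': True},
]

def round_robin(pool):
    # Yield the next available member, skipping any marked down
    # (assumes at least one member is up)
    for candidate in cycle(pool):
        if candidate['monitor_up']:
            yield candidate['member']

picker = round_robin(pool)
for _ in range(4):
    print(next(picker))  # 10.0.0.1:80, 10.0.0.3:80, 10.0.0.1:80, 10.0.0.3:80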

The following diagram demonstrates this full proxy functionality in respect to the TCP/IP connections;



1. The client establishes a TCP connection to a virtual IP address hosted on the BIG-IP system and sends an HTTP GET request.
2. The client TCP connection is terminated on the BIG-IP system. Once the client connection has been established, the BIG-IP system will make a load balancing decision and choose one of the three servers. Seeing that Server 2 is currently being marked as offline, it will choose between Server 1 and Server 3.
3. When it has chosen a ‘real’ server, the BIG-IP system will establish a new TCP connection and send the HTTP GET request to the ‘real’ server.
4. The BIG-IP system’s TCP connection will be terminated on the ‘real’ server and each of the client’s GET requests will be processed and responded to.

This happens for each new connection being sent to the BIG-IP system when using the Full Proxy architecture and it allows for a huge number of features and functions to be dynamically applied to each connection separately, as well as the inspection, manipulation and modification of application layer data. This architecture provides the foundation for many of the advanced features described in this book (as well as many, many more that are not) such as; iRules (working above OSI Model layer four), advanced Persistence methods, SSL offload, TCP Optimisations and HTTP Compression, Caching and Pipelining.
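To illustrate the two independent connections, here is a deliberately simplified, single-request Python sketch; this is in no way how TMM is implemented, and the addresses and timeouts are invented. The client-side TCP connection is accepted and terminated locally, and a completely separate server-side connection is opened, so each side can carry its own parameters:

import socket

LISTEN_ADDR = ('0.0.0.0', 8080)   # stands in for the virtual server address
POOL_MEMBER = ('10.0.0.1', 80)    # stands in for the chosen 'real' server

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(LISTEN_ADDR)
listener.listen(5)

client_side, client_addr = listener.accept()   # connection 1: terminated on the proxy
client_side.settimeout(300)                    # independent client-side idle timeout
request = client_side.recv(65535)              # e.g. the client's HTTP GET request

server_side = socket.create_connection(POOL_MEMBER)  # connection 2: proxy acts as the client
server_side.settimeout(30)                            # independent server-side timeout
server_side.sendall(request)
response = server_side.recv(65535)

client_side.sendall(response)                  # relay the response over connection 1
server_side.close()
client_side.close()
listener.close()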



If you don’t actually require any of these features or the benefits of two independent connections, then using the Packet Based Proxy is probably preferable as it is simpler and will provide even higher performance. Note in some documentation and other materials published by F5 the Full Application Proxy is sometimes referred to as the Fast Application Proxy.

The Packet Based FastL4 Proxy A Packet Based Proxy architecture is what was employed in the first generation of load balancers and generally only operates up to OSI Model layer four, the transport layer. Sometimes referred to as a Half Proxy, there is only a single connection which the load balancer modifies the TCP/IP parameters of, without the client or server being aware. The half proxy does not act as either a client or server from a TCP/IP perspective. The actual connection state and flow of packets is generally not controlled in any way. The following diagram demonstrates this half proxy functionality in respect to the TCP/IP connection;



1. The client establishes a TCP connection to a virtual IP address (VIP) hosted on the Application Delivery Controller (ADC) and sends a HTTP GET request.
2. The client TCP connection is NOT terminated on the ADC; only the destination IP address and other TCP/IP parameters are modified.
3. The client TCP connection is terminated by the relevant real server and each client GET request is processed and responded to.

Unlike with the Full Application Proxy, the advanced features described in this book (as well as many, many more that are not) such as iRules (working above OSI Model layer four), advanced Persistence methods, SSL offload, TCP Optimisations and HTTP Compression, Caching and Pipelining are not available with the Packet Based Proxy. Even though a Packet Based Proxy operates up to layer four, the Full Application Proxy still provides some advantages over it even at this layer, due to its use of separate client and server-side connections and the resulting ability to modify and control separate parameters for each.

The lines between the half and full proxy can sometimes get rather blurry as one obviously evolved from the other resulting in features that can be common to both. In the most simplistic terms, the half proxy does not act as a TCP/IP client or server; it operates transparently with the single connection established between the real client and server. The full proxy acts as a TCP/IP server to the client and client to the real server; it terminates the first and initiates the second and thus there are two independent connections.
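To make the distinction more concrete, the following is a minimal, hypothetical tmsh sketch (the object names, addresses and ports are invented for illustration and are not part of the lab exercises later in this book) showing how a standard, full proxy virtual server and a FastL4, packet based virtual server might be defined against the same pool:

create ltm pool web_pool members add { 10.0.0.10:80 10.0.0.11:80 } monitor http
create ltm virtual vs_full_proxy destination 192.0.2.10:80 ip-protocol tcp profiles add { tcp http } pool web_pool
create ltm virtual vs_fastl4 destination 192.0.2.11:80 ip-protocol tcp profiles add { fastL4 } pool web_pool

The first virtual server terminates client connections and opens separate server-side connections, which is what allows HTTP-aware features to be applied; the second simply forwards a single connection at layer four.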

OneConnect
Also known as Connection Pooling, the OneConnect feature minimises the number of server-side connections by reusing previously established connections for subsequent client requests. Rather than closing an idle connection to a real server (Pool Member) and reopening a new one for the next client request that gets load balanced to that server, the connection is maintained and re-used, within user configurable limits. This is demonstrated in the following diagram.



1. Client 1 establishes a TCP connection to a virtual IP address hosted on the BIG-IP system and sends a HTTP GET request.
2. The BIG-IP system establishes a new TCP connection to the end server and sends the HTTP GET request.
3. When a new request arrives at the BIG-IP from, for instance, Client 3 or Client 4, it will load balance the request to an end server. Once the load balancing algorithm has chosen an end server, the BIG-IP will review its own connection table to see if it contains an idle TCP session to that end server which is not currently being used by another client. If it finds one, it will reuse that TCP session and send the GET request directly to the end server.
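On the BIG-IP itself, OneConnect is enabled by attaching a OneConnect profile to a virtual server. As a minimal sketch (the profile and virtual server names below are hypothetical), the tmsh commands would look roughly like this:

create ltm profile one-connect oneconnect_custom defaults-from oneconnect source-mask 255.255.255.255
modify ltm virtual vs_http profiles add { oneconnect_custom }

A source mask of 255.255.255.255 restricts connection reuse to requests from the same client IP address, while a mask of 0.0.0.0 allows any client’s request to reuse any idle server-side connection.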


2. The TMOS Administrator Exam
The TMOS Administrator exam is the second within the F5 Professional Certification Program and is based on TMOS v11.4. Passing this exam is a prerequisite for all further certifications and exams. Passing the Application Delivery Fundamentals 101 exam is a prerequisite to taking this one. In this chapter we’ll discuss the wider Professional Certification Program and detail additional resources that you might find useful as you work through this guide and plan for the exam.

The F5 Professional Certification Program
The F5 Professional Certification Program (F5-PCP), as it is now known, has been undergoing radical transformation since the second half of 2012. Prior to this transformation, there were a limited set of exam subjects at two certification levels. With the new program there are now three levels of certification and four levels of exams or labs (there’s a difference as the first level exam does not result in any certification or credential award.) All four of the exam levels (three certification levels) are shown in the following table:

Exam Level | Exam Name | Certification Level | Skillset
101 | Application Delivery Fundamentals | None | Basic network, protocol; ADC concepts and operation; TMOS architecture and modules; Basic troubleshooting
201 | TMOS Administration | C1: F5 Certified BIG-IP Administrator (F5-CA) | Day to day maintenance and management of devices and configuration objects
301a | LTM Specialist: Architect, Set-up & Deploy | None | Architect; Setup; Deploy
301b | LTM Specialist: Maintain & Troubleshoot | C2: F5 Certified Technology Specialist (F5-CTS) LTM | Maintain; Troubleshoot
302 | BIG-IP DNS Specialist | C2: F5 Certified Technology Specialist (F5-CTS) BIG-IP DNS | DNS administration; GSLB; Multiple data centres; Configuration and administration
303 | ASM Specialist | C2: F5 Certified Technology Specialist (F5-CTS) ASM | Web application security and operation; Configuration; Administration
304 | APM Specialist | C2: F5 Certified Technology Specialist (F5-CTS) APM | RAS, AAA & SSL VPN configuration; Administration
401 | Security Solution Expert | C3: F5 Certified Solution Expert (F5-CSE, Security) | LTM; GTM; ASM; APM; AFM; BIG-IQ modules; IP Intelligence (IPI); WebSafe and MobileSafe
402 | Cloud Solution Expert | C3: F5 Certified Solution Expert (F5-CSE, Cloud) | LTM; GTM; BIG-IP Cloud Edition; Automation; Cloud Platforms; OpenStack; RedHat® OpenShift; Kubernetes©; F5 Container Connector; F5 Application Connector; RestAPI

In order to achieve the certification F5 Certified Technology Specialist (F5-CTS) LTM you will need to pass both the 301a and 301b. As you’ll know, the first exam doesn’t result in a certification; this is designed to encourage greater candidate commitment and deter ‘casual’ candidates who might normally take an ‘easy’ entry level exam simply to bulk out their CV. This, along with the wider network, protocol and application knowledge requirements increase the value and quality of the program and hopefully reduce the likelihood of accelerated training programs being formulated. Further information on the PCP can be found here: https://f5.com/certification.

Why Become Certified?
Before embarking on any certification path, this is a worthwhile question to ask of yourself. There are many benefits to certification (and debate on the entire subject) but most of them must be qualified based on factors such as which vendor, the program’s reputation, the employment market, employer attitudes, certification relevance and more. Remember that most vendors will make money from a certification program regardless of its benefits or value to you. Also keep in mind that a certification doesn’t prove you are competent. Here is our view on the typical benefits:

▪ Certification involves study, learning and the acquisition of knowledge – these are all good things but remember you’ll learn and benefit more if you go for something that isn’t an everyday part of your job. It’s still of benefit to certify in skills you already possess, and this will help fill any gaps in your knowledge, but studying something outside of your everyday will be more rewarding and hopefully open more avenues of opportunity in the future, especially if you choose something in demand or likely to be soon.

▪ Certification will improve your understanding, knowledge and self-confidence.

▪ Certification proves to others you can study, read, take notes, work alone, follow through, do research and organise yourself in general – assuming it hasn’t taken too long to achieve and isn’t considered an easy certification.

▪ Certification can help you keep your job, gain a pay rise or a promotion, although what you choose and its perceived value will be critical here.

▪ Certification gives you an advantage over other candidates without it although, again, what you choose and its perceived value will be critical.

Choosing a Certification
All of the benefits detailed previously will vary in ‘weight’ depending on the certification program (or programs) you choose to embark on. When deciding, you should consider the following:

▪ Forget the vendor; will you learn something useful about technology, skills that you can use even if the vendor went out of business?
▪ Does the certification carry any weight in the market; how is it perceived by employers/hirers?
▪ Do too many people have it?
▪ Is this certification alone good enough to achieve your goals?
▪ Is there demand for the certified skills?
▪ What benefits of certification does the vendor provide, if any?

Getting Started
If you’ve passed the Application Delivery Fundamentals 101 exam, which is a prerequisite to this one, you should already know this but in case you need a reminder, we’ll repeat it here. You should take a quick look at the certification pages on the F5 website that can be found here: https://f5.com/certification. A getting started page can be found here: K93611383: F5 certification | Introduction and very useful program policies can be found here: K90101564: F5 certification | Policies and program details. Along with the overview provided at the start of this chapter this should tell you all you need to know about the certification program. You should already have one but if not, register for an F5.com account here: https://login.f5.com/resource/registerEmail.jsp which will give you access to a number of resources exclusive to registered users. Equally, a DevCentral account will also be very useful and provides access to F5’s community support and documentation site. Register here: https://devcentral.f5.com/register. You can also follow @F5Certified on Twitter and join the very active LinkedIn group here: http://www.linkedin.com/groups?home=&gid=85832.



Taking Exams
As you should know, you must register with the F5 PCP here: http://certification.f5.com/ in order to be eligible to take this or any other exam and book it through Pearson VUE. The number of questions, time allowed, and passing score are provided when you book the exam. You can also find the passing score here: K29900360: F5 certification | Exams and study materials. Exams are typically $135 USD in the Americas, $145 USD in EMEA, and $155 USD in APAC and normally last at least 90 minutes. You must wait at least 15 days before you can retake a failed exam the first time, 30 days the second time, 45 days the third time and finally a full year the fourth time. You have to wait a full year before you can attempt an exam for the fifth time to decrease the possibility of cheating and question recording. The extended delay ensures you face a rewritten exam as exams are updated every two years. Certifications expire after two years; re-certifying your highest certification achieved recertifies all lower level certifications, as is the norm for most certification programs. Note that F5 Training courses only cover the F5-specific elements of each exam as you are expected to already have (or gain) knowledge and experience of general networking and network and application protocols. Don’t worry, this book, of course, covers everything.

Additional Resources
The following will be of particular interest to students studying for this exam:

Practice Exams
F5 now offers official practice exams to help you accurately predict your likely performance on the live, production exams. The practice exams are designed to mimic the real tests, which means you will receive 80 questions to answer within 90 minutes. As of this book’s writing the exams cost $25 for a single test which needs to be attempted within 30 days or $40 for two tests that need to be attempted within 90 days. To take a practice test visit https://portal-v5.examstudio.com/Default.aspx?id=20882 and log on using your candidate credentials.

Additional Study Material
Official study guides and exam descriptions can be found in this AskF5 article: K29900360: F5 certification | Exams and study materials.

AskF5
Available at: https://support.f5.com/ (previously https://ask.f5.com/ which still works too), AskF5 is the F5 Networks technical knowledge base and self-service online support site – no account is required. AskF5 provides knowledge base articles related to support, known issues, solutions, best practices and security advisories. You can also obtain release notes, manuals and guides.



DevCentral
F5 DevCentral (DC), available here: https://devcentral.f5.com/ is a community website featuring forums, blogs, tech tips, wikis, code sharing for iRules, iControl, iApps templates, tutorials and more. An account is required to access some content or contribute. Created as CodeShare in 2003 by Joe Pruitt, the architect of iControl (Joe is still with the company), DevCentral now has over 250,000 members in 191 countries. Membership grew over 25% in 2012 alone.

F5 University
Free, self-paced web-based training related to basic technologies and concepts, changes in new software versions and basic LTM configuration is available via F5 University here: https://university.f5.com/. An F5 support account is required to access the site. You can also gain lab access to an F5 running TMOS v11.4.0 (plus two Linux hosts) for two hours at a time; an invaluable tool for those without access to their own device.

Exam Blueprints
These can be found on the F5.com website and in the Downloads section of the CMS and provide a comprehensive list of the exam objectives and the skills and knowledge required to pass the exam. The blueprint for this exam can be found here: http://www.f5.com/pdf/certification/exams/F5_blueprinttemplate_TMOS_v2.pdf.

BIG-IP LTM Virtual Edition (VE) Trial
An LTM VE 90 Day Trial can be obtained from here: https://www.f5.com/trial/big-ip-trial.php - you’ll need an F5.com account to obtain it. You’ve probably already got one and, if not, it’ll be useful going forward. Unfortunately, the trial is for TMOS v12.1 which is slightly ahead of the TMOS version the exam is based upon.

BIG-IP VE Lab Edition
You can now purchase the latest BIG-IP VE Lab Edition for the very, very cheap price of $95 (it used to be around $2000). It’s limited to 10Mb total throughput but includes LTM, GTM (DNS), AAM, AFM, APM (10 user limit), ASM. It’s an incredibly cost effective tool for getting hands on experience using F5’s products, lab testing and building an understanding of how things work and interact. Unlike with the 101 exam, this one does require some practical knowledge of actually using or configuring BIG-IP. You can request a license here: https://www.f5.com/trial/.



BIG-IP VE on Amazon Web Services (AWS)
It takes more time, effort and research to get started but I can highly recommend AWS as an alternative to the VE Trial and Lab Editions, especially if you don’t have a lab server or powerful PC/laptop with the right software. As an added benefit you also get to learn about and gain practical experience with AWS (and the cloud) itself. The recently introduced Free Usage Tier (details here: http://aws.amazon.com/free/) makes building a small, private lab environment very cheap. You can create a Virtual Private Cloud (VPC) and a number of EC2 Linux server ‘micro instances’ for purposes such as running a web server or other services, all for free. Then you just need to add an LTM VE EC2 instance. It isn’t free, but you can create and run one, charged hourly, with any of the Good, Better or Best license bundles, at a very low cost. Those costs are constantly changing and depend on a number of factors (including taxes) but to give you an example, I can run a VE with the Good license for around $0.50 an hour. You only need to run your instances (and thus only get charged) as and when you need to. Of course, there is a steep learning curve to overcome but this is a very worthwhile option if your budget is limited and you have no other way to gain access to a device.

Other Clouds
BIG-IP VE is also now available on Microsoft® Azure™ and Google© Cloud Platform™. All the benefits detailed above for AWS generally apply although it could be debated that as AWS is the most prevalent and popular cloud provider, any cloud related skills you gain are potentially more valuable.



3. Building Your Own Lab Environment
A practical understanding of TMOS and LTM is an essential ingredient in any exam-passing formula. Whilst most readers of this book will hopefully have practical experience of F5 administration in a live, production environment, it’s rare any form of experimentation is possible. To that end, a so-called ‘lab’ that allows for this and more, without risk, is a must.

There are a multitude of services and products that a useful lab can be created with and obviously we can’t cover them all. In this book we’ll only cover one that is free for non-commercial use and does not require any additional expensive equipment. The lab environment should have no problem running on your PC or laptop as long as the following requirements are fulfilled:

CPU:
▪ The host system must have a 64-bit x86 CPU with a 1.3 GHz or faster core speed. Multiprocessor systems are supported.
▪ An AMD CPU that has AMD-V support, or
▪ An Intel CPU that has VT-x support.

Memory:
▪ To run all of the machines for the lab you will need 4GB of memory.

Disk:
▪ To run all of the machines for the lab you will need 30GB of disk space. The virtual machines are configured with more than 30GB, but they are thin provisioned and will only occupy more disk space as the virtual machines need it.

Obtaining the Different Components to Build Your Lab
In order to make it easier for you as the reader to build your own lab environment, we have created a webpage that contains mirrors, instructions and links to each component necessary to build your lab environment. This web page is located at: https://www.f5books.eu/building-your-own-lab/. Please visit this site to download all necessary components before you start building your own lab environment.

VMware Workstation Player™
The hypervisor used in these lab exercises is VMware Workstation Player. The reason we have used this hypervisor instead of other open-source alternatives is simply because the F5 Virtual Machine is not fully compatible with some of the open-source hypervisors currently available. To obtain VMware Workstation Player go to http://www.vmware.com/products/player/playerpro-evaluation.html. This hypervisor is free of charge for non-commercial use and will work natively with the BIG-IP VE.



BIG-IP VE Trial Evaluation Key
First you will have to obtain a BIG-IP VE 90 Day Trial Evaluation Key by filling out a trial evaluation form located at: https://downloads.f5.com/trial/secure/generate-eval-key.php?product=ltmve. We also keep an updated set of instructions and links at: https://www.f5books.eu/building-your-own-lab/. In order to fill out the form you will first have to register for a free F5 account. If you already have one, you simply have to log in and follow the instructions. When done, you should receive an email containing the Base Registration Key, so verify that the email address assigned to your account is correct. The email will also contain links to download the ESXi image but ignore this as we require a specific version for our lab exercises.

Downloading the BIG-IP VE Machine
After you have received your base registration key, download your BIG-IP® Virtual Edition (VE) from F5. In order to give you the correct user experience, we require you to download BIG-IP version 12.1.2. For instructions on how to download this version, please visit https://www.f5books.eu/building-your-own-lab/.

BIG-IP VE Lab Edition
You can now purchase the latest BIG-IP VE Lab Edition for the very, very cheap price of $95 (it used to be around $2000). It’s limited to 10Mb total throughput but includes LTM, DNS (formerly GTM), APM (10 user limit), AFM, ASM, AVR, PSM and AAM. It’s an incredibly cost effective tool for getting hands-on experience using F5 products, testing and building an understanding of how things work and interact.

The Lab Architecture
So, what are we building? It’s pretty simple yet covers all our needs, where the 201 syllabus is concerned at least. These are the computing components:

▪ A Linux client running Lubuntu, allowing the use of CLI and GUI based tools and software such as ping, PuTTY (SSH), Filezilla (FTP) and a web browser.
▪ A BIG-IP VE running version 12.1.2.
▪ A Linux server running the Tomcat Apache web server, configured with five virtual hosts, listening on different IP addresses and TCP ports.

We’ll be using these networks:

▪ Management - used for configuring the BIG-IP.
▪ External - the client-side network connecting the Linux client and BIG-IP; this will be a private (internal) network.
▪ Internal - the server-side network connecting the BIG-IP and Linux server; this will be another private (internal) network.



Here’s a diagram to help you visualise the end state:

At the end of this chapter you will find instructions on how to configure your lab environment.



Lab Exercises: Setting up Your Lab Environment
Exercise 1.1 – Installing VMware Workstation Player
Exercise Summary
In this exercise, we’ll install VMware Workstation Player, which will act as our hypervisor for the lab environment. Simply put, a hypervisor is a platform which creates and runs virtual machines. VMware Workstation Player is free for non-commercial, personal and home use which perfectly suits our purpose as a personal lab environment.

Exercise Prerequisites
Before you start this lab exercise make sure you have the following:

▪ The installation file for VMware Workstation Player. The version we’ll use in this exercise is v12.5.
▪ A machine which meets the system requirements detailed earlier, running either Windows or Linux.

Running the installation file
1. Run the VMware Workstation Player installation file. This will start the installation wizard. Click Next to proceed to the next page.
2. On the End-User License Agreement page, after reading the terms (should you wish to), check the I accept the terms in the License Agreement box and click Next to proceed to the next page.
3. On the Custom Setup page, if you need to change the installation path for VMware Workstation Player click Change. If not, click Next to proceed to the next page.
4. On the User Experience Settings page you have the option to choose whether the program should automatically check for updates at each startup and send anonymous system data and statistical information to VMware. These settings are completely optional and will not affect the lab. Choose how you would like to proceed.
5. On the Shortcuts page you have the option to add shortcuts to the Desktop and/or the Start Menu Programs folder. As with the previous step, this is entirely optional and will not affect the lab. Choose how you would like to proceed.
6. On the Ready to install VMware Workstation 12 Player page, click Install to start the installation of the program.
7. Once the installation is finished you will be presented with the Completed the VMware Workstation 12 Player Setup Wizard page. On this page, click Finish to end the setup wizard.


Exercise 1.2 – Importing the Virtual Machines into VMware Workstation Player
Exercise Summary
In this exercise we’ll proceed with importing the virtual machines necessary for the lab environment. The virtual machines are in an *.ova format which means that you can simply import them and, with very few adjustments, have a functioning lab environment.

Exercise Prerequisites
Before you start this lab exercise make sure you have the following:

▪ Successfully installed VMware Workstation Player

Obtaining a BIG-IP VE Trial Evaluation Key
At the time of writing, F5 offers a 90-day free trial edition of the BIG-IP VE. To access this trial edition, you will have to create an F5 account. This is free of charge and anyone can register for an account.

1. To get started with the BIG-IP VE Free Trial, go to the following web page:
   a. https://downloads.f5.com/trial/secure/generate-eval-key.php?product=ltmve
2. On our website we make sure that all links are working and are up to date. Therefore, if the previous link does not work, please go to:
   a. https://www.f5books.eu/building-your-own-lab/
3. Fill out the trial evaluation form and click on Request License Key. Please verify that the email address assigned to your account is correct as the Base Registration Key will be sent to this address.
4. In the email you will also receive a link to where you can download the BIG-IP VE machine. Ignore this as we require a specific version for our lab exercises.

Downloading the BIG-IP VE Virtual Machine
For our lab exercises we require you to run version/build 12.1.2.0.0.249.

1. To download your BIG-IP VE virtual machine, visit https://downloads.f5.com.
2. Click on Find a Download.
3. Choose the Product Line BIG-IP v12.x / Virtual Edition.
4. In the drop-down menu choose 12.1.2.
5. Click on Virtual-Edition.
6. If necessary, accept the Software Terms and Conditions.
7. Click on BIGIP-12.1.2.0.0.249.ALL-scsi.ova.
8. Select the download mirror appropriate for your location.
9. When clicking on one of the links a download pop-up will appear. Save the BIGIP-12.1.2.0.0.249.ALL-scsi file at a convenient location. We’ll be using this later in this exercise.
10. If the previous instructions do not work, please visit https://www.f5books.eu/building-your-own-lab/. On this website we keep updated instructions and links to the lab components necessary for the lab exercises.



Downloading the Client Virtual Machine
1. To download the Client Virtual Machine, go to the following web page: https://www.f5books.eu/building-your-own-lab/
2. Go to the Client Virtual Machine section and select one of the download mirrors available on the webpage.
3. Save the virtual machine image at a convenient location. We’ll be using this later in this exercise.

Downloading the Apache Server Virtual Machine
1. To download the Apache Server Virtual Machine, go to the following web page: https://www.f5books.eu/building-your-own-lab/
2. Go to the Apache Server Virtual Machine section and select one of the download mirrors available on the webpage.
3. Save the virtual machine image at a convenient location. We’ll be using this later in this exercise.

Importing the F5 BIG-IP Virtual Machine into VMware Workstation Player
1. Start VMware Workstation Player.
2. You should be presented with a licensing screen. Simply select Non-Commercial use only and you will arrive at the Welcome to VMware Workstation 12 Player screen.

3. Once you are at the welcome screen, click on the Player tab and select File > Open.



4. Navigate to the location where you saved the OVA files and select BIGIP-12.1.2.0.0.249.ALL-scsi.
5. This will launch the Import Virtual Machine wizard. Here you can rename the Virtual Machine and select where you want to store it. Make sure that the location you choose to store it has enough disk space. Once you are done, click Import.

6. Next you will receive the License Agreement for the Virtual Machine. Click Accept to continue.
7. Now the Virtual Machine is being imported into VMware Workstation Player. This might take a while depending on what hardware you are using.
8. Once the import is complete, the virtual machine should end up in the library list.


Importing the Client Virtual Machine into VMware Workstation Player
1. Start VMware Workstation Player. You should be presented with the library screen.
2. Click on the Player tab and select File > Open.
3. Navigate to the location where you saved the OVA files and select F5_Lab_Client_vX.ova, where X represents the current version of the OVA build.
4. This will launch the Import Virtual Machine wizard. Here you can rename the Virtual Machine and select where you want to store it. Make sure that the location you choose to store it has enough disk space.
5. Once you are done, click Import.


6. Now the Virtual Machine is being imported into VMware Workstation Player. This might take a while depending on what hardware you are using.
7. Once the import is complete, the virtual machine should end up in the library list.

Importing the Apache Server Virtual Machine into VMware Workstation Player
1. Start VMware Workstation Player. You should be presented with the library screen.
2. Click on the Player tab and select File > Open.
3. Navigate to the location where you saved the OVA files and select F5_Lab_ApacheServer_vX.ova, where X represents the current version of the OVA build.
4. This will launch the Import Virtual Machine wizard. Here you can rename the Virtual Machine and select where you want to store it. Make sure that the location you choose to store it has enough disk space.
5. Once you are done, click Import.


6. Now the Virtual Machine is being imported into VMware Workstation Player. This might take a while depending on what hardware you are using.
7. Once the import is complete, the virtual machine should end up in the library list.

Exercise 1.3 – Editing the Virtual Machine Settings
Exercise Summary
In this exercise, we’ll continue to set up our lab environment. You should now have all three machines imported into VMware Workstation Player. Next, we need to configure the network interfaces on each machine so that they reside on the correct network. In our lab we’ll use what is known as a LAN Segment. A LAN Segment is a private network that can be shared with other virtual machines.



Exercise Prerequisites
Before you start this lab exercise make sure you have the following:

▪ Successfully installed VMware Workstation Player
▪ Successfully imported the machines F5_Lab_ApacheServer, F5_Lab_Client and BIGIP-12.1.2.0.0.249.ALL-scsi

Editing the Virtual Machine Settings for the F5 BIG-IP Virtual Machine
1. Start VMware Workstation Player.
2. You should be presented with the library screen.
3. Click on the virtual machine named BIGIP-12.1.2.0.0.249.ALL-scsi.
4. Click on Edit virtual machine settings.
5. Click on the network adapter at the top of the list.

6. Click on LAN Segments. This will launch a separate window where you can create LAN Segments.



7. Create the first LAN Segment called MGMT by clicking Add and writing the name MGMT.
8. Create the next two LAN Segments using the same method, naming them Internal and External. When you are done, it should look like this:

9. Click OK twice to save the configuration.
10. Reopen the Virtual Machine settings by clicking Edit virtual machine settings.
11. Now for the first Network Adapter, assign it the LAN Segment called MGMT.



12. For the second Network Adapter, assign it the LAN Segment called External.



13. For the third Network Adapter, assign it the LAN Segment called Internal.

14. Click OK to save the configuration for the virtual machine.

Editing the Virtual Machine Settings for the Client Virtual Machine
1. Click on the virtual machine named F5_Lab_Client_vX.
2. Click on Edit virtual machine settings.
3. Now for the first Network Adapter, assign it the LAN Segment called MGMT.



4. For the second Network Adapter, assign it the LAN Segment called External.

5. Click OK to save the configuration for the virtual machine.

Editing the Virtual Machine Settings for the Apache Server Virtual Machine
1. Click on the virtual machine named F5_Lab_ApacheServer_vX.
2. Click on Edit virtual machine settings.
3. This virtual machine only has one Network Adapter; assign it the LAN Segment called Internal.



4. Click OK to save the configuration for the virtual machine.

Exercise 1.4 – Starting Up the Virtual Machines
Exercise Summary
In this exercise, we’ll start all of our virtual machines and perform some final tweaking. We’ll also make sure you can access the management interface of the BIG-IP system.

Exercise Prerequisites
Before you start this lab exercise make sure you have the following:

▪ Successfully installed VMware Workstation Player
▪ Successfully imported the machines F5_Lab_ApacheServer, F5_Lab_Client and BIGIP-12.1.2.0.0.249.ALL-scsi
▪ Successfully created all of the LAN Segments and assigned them to the correct interfaces

Starting the F5 BIG-IP Virtual Machine
1. Start VMware Workstation Player.
2. You should be presented with the library screen.
3. Click on the virtual machine named BIGIP-12.1.2.0.0.249.ALL-scsi.
4. Click on Play virtual machine. This will start the virtual machine.
5. The screen will turn black and display the message: GRUB Loading Stage 2.
6. The startup of the BIG-IP might take up to 10 minutes. Simply let it be and it will eventually give you the following screen:



Starting the Apache Server Virtual Machine
1. Start another instance of VMware Workstation Player. VMware Workstation Player only supports running one machine at a time, so in order to launch more than one, simply start VMware Workstation Player again.
2. Once you have started VMware Workstation Player you should be presented with the library screen.
3. Click on the virtual machine named F5_Lab_ApacheServer_vX.
4. Click on Play virtual machine. This will start the virtual machine.
5. The screen will turn black and text from the bootup will be printed out.
6. After a few minutes the operating system should be fully loaded and you should be presented with the following screen:



Starting the Client Virtual Machine
1. Start another instance of VMware Workstation Player.
2. Once you have started VMware Workstation Player you should be presented with the library screen.
3. Click on the virtual machine named F5_Lab_Client_vX.
4. Click on Play virtual machine. This will start the virtual machine.
5. The screen will turn black and text from the bootup will be printed out.
6. After a few minutes the operating system should be fully loaded and you should be presented with the desktop:

Changing the Keyboard Layout for the Client Virtual Machine
The default keyboard layout of this Linux machine is English (US). This may not be the preferred keyboard layout and if you need to change it please perform the following steps:
1. In the bottom right corner there is an icon of the American flag.

2. Right-click on this icon in order to launch a menu.
3. Click on the item named “Keyboard Layout Handler” Settings.



4. In this window, uncheck the setting named Keep system layouts:

5. This will unlock the possibility to add/remove keyboard layouts.
6. Click on Add and, in the list of available keyboard layouts, select the layout you wish to use and simply press OK.
7. The new keyboard layout should now be added to the list of keyboard layouts.


8. In order to eliminate the risk of the Linux host changing the keyboard layout, remove the US keyboard layout by selecting it and clicking Remove. When you are done, the final result should look like this:

9. Save your settings by clicking Close.
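If you prefer the command line, and assuming the Lubuntu client offers a terminal within a standard X session, the active layout can usually also be switched with the setxkbmap utility, for example for a Swedish layout:

setxkbmap se

Note that this only affects the current session; the panel applet steps above are what make the change persistent.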



4. Introduction to LTM - Initial Access and Installation
Before we can get started load balancing traffic to our servers, we need to perform the initial installation of the BIG-IP device. This chapter will take you through each step and the options available to you.

The BIG-IP LTM Module
Local Traffic Manager (LTM) is a core feature module for the Traffic Management Operating System (TMOS) that runs on BIG-IP platforms, physical or virtual. LTM is today’s version of the original product that the F5 company was created to provide. Most other modules rely upon LTM to provide their own functions. The LTM’s purpose is to manage and load balance various types of traffic, services and applications. These can be web, file, proxy, DNS or email servers, caches, voice services and even IP routers. The LTM module is highly configurable and offers a multitude of features that solve many of the challenges that service providers and enterprise companies face today. Traffic management functions relate to the many things beyond mere load balancing that LTM is capable of, including contextual traffic routing, programmatic traffic manipulation, security, monitoring and much more.

Initial Setup
There are three steps you need to perform in order to get any BIG-IP system up and running:

1. Configure an IP address, mask and, if required, default gateway for the dedicated Management (mgmt) interface. If the default management interface IP addressing configuration is appropriate for your environment, then you will not have to change its values.
2. License the system.
3. Use the Setup Utility in the Configuration Utility WebGUI to specify basic device configuration settings including: the root and administrator passwords, module provisioning, interfaces, VLANs and self IP addresses.

Configuring the Management Port IP Address
The default management port IP address of the BIG-IP system is 192.168.1.245/24. There are, however, some scenarios where DHCP will try to obtain an IP address from a DHCP server and if successful, this will be assigned to the management port instead. Here are all the possible scenarios:

▪ Where DHCP is not enabled, and in the absence of any prior static configuration, the device’s management port will be assigned the address 192.168.1.245/24. No default route is created, as no default gateway is assigned.
▪ When DHCP is enabled, but there is no DHCP server available, or DHCP fails for any reason, the 192.168.1.245/24 address is again assigned.
▪ If DHCP is enabled and works successfully, the assigned IP address (and possibly default gateway) can be observed using the LCD panel or the CLI via the serial port (using the command ip address show eth0).

Physical devices are (by default) not configured to use DHCP on the management port. However, Virtual Edition virtual machine images are configured to use it by default, with the exception of .iso image files.
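If you want to check or change this behaviour from the CLI once you have console or SSH access, on recent TMOS versions the management port’s DHCP behaviour is typically exposed as a system global setting in tmsh; a minimal sketch (verify the exact attribute on your version):

list /sys global-settings mgmt-dhcp
modify /sys global-settings mgmt-dhcp disabled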



The management interface IP address can be set or modified using any of these methods:

1. The LCD panel on the appliance.
2. The config command in the CLI.
3. tmsh (traffic management shell).
4. The Web GUI, but only if an address is already configured.

Configuration via the LCD Panel
If you have physical access to a device, you can configure the management interface IP address using the LCD panel, as follows:

1. Use the red X button to put the LCD into menu mode.
2. Use the navigation arrows to navigate to the System menu and use the Check mark (green tick) button to select it.
3. Select Management and press the Check button.
4. Select Mgmt IP and press the Check button.
5. Enter the IP address you want to use and press the Check button.
6. Enter the relevant netmask and then press the Check button.
7. Use the navigation arrows and select Mgmt Gateway and press the Check button.
8. Enter the default gateway you would like to use and press the Check button.
9. Use the navigation arrows and select Commit and then press the Check button.
10. Select OK and then press the Check button.

Here’s an image of the LCD panel:

Configuring the Management IP address Using the Touch LCD Panel (iSeries platforms)
On the new iSeries platform, the LCD panel has become a touch display. In order to configure the management IP address using the touch LCD panel, use the following instructions:

1. Touch the screen to activate Menu mode for the LCD.
2. Tap Setup.
3. Tap Management.
4. For the Type setting, tap to select IPv4 or IPv6.
5. Tap IP Address.
6. Use the arrows on the screen to configure the management IP address and the prefix length.
7. Tap Commit to save your changes.
8. Scroll down and tap Gateway.
9. Use the arrows on the screen to configure the default management route.
10. If you do not have a default route, enter 0.0.0.0.



11. Tap Commit to save your changes.

Configuration Using the Config Command
You’ll need to establish a serial console connection to the device in order to use this method. For a physical device, connect to the console port using a suitable cable and a terminal application. For a virtual machine, you’ll use a hypervisor console. If the device already has an IP address assigned, you can, of course, also use SSH. When using a physical serial console port, set the baud rate in your terminal application to 19200 as this is the default.

By default, the admin user account cannot log in to the CLI.

1. Launch a terminal client such as PuTTY and use one of the following methods:
   a. SSH to xxx.xxx.xxx.xxx, where xxx.xxx.xxx.xxx is the management IP address.
   b. Connect using the Serial Console Port. Select the COM port that is connected to the BIG-IP system.
2. Log in using the default user account root and the password default.
3. Type the following command to launch the application: config
4. On the introduction screen (shown below), press [Enter] to confirm OK.
5. On the Configure IP Address screen, you will be asked if you would like to Use Automatic Configuration of IP Address. The default value is No. Confirm this setting by pressing [Enter].
6. On the Configure IP Address screen, enter the IP address you would like to use. When done, use [Tab] to navigate to the OK selection and press [Enter] to move forward to the next screen.
7. On the Configure Netmask screen, enter the associated netmask. Use [Tab] to navigate to OK and press [Enter] to move forward to the next screen.
8. On the next screen, you configure the Management Route. This is the default gateway that the HMS (Host Management System) operating system uses. If you have a default gateway for the management port, select Yes. If not, then select No.
9. If you selected Yes, enter the IP address of a suitable default gateway.
10. If you selected No, or after you’ve entered a default gateway, select Yes on the Confirm Configuration screen to save the configuration.



Configuration Using TMSH
You’ll need to establish a serial console connection to the device in order to use this method. For a physical device, connect to the console port using a suitable cable and a terminal application. For a virtual machine, you’ll use a hypervisor console. If the device already has an IP address assigned, you can, of course, also use SSH. When using a physical serial console port, set the baud rate in your terminal application to 19200 as this is the default. By default, the admin user account cannot log in to the CLI. Once you’re connected and logged in, follow these steps:

1. Launch a terminal client such as PuTTY and use one of the following methods:
   a. SSH to xxx.xxx.xxx.xxx, where xxx.xxx.xxx.xxx is the management IP address.
   b. Connect using the Serial Console Port. Select the COM port that is connected to the BIG-IP system.
2. Log in using the default user account root and the password default.
3. You will either be at a Linux host shell prompt or directly in tmsh. This is indicated by the prompt in the terminal program:
   a. Linux Host: config #
   b. TMSH: (/Common)(tmos)#
4. The Linux host shell prompt is the default for the root user. In order to enter tmsh, type the following command: tmsh
5. In order to set the management IP address, type the following command: create /sys management-ip [ip address/netmask]
6. To optionally configure the management route, type the following command: create /sys management-route default gateway [gateway ip address]
7. In order to save the configuration, type the following command: save /sys config partitions all
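For example, using the hypothetical values of 192.168.1.246/24 for the management address and 192.168.1.1 for the management gateway, the complete sequence from the tmos prompt would look something like this:

(tmos)# create /sys management-ip 192.168.1.246/24
(tmos)# create /sys management-route default gateway 192.168.1.1
(tmos)# save /sys config partitions all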



The configuration changes you make in tmsh are only saved to the running configuration. If the BIG-IP device is rebooted the settings will be lost. Therefore, you must save the running configuration to the startup configuration by utilising the command save /sys config.

Configuration Using the WebGUI
This method requires the device to already have an IP address and default gateway configured. The Configuration Utility, usually called the WebGUI, is a browser-based interface that gives you secure access to your BIG-IP device for real-time configuration. An Apache OpenSSL web server runs on the BIG-IP device in the HMS to provide this interface. To access the Web GUI, browse to the management IP address of the device ensuring you use https://. You may also be able to use a self IP address of the device if configured and permitted. When accessing the BIG-IP system for the first time using the WebGUI you will be prompted to run the Setup Utility which is a setup wizard created to assist the BIG-IP administrator to perform initial configuration of the system. Using this wizard, you will have the opportunity to change the management port address. The following instructions presume that this has already been run.

To modify the management interface address, follow these steps:

1. Open up a browser session to https://xxx.xxx.xxx.xxx, where xxx.xxx.xxx.xxx is the management IP address. On the first log-on attempt for that particular browser you will be prompted with a certificate error, but this is normal. The BIG-IP system is shipped with a self-signed certificate which will not be validated by the web browser. Accept the certificate and when the webpage has been retrieved it should load up the logon screen.
2. Log in to the BIG-IP system using the default user name admin and the password admin.
3. Navigate to System > Platform.
4. Under General Properties change the Management Port Configuration to Manual.
5. Enter the required management IP address and netmask.
6. Optionally, enter a management route (a default gateway).
7. Save your configuration by clicking Update.

The HMS runs the RedHat Linux operating system and provides the various interfaces and tools used to manage the system such as the WebGUI, tmsh CLI, DNS client, SNMP and NTP.

Licensing the BIG-IP System
Once the device has a management IP address, it needs to be licensed using the Web GUI. This requires a base registration key: a 27-character long string stored in the /config/RegKey.license file, which uniquely identifies the device. This is used by the F5 license server to associate the device with the licensed modules you’ve paid for and thus enable their use on the device.



Not all systems are shipped with the base registration key (for instance the Virtual Editions), in which case it must be manually entered. The registration key is presented in the following format:
AAAAA-BBBBB-CCCCC-DDDDD-EEEEEEE
Make a note of the base registration key and keep it in a safe place because it uniquely identifies the device. In some scenarios, such as an upgrade, the RegKey.license file may be deleted. Subsequently, when you need to update the license of the system you won’t be able to. In order to retrieve the Base Registration Key you will have to open up a ticket with F5 Support and this might delay the process, causing a long outage of your system.

The BIG-IP system uses the base registration key to generate what is known as a dossier, which is what is actually passed to F5’s license server. The dossier contains numerous encrypted characters that uniquely identify your system. Multiple options are stored in the dossier including the registration key and the system time. To make sure nothing goes wrong when generating the dossier, verify that the system time is correct. On physical appliances the base registration key is already present on the box. If you are using a virtual edition you will most likely need to enter the registration key manually.

Automatic License Activation
In order to use automatic license activation, the BIG-IP device needs Internet access to reach the F5 licensing servers. Therefore, you will need to make sure the device is configured with a suitable IP address, default route and DNS server(s). These can be configured either manually or through DHCP. We have already covered configuration of the management interface IP address, mask and default gateway (and consequently default route), but we haven’t covered manual DNS configuration. Prior to the system being licensed, the only available method to manually configure DNS servers is via the CLI, using this command:
$ [tmsh] modify sys dns name-servers add { 10.11.12.99 }
With the Automatic method, the BIG-IP device generates a dossier and automatically sends it to the F5 license server. The F5 license server matches the dossier against its database, generates the license and sends it back to the device. The license is then installed on the BIG-IP device. The steps required to perform automatic license activation are as follows:

1. Open up a browser session to https://xxx.xxx.xxx.xxx, where xxx.xxx.xxx.xxx is the management IP address. On first logon for that particular browser, you will be prompted with a certificate error, but this is normal. The BIG-IP system is shipped with a self-signed certificate which will not be validated by the web browser. Accept the certificate and when the webpage has been retrieved it should load up the logon screen.
2. Log in to the BIG-IP system using the default user name admin and the password admin.
3. You will be presented with the Welcome screen; click Next to launch the Setup Utility.
4. At the License page, click Activate.
5. If the Base Registration Key value is not present, enter it.



6. Select Activation Method: Automatic.
7. For the Outbound Interface setting, select the mgmt interface.
8. Click Next to activate your BIG-IP device.
9. On the next page, press Accept to accept the EULA.
10. Wait while the BIG-IP device communicates with the F5 licensing servers, uploads the dossier, installs the License, and verifies its configuration.
11. Click Continue to load the Resource Provision page (we’ll cover Provisioning shortly).

Once the license has been installed, it is stored in the /config/bigip.license file.
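If you want to confirm the result from the CLI, the installed license and the options it enables can be displayed with tmsh; this is simply a quick check and is not required by the activation process:

tmsh show /sys license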

Manual License Activation
This method is used when the BIG-IP device does not have direct access to the Internet. It can also be necessary if the owner of the BIG-IP device wants to keep all the dossiers that the BIG-IP device generates. This activation method is, as you’d imagine, more involved than automatic activation. The dossier must be generated and then downloaded or copied to the connecting client’s clipboard using the WebGUI. A host must then submit the dossier to the F5 licensing server via a web page and obtain the generated license. The license is then transferred back to the BIG-IP device; again, either uploaded as a file or pasted from the clipboard, where it gets activated. In most scenarios, the device that you are using to access the BIG-IP device will have Internet access, so you can do it all using the same web browser that you use to manage the BIG-IP device. The steps required to perform manual license activation are as follows:

1. Open up a browser session to https://xxx.xxx.xxx.xxx, where xxx.xxx.xxx.xxx is the management IP address. On first logon for that particular browser, you will be prompted with a certificate error, but this is normal. The BIG-IP system is shipped with a self-signed certificate which will not be validated by the web browser. Accept the certificate and when the webpage has been retrieved it should load up the logon screen.
2. Log in to the BIG-IP system using the default user name admin and the password admin.
3. You will be presented with the Welcome screen; click Next to launch the Setup Utility.
4. At the License page, click Activate.
5. If the Base Registration Key value is not present, enter it.
6. Select Activation Method: Manual and click Next.
7. On the next page the dossier has been generated.
8. Copy the dossier to the clipboard and either click the link “Click here to access F5 Licensing Server” or open up a new web browser (tab) and browse to: http://activate.f5.com.
9. In the text box Enter your dossier, paste in the dossier that the BIG-IP device generated and click Next.
10. On the Accept User Legal Agreement page, check the box “I have read and agree to the terms of this license” and click Next.
11. Copy the license that was generated to your clipboard.
12. Head back to the BIG-IP device and paste the license into the License box and click Next.
13. Wait whilst the BIG-IP device installs the License and verifies its configuration.
14. Click Continue to load the Resource Provision page.



When performing a license reactivation, the BIG-IP system may reload the configuration which will temporarily interrupt traffic processing.

Once the license has been installed it is stored in the /config/bigip.license file.

Provisioning
Before you can start working with the modules that you plan on running on your BIG-IP system, you will first need to provision them. Provisioning can be done under System > Resource Provisioning. In this list you will be able to see all modules that currently exist, but note that even though they exist in this table they are not necessarily provisioned. The license determines which modules you have the right to provision; you can check this by reviewing the License State column. Even if a module is shown as Licensed it will still not be provisioned unless you click in the box named None and select one of five different levels.



Keep in mind that provisioning will only give you some control over how much CPU, RAM and disk resources each module uses; you cannot specify an exact value. The resources a module receives will be determined by the provisioning level that you have selected. You will be able to choose between the following levels:

▪ Dedicated – You will use this if you only run one module on your BIG-IP device.

▪ Nominal – This will give the module the minimum resources required in order to run and, if there are resources to spare, these will also be made available to the module. It will give a majority of the system resources to the module.

▪ Minimum – This will give the module the minimum resources required in order to function and, if there are resources to spare, these may be distributed to other modules.

▪ None – The same as having the module turned off, meaning that the module is not provisioned.

▪ Lite – This is used for selected modules that grant limited features for trial purposes.

When you provision multiple modules, you choose between Nominal and Minimum (Dedicated is only used for one module and None is the same as the module being turned off). This is very limiting, as you cannot specifically configure a set amount of memory or define how many CPU cores the modules should have.
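The same provisioning levels can also be set from the CLI. As a minimal sketch using tmsh (the LTM module is shown here purely as an example):

show /sys provision
modify /sys provision ltm { level nominal }
save /sys config

As in the WebGUI, changing a provisioning level may cause system daemons to restart or prompt a reboot.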

The Setup Utility
The Setup Utility is a quick and easy way to get started with your BIG-IP device once it is licensed and provisioned. It helps you configure the passwords of the system, networking, HA and other management settings. Here are some examples of what is configured during the Setup Utility process:

▪ Device certificates
▪ Host name
▪ Time zone
▪ Passwords for the root (used for the CLI) and the admin (used for the Web GUI) accounts
▪ SSH access
▪ Self-IP addresses
▪ HA
▪ VLANs
▪ NTP
▪ DNS
▪ ConfigSync
▪ Failover
▪ Mirroring

Self-IP Addresses
During the Setup Utility, you will configure what is known as self-IP addresses. A self-IP address is an IP address on the BIG-IP system which you associate with a VLAN. The BIG-IP will use this IP address to communicate with hosts in that particular VLAN, whether it is monitoring or application traffic.



When the BIG-IP system is configured in a High-Availability setup, you will also configure a floating self-IP address. This is the same as a regular self-IP address, but the ownership of this address will change depending on which BIG-IP system is active. In the following lab exercises, you will configure both a self-IP address and a floating self-IP address. This is part of the Setup Utility which you will run during the lab exercise. You can configure both, even though you do not configure the BIG-IP system in a high-availability setup. In case you add another BIG-IP system later on, there will already be a floating self-IP address configured.
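For reference, self-IP addresses (including floating ones) can also be created from tmsh once the VLANs exist; the names, addresses and VLAN below are hypothetical and do not match the lab exercises:

create /net self self_internal address 172.16.1.1/24 vlan internal allow-service default
create /net self float_internal address 172.16.1.3/24 vlan internal allow-service default traffic-group traffic-group-1

Assigning the second address to traffic-group-1 is what makes it a floating address that can move between devices in a device group.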

Lab Exercises: Initial Access and Installation
Exercise 2.1 – License, Provision and Initial Setup
Exercise Summary
In this exercise, we’ll go through the Licensing, Provisioning and the Initial Setup of your BIG-IP system. These are the actions necessary to get your BIG-IP system up and running and you will learn the following:

▪ How to access your BIG-IP system using the WebGUI.
▪ How to license your BIG-IP system.
▪ How to provision your BIG-IP system.
▪ How to create a baseline configuration using the Setup Utility.

Exercise Prerequisites
Before you start this lab exercise, make sure you have the following:

▪ Network access to the BIG-IP system’s management port.
▪ A BIG-IP base registration key, obtained through lab exercise 1.2.
▪ Access to the Internet.

Access the WebGUI via the Management Port
1. Open up a browser session to https://192.168.1.245. You will be prompted with a certificate error, but this is normal. The BIG-IP system is shipped with a self-signed certificate which will not be validated by the web browser. Accept the certificate; this will load up the logon screen.
2. Log in to the BIG-IP system using the default username admin and the password admin.
3. When logging on to the BIG-IP system for the first time you should be presented with the Setup Utility. Click Next in order to start the Setup Utility.
4. When the Setup Utility starts, it will immediately go to the License page. Click the Activate button in order to start the licensing process.



License your BIG-IP system

1. Use the Base Registration Key in order to generate a dossier. If the base registration key is already prepopulated, follow instruction 1a. If the base registration key is not prepopulated, follow instruction 1b.

   a. If your Base Registration Key is already prepopulated, select activation method Manual and click Next.

      Setup Utility > License
      General Properties
      Activation Method: Manual
      When done, click Next

   b. If your Base Registration Key is not already prepopulated, enter the following values:

      Setup Utility > License
      General Properties
      Base Registration Key: Enter the base registration key you obtained in Exercise 1.2
      Add-On Registration Key List: Leave blank
      Activation Method: Manual
      When done, click Next

2. Make sure that Manual Method is set to Download/Upload File.
3. In the Step 1: Dossier area, click Click Here to Download Dossier File.
4. Save the dossier.do file on your client computer.
5. In the Step 2: Licensing Server area, click Click here to access F5 Licensing Server. This will launch a new web browser session to the F5 Licensing Server.
6. When you are at the Activate F5 Product web page, under Select Your Dossier File, click Browse.
7. Browse to the dossier.do file you just downloaded. When done, click Next.
8. On the Accept User Legal Agreement page, check I have read and agree to the terms of this license. When done, click Next.
9. On the next page, click Download license.
10. Save the license.txt file on your client computer.
11. Go back to your web browser session that is connected to the BIG-IP system's WebGUI.
12. In the Step 3: License area, click Browse and browse to the license.txt file.
13. Select the license.txt file and click Open. When done, click Next.
14. You will be prompted with a white box stating, "BIG-IP system configuration has changed". Once it is done, click Continue and you will be presented with the Resource Provisioning page.
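If you prefer the command line, the dossier can also be generated there instead of in the WebGUI; a minimal sketch, assuming a root session and substituting your own registration key for the placeholder:

# Generate the dossier from the base registration key (placeholder key shown)
get_dossier -b XXXXX-XXXXX-XXXXX-XXXXX-XXXXXXX > /shared/tmp/dossier.do

# After activating the dossier at the F5 licensing portal, install the returned license file
cp /path/to/license.txt /config/bigip.license
reloadlic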


Provisioning Your BIG-IP System

1. On the Resource Provisioning page, provision your BIG-IP system using the following settings:

   Setup Utility > Resource Provisioning
   Module
   Management (MGMT): Small
   Local Traffic (LTM): Nominal
   When done, click Next

   Your BIG-IP system may produce a warning message stating that certain system daemons may restart, or the system may reboot, causing your web browser session to wait up to several minutes. This is normal when modifying the resource provisioning of the BIG-IP system.

Configuring the Device Certificates

1. Next, you will be presented with the Device Certificates page. Keep the default values and move on to the next page by clicking Next.

Configuring the Platform Settings

1. On the Platform page, configure your BIG-IP system using the following settings:

   Setup Utility > Platform
   General Properties
   Management Port Configuration: Manual
   Management Port: IP Address [/prefix]: 192.168.1.245, Network Mask: 255.255.255.0, Management Route: Leave Blank
   Host Name: bigip1.f5lab.com
   Host IP Address: Leave default
   Time Zone: Select the time zone appropriate for your location
   User Administration
   Root Account: Password: f5training, Confirm: f5training
   Admin Account: Password: f5training, Confirm: f5training
   SSH Access: Enabled
   SSH IP Allow: * All Addresses
   When done, click Next

You will be prompted with a notice that you have changed the password and will therefore have to re-login to the device again.



2. Log back into the BIG-IP system using the admin account with the password f5training. Once logged in, you will be redirected to the Setup Utility > Network page.

Performing the Standard Network Configuration

1. On the Setup Utility > Network page, under the Standard Network Configuration, click Next.
2. On the Setup Utility > Redundancy page, ensure that it contains the following settings:

   Setup Utility > Redundancy
   Redundant Device Wizard Options
   Config Sync: Check the box Display configuration synchronization options
   High Availability: Check the box Display failover and mirroring options, and select Network for the Failover Method
   When done, click Next

3. Next, we'll configure the VLANs, starting with the Internal Network configuration. Here we'll assign the VLANs. This includes the self-IP address, netmask and network interface. On the Setup Utility > VLANs page, enter the following settings:

   Setup Utility > VLANs
   Internal Network Configuration
   Self IP: IP Address [/prefix]: 172.16.1.31, Network Mask: 255.255.0.0, Port Lockdown: Allow Default
   Floating IP: Address: 172.16.1.33, Port Lockdown: Allow Default
   Internal VLAN Configuration
   VLAN Name: internal
   VLAN Tag ID: auto
   Select the following VLAN Interface and Tagging: VLAN Interfaces: 1.2, Tagging: Untagged (move Interface 1.2 to Untagged by selecting it and pressing the arrow key). When done, click Add.
   This should result in the following configuration: Interfaces 1.2 (untagged)
   When done, click Next



4. Next, we'll configure the VLAN for the External Network configuration. On this page, enter the following settings:

   Setup Utility > VLANs
   External Network Configuration
   Self IP: IP Address [/prefix]: 10.10.1.31, Network Mask: 255.255.0.0, Port Lockdown: Allow None
   Default Gateway: Leave Blank
   Floating IP: Address: 10.10.1.33, Port Lockdown: Allow None
   External VLAN Configuration
   VLAN Name: external
   VLAN Tag ID: auto
   Select the following VLAN Interface and Tagging: VLAN Interfaces: 1.1, Tagging: Untagged. When done, click Add.
   This should result in the following configuration: Interfaces 1.1 (untagged)
   When done, click Next

5. Next, we'll configure the High Availability Network Configuration. Even though we'll not configure our BIG-IP system in a high-availability setup, we can still add the configuration to prepare for it. For the High Availability communication, we'll use the existing internal VLAN which we created earlier in the Setup Utility. On the Setup Utility > VLANs page, enter the following settings:

   Setup Utility > VLANs
   High Availability Network Configuration
   High Availability VLAN: Click the Select existing VLAN button and select VLAN internal
   When done, click Next

6. On the next page, we'll be asked to configure NTP. This is not necessary for the lab exercises. Skip to the next page by clicking Next.
7. On the next page, we'll be asked to configure DNS. This is not necessary for the lab exercises. Skip to the next page by clicking Next.
8. Next, we'll configure the local address of the ConfigSync. On the Setup Utility > ConfigSync page, enter the following settings:

   Setup Utility > ConfigSync
   ConfigSync Configuration
   Local Address: 172.16.1.31 (internal)
   When done, click Next

9. On the next page, we'll configure the failover configuration. On the Setup Utility > Failover page, use the default settings specified in the following table:

   Setup Utility > Failover
   Failover Unicast Configuration (Local Address | Port | VLAN)
   172.16.1.31 | 1026 | internal
   192.168.1.245 | 1026 | Management Address
   Failover Multicast Configuration
   Use Failover Multicast Address: Unchecked (Disabled)
   When done, click Next

10. Next, we'll configure the mirroring configuration. On the Setup Utility > Mirroring page, use the default settings specified in the following table:

   Setup Utility > Mirroring
   Mirroring Configuration
   Primary Local Mirror Address: 172.16.1.31
   Secondary Local Mirror Address: None
   When done, click Next

11. Now we’ll finish the Setup Utility as we’ll not configure the BIG-IP system in a redundant high availability pair. On the Setup Utility > Active/Standby Pair page, under Advanced Device Management Configuration click Finished.



Once you are done with the Setup Utility, you will be redirected to the Statistics page and, at the top of the browser, you will be presented with the message Setup Utility Complete.

12. Log out from the BIG-IP WebGUI by clicking the Log out button and close down your web browser.

Exercise 2.2 – Verifying Administrative Access

Exercise Summary
In this exercise, we'll verify access to the BIG-IP system and that everything is working as it should be after the Initial Setup, and also change the Port Lockdown settings. In this lab, we'll perform the following:

▪ Test and verify the access to the BIG-IP system.
▪ Change the Port Lockdown settings.

Exercise Prerequisites
Before you start this lab exercise, make sure you have the following:

▪ Network access to the BIG-IP system's management port.
▪ A web browser and a terminal client such as PuTTY.

Verify HTTPS Access to the Management Port

1. Open up a browser session to https://192.168.1.245.
2. Log on to the WebGUI using the account admin and the password f5training.

   Were you able to connect and log in? You should be able to do this. If you cannot connect and log in, you will have to verify your configuration.

3. Log out from the BIG-IP WebGUI system by clicking the Log out button.

Verify HTTPS Access to the External Port

1. Open up a browser session to https://10.10.1.31.

   Were you able to connect? You should not be able to. This is because the current Port Lockdown setting of the external self-IP is set to Allow None. We configured this during the Setup Utility. This is good practice; you should never allow external access to your BIG-IP system if you do not have a specific requirement to do so. To solve this, proceed with the next step of this exercise.

2. Open up a browser session to https://192.168.1.245 and log in using the admin credentials.
3. Navigate to Network > Self IPs and click on the address 10.10.1.31. This will open up the configuration for that self-IP address.
4. On the Network > Self IPs > 10.10.1.31 page, change the following configuration:



   Network > Self IPs > 10.10.1.31
   Configuration
   Port Lockdown: Select Allow Custom
   Custom List: Check TCP and Port. Enter the port 443 and press Add.
   When done, click Update

The results should look like the following diagram:

5. Log out from the BIG-IP WebGUI system by clicking the Log out button.
6. Try to access the WebGUI once again using a new browser session to https://10.10.1.31. Since we have modified the Port Lockdown setting, you should now be able to access the WebGUI on the external interface.
7. You will be prompted with a certificate error, but this is normal. The BIG-IP system is shipped with a self-signed certificate which will not be validated by the web browser. Accept the certificate; this will load the logon screen.
8. Log in to the WebGUI using the admin credentials.



9. Now try to access the WebGUI on the external floating self-IP address. Open up a browser session to https://10.10.1.33.

   Did you succeed? You should not be able to access the WebGUI on the floating self-IP address. This is again caused by the Port Lockdown setting, as this is configured on a per self-IP basis. If you would like to access the WebGUI on the floating self-IP address, apply the same configuration as in the previous scenario, but to the floating self-IP address instead.

10. Log out from the BIG-IP WebGUI system by clicking the Log out button.

Verify SSH Access to the Management Port

1. Launch a terminal client such as PuTTY and SSH to 192.168.1.245 on port 22.
2. You will be presented with a security alert because the BIG-IP system is presenting a host key that is not cached on the desktop. Simply click Accept and it will continue to the login prompt.
3. Log on using the account root and the password f5training.

   Were you able to connect and log in? You should be able to do this. If you cannot connect and log in, you will have to verify your configuration.

4. Close down the SSH session by typing:

[root@bigip1:Active:Standalone] config # exit

Verify SSH Access to the External Port

1. Launch a terminal client such as PuTTY and SSH to 10.10.1.31 on port 22.

   Are you able to connect?

You should not be able to connect with SSH to the external port. But why? What is causing this problem? As we mentioned in the earlier exercise, Port Lockdown prohibits access on the external self-IP, and we only allowed access for HTTPS (TCP 443), not SSH. In order to allow access, perform the same configuration change but instead add TCP and port 22.
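For reference, the same Port Lockdown changes can be made from tmsh; a minimal sketch, assuming the self-IP object kept the name the Setup Utility gave it:

# Allow only HTTPS and SSH on the external self-IP (the Port Lockdown custom list)
tmsh modify net self 10.10.1.31 allow-service replace-all-with { tcp:443 tcp:22 }

# Verify the resulting setting
tmsh list net self 10.10.1.31 allow-service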

Verify SSH Access for the Admin Account to the Management Port

1. Launch a terminal client such as PuTTY and SSH to 192.168.1.245 on port 22.
2. Log on using the account admin and the password f5training.

   What happens? Does the connection fail? By default, the admin account does not have any access to the command line, so the result should be that the SSH connection is immediately terminated. However, this can be changed. To change this setting, continue to the next step of this exercise.

3. Open a browser session to https://192.168.1.245 and log in using the admin credentials.
4. Navigate to System > Users > User List and click on the admin account. This will open the configuration of the admin account.
5. On the System > Users > User List > admin page, change the following configuration:

   System > Users > User List > admin
   Account Properties
   Terminal Access: tmsh
   When done, click Update

6. Log out from the BIG-IP WebGUI system by clicking the Log out button.
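The same Terminal Access change can be made from the command line; a minimal sketch:

# Grant the admin account tmsh access over SSH
tmsh modify auth user admin shell tmsh

# Revert to the default (no terminal access) if required
tmsh modify auth user admin shell none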


7. Try again to launch a terminal client and SSH to 192.168.1.245 on port 22.
8. Log on using the admin credentials. This time you should be able to log on, but you will be placed directly into tmsh, as this is the Terminal Access we specified under the admin account configuration.
9. Close down the SSH session by typing:

admin@(bigip1)(cfg-sync Standalone)(Active)(/Common)(tmos) # quit

Verify Root Access to the WebGUI

1. Open a browser session to https://192.168.1.245 and log in using the root credentials.

   Were you able to log in? The root account does not have access to the WebGUI, therefore this attempt should fail. This behaviour is the default and cannot be changed.

Exercise 2.3 – Backing up the Configuration

Exercise Summary
In this exercise, we'll save the current configuration of the BIG-IP system to store a baseline configuration of the system which we can revert to when needed. It will also be used for backup purposes. In this lab, we'll perform the following:

▪ Create a UCS archive of the BIG-IP system configuration.

We'll cover backups and UCS archives in greater detail in the Maintain Configuration chapter, but to give you a short description, a UCS is a compressed archive that contains a snapshot of the BIG-IP system. It contains all the configuration files, the BIG-IP license, user accounts and their passwords. It will also contain the SSL certificates that you have uploaded to the device (including private keys, if not selectively excluded).

Exercise Prerequisites
Before you start this lab exercise, make sure you have the following:

▪ Network access to the BIG-IP system's management port.
▪ Storage on your client computer where you can store the backup.

Creating a UCS Archive of the BIG-IP Configuration

1. Open up a browser session to https://192.168.1.245 and log in using the admin credentials.
2. Navigate to System > Archives and, in the upper right corner, click Create.
3. Under File Name, enter baseline.ucs and click Finished. This will generate the UCS archive and it can take up to a couple of minutes before it is finished.
4. Once it is finished, it will prompt a message stating: /var/local/ucs/baseline.ucs is saved. Click OK in order to get redirected back to the Archives page.
5. On the Archives page, click on baseline.ucs. This will open up the properties of the baseline.ucs archive.
6. Under Archive File, click Download: baseline.ucs. This will download the UCS archive.
7. Browse to a convenient location and save the file. Now you have a backup of the Initial Setup that you have verified.
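The same archive can also be created from the command line; a minimal sketch producing the same file name:

# Create a UCS archive (written to /var/local/ucs by default)
tmsh save sys ucs baseline

# Confirm the file exists
ls -l /var/local/ucs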



Chapter Summary

▪ The management interface IP address can be set or modified using the LCD panel on the appliance, the CLI (bash), tmsh (the Traffic Management Shell) or the Web GUI.

▪ When using a physical serial console port, set the baud rate in your terminal application to 19200, as this is the default.

▪ The default account for CLI access is root with the password default.

▪ The BIG-IP system uses the base registration key to generate what is known as a dossier, which is what is actually passed to F5's license server. The dossier contains numerous encrypted characters that uniquely identify your system.

▪ The BIG-IP system can be licensed using the Automatic or Manual licensing method.

▪ Before you can start working with the modules that you plan on running on your BIG-IP device, you need to provision them. Provisioning gives you some control over how much CPU, RAM and disk resources each module uses.

▪ The Setup Utility is a quick and easy way to get started with your BIG-IP device once it is licensed and provisioned. It helps you configure the passwords of the system, networking, HA and other management settings.



Chapter Review

1. What is the default Management Port IP Address?
   a. 192.168.1.1/24
   b. 172.16.1.245/16
   c. 192.168.1.254/24
   d. 192.168.1.245/24

2. What command is used to change the Management Port IP address from the CLI (bash)?
   a. configure
   b. edit
   c. config
   d. ipedit

3. You have established a terminal connection to your BIG-IP system using PuTTY. You are using the admin account but are unable to log in. What is the problem?
   a. The admin account is by default not able to log into the CLI. This needs to be configured on the admin account.
   b. You have not yet run the Setup Utility. During the Setup Utility you provide the admin account with access to the CLI.
   c. The BIG-IP system does not have CLI access by default; this needs to be configured using the WebGUI.
   d. You have configured the wrong baud rate.

4. When using the Automatic License Activation Method, what requirements do you need to fulfil?
   a. Configure an NTP server.
   b. Provide the BIG-IP system with Internet access.
   c. Run the Setup Utility prior to running the Automatic License Activation Method.
   d. Register the device with F5 support.

5. What provisioning level will give the module the minimum resources available to run the module and, if there are resources to spare, it will receive these as well?
   a. Dedicated
   b. Lite
   c. Minimum
   d. Nominal



Chapter Review: Answers

1. What is the default Management Port IP Address?
   a. 192.168.1.1/24
   b. 172.16.1.245/16
   c. 192.168.1.254/24
   d. 192.168.1.245/24

The correct answer is: d
Where DHCP is not enabled, and in the absence of any prior static configuration, the device's management port will be assigned the address 192.168.1.245/24. No default route is created as no default gateway is assigned.

2. What command is used to change the Management Port IP address from the CLI (bash)?
   a. configure
   b. edit
   c. config
   d. ipedit

The correct answer is: c
The management interface IP address can be set or modified in the CLI (bash) using the following command: config

3. You have established a terminal connection to your BIG-IP system using PuTTY. You are using the admin account but are unable to log in. What is the problem?
   a. The admin account is by default not able to log into the CLI. This needs to be configured on the admin account.
   b. You have not yet run the Setup Utility. During the Setup Utility you provide the admin account with access to the CLI.
   c. The BIG-IP system does not have CLI access by default; this needs to be configured using the WebGUI.
   d. You have configured the wrong baud rate.

The correct answer is: a

4. When using the Automatic License Activation Method, what requirements do you need to fulfil?
   a. Configure an NTP server.
   b. Provide the BIG-IP system with Internet access.
   c. Run the Setup Utility prior to running the Automatic License Activation Method.
   d. Register the device with F5 support.

The correct answer is: b
To use automatic license activation, the BIG-IP device needs Internet access to reach the F5 licensing servers. Therefore, you will need to make sure the device is configured with a suitable IP address, default route and DNS server(s). These can be configured either manually or through DHCP.



5. What provisioning level will give the module the minimum resources available in order to run the module and, if there are resources to spare, it will receive these as well?
   a. Dedicated
   b. Lite
   c. Minimum
   d. Nominal

The correct answer is: d
Nominal will give the module the minimum resources required in order to run and, if there are resources to spare, these will also be made available to the module. In practice this gives the module the majority of the system's resources.



5. Local Traffic Objects

The BIG-IP Local Traffic Manager (LTM) is only one of many modules that can be run on the BIG-IP system and is the one we'll be focusing on the most throughout the chapters of this book. As we covered in the 101 Application Delivery Fundamentals Study Guide, the main purpose of LTM is to assist organisations and companies with their application delivery: load balancing traffic between servers, offloading server services such as SSL processing, monitoring applications to ensure traffic is not sent to a faulty or offline server, and adding TCP optimisation techniques.

The BIG-IP system is a default deny device, meaning that it will only accept traffic if it is configured to do so. In order for the BIG-IP system to process traffic, it needs to be configured with listeners. Virtual Servers are one type of listener, and in order for the BIG-IP LTM to process and load balance traffic, it needs to have certain Local Traffic Objects in place. These are Nodes, Pool Members, Pools and Virtual Servers, and we'll cover them all in this chapter.

Nodes

Nodes are objects which represent the real servers or other hosts on your network. Nodes are only represented by an IP address and this is important to remember. A node is uniquely identified by its IP address and therefore two nodes cannot have the same IP address (unless they are in different route domains). A node is assigned a service (port) and then added to a Pool; at that moment the node becomes a Pool Member. A single node can be added to multiple pools (or even the same pool but with a different port) and thus logically represent multiple pool members. With a node, you can configure the host's IP address, Node Name, Health Monitor, Ratio (used with some load balancing methods) and Connection Limit. You do not need to manually create nodes as they will be automatically created when you assign Members to a Pool. However, when assigning members to a pool you cannot specify any non-default configuration options, including the name.
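If you prefer the command line, a node can be created explicitly with tmsh; a minimal sketch, where the node name web1, its address and the icmp monitor are illustrative values:

# Create a node with a friendly name and an ICMP health monitor
tmsh create ltm node web1 address 10.10.20.1 monitor icmp

# Nodes are otherwise created automatically when a member is added to a pool
tmsh list ltm node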

Pool Members

Pool members are, conceptually, the actual application service that you load balance traffic to. Pool members are nodes and an associated service port (TCP or UDP listening port) that are added to a pool and have traffic load balanced across them. In other words, you can say that a pool member is the server-side listener. A pool member is uniquely identified by its IP address, service port and pool name. The members of a pool can also be members of other pools using the same or a different service port.

Pools

A Pool is very similar to a server farm or cluster. It is a logical object that contains one or more pool members that traffic is load balanced across. With very few exceptions, all of the members contained in a pool serve the same content. Pools have many configuration options including Health Monitors and the Load Balancing Method itself. Whenever traffic has been received by a virtual server and it is ready to pass traffic on to the pool member, the BIG-IP system will send the traffic to the pool and this is where the actual load balancing takes place. The pool will choose the best available member based on health monitors and the configured load balancing algorithm.
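A pool and its members can be created in one step from tmsh; a minimal sketch with assumed member addresses and the built-in http monitor (the nodes are created automatically if they do not already exist):

# Create a pool with two members and an HTTP health monitor
tmsh create ltm pool http_pool members add { 10.10.20.11:80 10.10.20.12:80 } monitor http

# Review the pool and the status of its members
tmsh show ltm pool http_pool members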



When adding pool members to a pool, you can either manually add them or choose from a list of previously configured nodes, but the service port must be specified in all cases. A pool is assigned to a virtual server as the Default Pool unless it will be used as a Clone or Last Hop Pool. You should note that a virtual server isn't required to have a default pool configured. Also, a pool can still be used even if it is not assigned to a Virtual Server, as it can be referenced using an iRule or a Local Traffic Policy.

The Local Traffic Policies feature was introduced in version 11.4 and is an upgraded version of HTTP Class. It has similar features to that of an iRule, where you can manipulate traffic based upon certain match conditions such as HTTP header, HTTP URI or HTTP host and perform actions when matched. You can, for instance, enable or disable compression, forward traffic to a specific pool/node or perform redirects. The huge benefit of using Local Traffic Policies is that they evaluate conditions in parallel, making them faster than iRules. They are also considered to be built-in functions, making them preferred over iRules.

Virtual Servers

As we previously mentioned, the BIG-IP system is a default deny device. In order for it to take in and process traffic you have to configure a listener, and a virtual server is one type of listener. Virtual servers are made up of a virtual IP address and a service port that the BIG-IP system listens to and receives traffic on. A virtual server configured with a specific IP address is referred to as a host virtual server. However, a virtual server can also be configured to listen on a network address, referred to as a network virtual server. How you configure your virtual server to listen really depends on how your application is running and what requirements you have. For a standard virtual server setup listening on a specific port using the TCP protocol, the traffic usually flows in the following manner:

1. A client wishing to access the application establishes a connection to the virtual server IP address (typically provided via DNS resolution).
2. If the incoming request (a SYN packet) matches the IP address and service (port) of a virtual server, the BIG-IP permits the packet and processes it.
3. Since the BIG-IP system is utilising a full proxy architecture, it completes the three-way handshake and establishes a TCP connection with the client.
4. Once the TCP connection is established and the client has sent an application request, the virtual server will load balance it to a particular pool member, decided by the configured load balancing algorithm.
5. Again, since the BIG-IP system is utilising a full proxy architecture, a new TCP connection will be established between the BIG-IP and the pool member on the service (port) that the pool member is listening on.
6. Once established, the BIG-IP system will create its own application request containing the same payload as the client's and send it to the pool member. This is because the client-side and server-side connections are completely separate.

It is very important to remember that the BIG-IP system utilises a full proxy architecture and that traffic is actually being listened to on the client-side and server-side of the connection. On the client-side the BIG-IP system is listening for traffic using the virtual server and its associated service (port). On the server-side the pool member is listening for traffic on its associated service (port).
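Putting these objects together, a basic standard virtual server could be created from tmsh as follows; the name, addresses and SNAT choice are assumptions for illustration:

# Standard virtual server on 10.10.1.100:80 with TCP and HTTP profiles,
# SNAT Automap and the pool created earlier as its default pool
tmsh create ltm virtual vs_http destination 10.10.1.100:80 ip-protocol tcp profiles add { tcp http } pool http_pool source-address-translation { type automap }

# Check the listener and its statistics
tmsh show ltm virtual vs_http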



When you install a BIG-IP system in your environment in order to load balance traffic originating from the Internet, you usually assign an external network range of routable IP addresses that you can use in order to build up externally accessible virtual servers. These IP addresses can then be linked together with an external DNS record in order for the client to easily access the application by just entering the FQDN.

Virtual servers don't just establish connections and load balance to a particular pool member, they also define how traffic should be processed. In order to do so we assign many different types of additional objects. These include pools, VLANs, Local Traffic Policies, Profiles and iRules, among many others. Some configuration objects are required, and some are only used depending on an application's requirements. For instance, every virtual server needs to have a protocol profile such as TCP, UDP or SCTP assigned. This is required to ensure the virtual server knows which network layer protocol to use to communicate with the client. The profiles assigned to a virtual server add the intelligence necessary for it to process traffic up to a certain level. Therefore, in order for the virtual server to participate in a TCP 3-way handshake, it needs to understand the TCP protocol.

There are many different virtual server types and the features they support can vary a lot. You will have to choose the appropriate type based primarily on application requirements. We'll cover each type shortly.

Wildcard Virtual Servers

Sometimes the BIG-IP system needs to process traffic that is not specifically destined for itself; that is, traffic with a destination address that is not specifically configured as a listener on the system (a NAT or virtual server for instance). The onward destination is usually one or more transparent devices such as firewalls, routers, caches or proxies. It is routed through the BIG-IP for convenience because of its central position in a network, or because it is installed 'in-line' and provides the only possible route between two networks. It may also be used to provide resilience should a transparent device fail and, of course, for load balancing purposes.

Since the BIG-IP system is a default deny device, a listener is required to permit and process this traffic. For this we create what is known as a wildcard virtual server, which is configured with a network IP address of 0.0.0.0 instead of a host IP address. When configured, if the BIG-IP system does not find a specific virtual server or NAT that matches the destination IP address of traffic it receives, it will try to match it with any wildcard virtual servers configured. A Port-Specific Wildcard Virtual Server will take precedence over a Default Wildcard Virtual Server.

The BIG-IP will process this traffic based on the type of virtual server you have configured. The type of virtual server used will differ based on your requirements and design. If you simply want to forward traffic to an adjacent device such as a firewall or router, you will most likely use a Forwarding IP or Forwarding L2 virtual server. If you are creating what is known as a firewall sandwich, where you have two or more firewalls you want to load balance across and provide redundancy for, you will most likely use a Performance L4 virtual server. This gives you the ability to create one or more pools containing the firewall nodes, which allows you to monitor the firewalls' health and load balance traffic between them.

In contrast to most other virtual server types, the wildcard virtual server usually does not modify the destination address, even when a pool is being used. That is because traffic should find its way back based on the routing information in the environment.



Default Wildcard Virtual Servers

A default wildcard virtual server is a wildcard virtual server that uses port 0 and handles traffic for all services. The destination IP address and port are therefore 0.0.0.0:0. A default wildcard virtual server usually listens on all VLANs by default, but this should be changed to listen only on specific VLANs. If configured with all VLANs enabled, you turn the BIG-IP into a router, allowing traffic to flow between all VLANs. When traffic is processed by a default wildcard virtual server enabled on all VLANs, it will accept the traffic and simply route it based on its routing table, essentially removing all security features by reversing its default deny security posture to default accept. You can limit this security exposure by restricting permitted source IPs, enabling packet filtering or adding firewall rules through the Advanced Firewall Manager (AFM) module.
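As a reference, a default wildcard forwarding virtual server restricted to a single VLAN might be created like this from tmsh (the name and the internal VLAN are assumptions):

# Default wildcard (0.0.0.0:0) IP forwarding virtual server,
# enabled only on the internal VLAN rather than on all VLANs
tmsh create ltm virtual vs_forward_all destination 0.0.0.0:0 mask 0.0.0.0 ip-forward profiles add { fastL4 } vlans add { internal } vlans-enabled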

Port-Specific Wildcard Virtual Servers

A port-specific wildcard virtual server handles traffic for a particular service only, and you define this by using a service name or port number. For HTTPS, this results in a destination IP address and port of 0.0.0.0:443.

The differences between the Port-Specific, Default and Non-Wildcard virtual servers are explained in the following diagrams. In these diagrams, all wildcard virtual servers are configured as Forwarding IP, meaning the destination will not be contained in a pool member, but rather routed based on the BIG-IP's routing table. This is the most common scenario.



Non-Wildcard Virtual Server – The client is trying to access the web server of 10.10.20.100:80. When this traffic arrives at the BIG-IP it will match this against the currently configured virtual servers. It finds a specific Non-Wildcard Virtual Server with the exact same IP address. Matching ends and the BIG-IP selects a pool member, initiates a new connection to the pool member and translates the destination IP address to the pool member’s.

Port-Specific Wildcard Virtual Server – This time the client is trying to access an HTTP-based resource with the IP address 143.45.12.32 on port 80. Again, the BIG-IP will try to match the destination IP address against the virtual server list. It does not find a non-wildcard virtual server, but it does match a Port-Specific Wildcard Virtual Server because the client is trying to access the resource on port 80. Depending on the virtual server type, the traffic will either be forwarded to one of the pool members within the pool (Performance Layer 4) or forwarded directly to the IP address, using the BIG-IP's routing table to determine the next-hop address (Forwarding IP/L2).



Default Wildcard Virtual Server – The client is trying to establish a connection to an FTP server on the Internet with the IP address 212.181.76.18 on port 21. This time there is no Non-Wildcard Virtual Server nor any Port-Specific Wildcard Virtual Server to match this traffic against. Instead, the only remaining option is a Default Wildcard Virtual Server, which listens on all ports. Depending on the virtual server type, the traffic will either be forwarded to one of the pool members within the pool (Performance Layer 4) or forwarded directly to the IP address, using the BIG-IP's routing table to determine the next-hop address (Forwarding IP/L2).

Local Traffic Objects Dependencies

All of the local traffic objects are clearly linked together and have a certain dependency on each other. In the following diagram you can follow a client's traffic from the client side all the way down to the node on the server side.

It may seem wasteful to use this many local traffic objects with a single virtual server, but don’t forget many of these objects can be re-used for others.



In some scenarios traffic objects are not linked so directly; for instance, it is possible to load balance traffic to a pool using an iRule (we'll get to this later on in the book). However, it is very important to understand the hierarchy between nodes, pool members, pools and virtual servers. This is illustrated in the following diagram:

The Different Types of Virtual Servers

The BIG-IP LTM offers many different virtual server types. There are special purpose virtual servers, general purpose virtual servers, and some that work only up to a certain layer of the OSI model. Therefore, it is very important to know how an application works and to choose a type that handles the traffic appropriately. In the next section we'll cover all of the different types and explain how they work.

The Full Proxy Architecture is only utilised for particular virtual servers. The reason for this is that, for some applications, the full proxy architecture actually breaks the application. Therefore, depending on the application, you will have to choose the correct virtual server type.



Standard Virtual Server

The Standard Virtual Server is used for most common and general purposes. As you read in the Introduction, the BIG-IP device uses a Full Proxy Architecture, which means that the BIG-IP device appears as a TCP peer to both the client and the server by maintaining two separate connections. A standard virtual server requires either a TCP, UDP or SCTP protocol profile, and you can also apply a layer 7 profile such as HTTP, FTP or SSL if you would like to process traffic beyond layer 4. The connection setup differs depending on whether you process traffic up to layer 4 or layer 7, which we discuss in the following sections.

Connection Setup with a Standard Virtual Server Using Only a Layer 4 Profile

When a Standard virtual server is configured with only a layer 4 profile assigned, in our case a TCP profile, the BIG-IP device will first establish a connection (via a TCP 3-way handshake) with the client before initiating a connection with the server. The BIG-IP utilises a full proxy architecture, as demonstrated in the following diagram:



Connection Setup with a Standard Virtual Server Using a Layer 7 Profile

As mentioned previously, you can assign layer 7 profiles to a virtual server. This includes HTTP, FTP and SSL profiles. When the Standard virtual server is also configured with a layer 7 profile, the connection setup looks a bit different. The BIG-IP device will first establish a connection with the client just like with a TCP profile; however, it will also require the client to send at least one application data packet before it initiates a connection with the server. This is demonstrated in the following diagram:



The BIG-IP LTM system may initiate the server-side connection prior to the first data packet for certain Layer 7 applications, such as FTP. This is because with FTP, the user waits for a greeting banner before sending any data.

Performance Layer 4 Virtual Server

A Performance Layer 4 virtual server is configured with a Fast L4 profile, which usually means it is configured to use the on-board ePVA FPGA chip (hardware) that helps accelerate traffic through the BIG-IP device. On VEs this is done in software but is still significantly faster than a standard L7 virtual server. The FastL4 profile essentially provides the original (first generation load balancer) packet-based (packet-by-packet) layer-four transparent forwarding half-proxy functionality used prior to TMOS and LTM v9.0.

The Fast L4 profile can use one of the following PVA Acceleration modes: full or assisted. You can, however, turn off PVA Acceleration by choosing the mode none. By default, the FastL4 profile will enable the PVA Acceleration Chip. The virtual server types that use this profile are Performance L4, Forwarding L2 and Forwarding IP. This means that these virtual servers will, by default, use the PVA chip.
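For reference, a Performance (Layer 4) virtual server can be created from tmsh by attaching the default fastL4 profile; a minimal sketch with assumed names and addresses:

# Performance Layer 4 virtual server using the default fastL4 profile
tmsh create ltm virtual vs_fastl4 destination 10.10.1.101:80 ip-protocol tcp profiles add { fastL4 } pool http_pool

# Inspect the virtual server and its statistics
tmsh show ltm virtual vs_fastl4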

The Virtual Editions (VE) of the BIG-IP device do not have a PVA Acceleration Chip and can, therefore, not use this to accelerate traffic.

If you are performing a packet capture on a Performance Layer4, Forwarding Layer 2 or Forwarding IP Virtual Server, you will have to turn off the PVA Acceleration in order to capture all of the packets that are passing through the BIG-IP device.

Connection Setup with a Performance Layer 4 Virtual Server

When configured as a Performance Layer 4 Virtual Server, traffic processing occurs on a packet-by-packet basis and operates as follows:

1. A client sends a SYN packet to the Virtual Server and it load balances this packet to one of the pool members.
2. As this Virtual Server works on a packet-by-packet basis, the BIG-IP device simply forwards the packet to the pool member (performing NAT and PAT as necessary).
3. When the pool member replies, the response is simply forwarded back to the client (again performing NAT and PAT as necessary).

In other words, the BIG-IP device operates in a half-proxy state which means that it does not establish or get directly involved in TCP connections (like it does in a Full Proxy Architecture), it simply forwards packets like a router. However, do remember that the traffic is still stateful. This is demonstrated in the following diagram:





Performance HTTP Virtual Server

A Performance HTTP virtual server is automatically assigned a Fast HTTP profile. The Fast HTTP profile is a scaled-down version of the HTTP profile, and when the Performance HTTP virtual server is combined with this profile, it will reduce the number of connections to the back-end HTTP servers and increase performance. However, this only applies to certain types of traffic. For instance, when you are load balancing Internet-based traffic, F5 recommends the regular HTTP profile.

The Fast HTTP Profile

The Fast HTTP profile is appropriate for the following traffic conditions:

▪ The traffic is generated by well-behaved clients and servers.
▪ The traffic is using protocol headers which are contained within a single packet.
▪ The traffic is being produced by load generators.
▪ The traffic contains few network problems, such as dropped or out-of-order packets.

Advantages of the Fast HTTP Profile

Using the Fast HTTP profile will provide you with the following advantages:

▪ Optimisation for certain traffic – The Fast HTTP profile combines several features from the TCP, HTTP and OneConnect profiles into a single profile that optimises network performance.
▪ Low CPU utilisation – The Fast HTTP profile is designed to reduce system CPU usage.
▪ Low latency – Due to the optimisation features, you can expect low latency.

OneConnect is a feature that minimises server-side connections by re-using previously established connections for subsequent client requests. Rather than closing an idle connection to a real server (Pool Member) and reopening a new one for the next client request that gets load balanced to that server, the connection is maintained and re-used, within user configurable limits.

Limitations of the Fast HTTP Profile

When it comes to the Fast HTTP profile, the limitations exceed the advantages:

▪ Requirement of Source Address Translation (SNAT) – When using the Fast HTTP profile, you will be required to translate the client source IP address using what is known as SNAT. This could mean that you will be limited to 65,536 connections (this can theoretically be more than 65,535 connections as long as each socket pair is unique).

▪ The Fast HTTP profile is not compatible with the following features:
  o PVA acceleration
  o Virtual server authentication
  o State mirroring
  o HTTP pipelining
  o TCP optimisations
  o IPv6 support
  o SSL offload
  o Compression
  o Caching

▪ The Fast HTTP profile will only support insertion of static text HTTP headers – Anything more is considered to be an unnecessary performance hit for the Fast HTTP profile. If you need to perform iRule variable extension for the HTTP Header Insert field, you need to use a standard HTTP profile.

▪ Limited iRule support – As we mentioned in the previous point, there are iRule restrictions for the Fast HTTP profile. The Fast HTTP profile only has iRule support for L4, a subset of HTTP headers and pool/pool member selection.

▪ Out-of-order packets will be dropped – The Fast HTTP profile will drop TCP packets that are received out of order and that contain HTTP headers. This is because, when using a Fast HTTP profile, the BIG-IP system needs to read the HTTP headers in the proper order.

The Fast HTTP profile accomplishes its performance by operating on a packet-by-packet basis, meaning that it does not use the Full Proxy Architecture. It combines this packet-by-packet operation with SNAT and OneConnect. The profile is also useful for clients still using HTTP 1.0. As you might already know, HTTP 1.0 does not use Keep-Alive headers, meaning that each connection will be closed once the client has received the object it requested. It does this by sending a Connection: close header after each request. With the OneConnect feature, the Fast HTTP profile will change this header to Xonnection: close, which will keep the connection open to the server.
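For reference, a Performance HTTP virtual server can be created from tmsh by attaching the default fasthttp profile; a minimal sketch with assumed names and addresses, and SNAT Automap added since the Fast HTTP profile requires source address translation:

# Performance HTTP virtual server using the default fasthttp profile and SNAT Automap
tmsh create ltm virtual vs_fasthttp destination 10.10.1.102:80 ip-protocol tcp profiles add { fasthttp } pool http_pool source-address-translation { type automap }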

Connection Setup with a Performance HTTP Virtual Server

The first connection to the Performance HTTP virtual server will be intercepted by TMM in order to examine the HTTP headers contained in the packet and process iRules (if configured). However, subsequent packets will instead be handled directly in hardware. When a client makes a connection to a Performance HTTP virtual server, if there is an existing server-side connection to the pool member that is currently in an idle state, the BIG-IP device will mark it as non-idle and send the client request over that connection. This improves performance as the overhead of establishing a new connection is avoided.

Performance HTTP Virtual Server With an Existing Idle Server-Side Connection

In the following example, a client establishes a connection to the BIG-IP device and there is currently a connection to a pool member which is in an idle state. Therefore, the BIG-IP device reuses this connection and sends the client's request through to the pool member using the existing connection. This is demonstrated in the following diagram.



If there are no idle server-side connections, the BIG-IP system will create a new connection and send the request over this connection. This is demonstrated in the following diagram:





Forwarding IP Virtual Server

The Forwarding IP virtual server also uses the FastL4 profile, which means that it can also utilise the PVA Acceleration Chip. This type simply forwards packets on towards the destination IP address in the request, using the BIG-IP's routing table to determine the next hop, which is why you do not assign a default pool like you do with a Performance L4 virtual server. The Forwarding IP virtual server also operates on a packet-by-packet basis. This means that the virtual server will handle the traffic just like a router. However, traffic will still be stateful.

Connection Setup with a Forwarding IP Virtual Server

The first SYN packet that is sent from the client to the BIG-IP device will simply be forwarded to the IP address or network that is configured on the virtual server. Assuming the node responds to that packet, the BIG-IP device will simply forward it back to the client. This is demonstrated in the following diagram:



Forwarding Layer 2 Virtual Server

The Forwarding Layer 2 virtual server also uses the FastL4 profile, which means that it can utilise the ePVA Acceleration Chip. This virtual server simply forwards packets based on the destination layer 2 MAC address, which means that the virtual server does not have a Default Pool assigned. The virtual server shares the same IP address as a node in the corresponding VLAN. Therefore, you will have to define a VLAN group that includes the VLAN where the node resides prior to creating the Forwarding Layer 2 virtual server. It is also important to disable the virtual server on the VLAN where the node resides in order to avoid IP address conflicts. When a client sends the initial SYN request to the IP address the virtual server is mimicking, the LTM passes it to the node on the associated VLAN based on the routing decision. The source MAC address is preserved and the destination MAC address is changed based on routing. The Forwarding Layer 2 virtual server also operates in a packet-by-packet manner.

Connection Setup with a Forwarding Layer 2 Virtual Server

The first SYN packet that is sent from the client to the BIG-IP device will simply be forwarded on to the node that resides in the configured VLAN. The node will respond to that packet and the BIG-IP device will forward it back to the client. This is demonstrated in the following diagram:





Reject Virtual Server

A Reject Virtual Server will immediately reject all IP traffic that is destined to it. When the BIG-IP device receives a SYN packet from a client that matches the Reject Virtual Server, the BIG-IP device will close the connection and send back a TCP reset to the client. This is demonstrated in the following diagram:

Some people confuse the Reject Virtual Server with how some firewalls handle requests. For some firewalls, a Reject rule takes precedence over all other rules. This does not apply to the Reject Virtual Server. The order of precedence is exactly the same as with any other virtual server and that is, the most specific virtual server will be the one to receive the traffic. This is important to remember.
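For reference, a Reject virtual server is created from tmsh with the reject keyword; a minimal sketch with an assumed name and address:

# Reject virtual server: resets TCP connection attempts to this address and port
tmsh create ltm virtual vs_reject destination 10.10.1.103:80 reject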

DHCP Relay Virtual Server

The function of the DHCP Relay Virtual Server is, as the name implies, relaying DHCP traffic. Using this virtual server, the BIG-IP device will listen for DHCP broadcast messages on the client/source subnet and then relay this traffic (as unicast) to the configured DHCP server(s) on a different subnet. Once the DHCP server has received the traffic, it will reply back to the MAC address of the BIG-IP device which will, in turn, respond back to the client. This virtual server helps with the common problem of having multiple subnets but only one (or more) DHCP server(s) on just one subnet.



Stateless Virtual Server

This special purpose virtual server is used to address the requirements of one-way UDP traffic. This type will accept traffic matching its own IP address and load balance to pool members without bothering to match packets to pre-existing connections. Any new connection that is created will be immediately removed from the connection table. This virtual server is useful for non-stateful applications that require very high throughput. One example would be DNS traffic. You configure a virtual server to listen for DNS traffic and load balance these requests to a pool of DNS servers. The DNS servers do not use the BIG-IP system as their default gateway and can therefore respond directly back to the clients, thus eliminating the need for the BIG-IP system to keep track of the connection.

Internal Virtual Server

Internal Virtual Servers were introduced in BIG-IP version 11.3. As the name implies, these are used for internal purposes. One common use of an internal virtual server is to send traffic to ICAP servers in order for content to be scanned for viruses. ICAP stands for Internet Content Adaptation Protocol and is a lightweight, HTTP-like protocol that adds additional features to the HTTP protocol. The client transactions that pass through the device will be sent to ICAP servers that can have specific functions. One of the most common is virus scanning, but it can also be content translation, language translation or content filtering. For instance, a client requests a specific file that passes through the device. This file will then be sent to an ICAP server where it will be scanned for malicious code. If it passes the check, it will be forwarded to the client. If it fails, then an error can be sent to the client.

Other uses include:

▪ Advertising insertion or content transformation.
▪ Video adaptation and optimisation.
▪ Web content filtering.

Internal virtual servers are used by other configured standard virtual servers and this is by design. This is achieved by creating either a Request Adapt profile (client requests) or a Response Adapt profile (server responses). In the request/response adapt profile, you specify the Internal virtual server that you have created before-hand. This is specified in the following diagram:



Once the adapt profile has been created, it needs to be assigned to the standard virtual server.

Remember that if you would like an internal virtual server to process ICAP traffic, you will need to assign an ICAP profile. The client will establish a connection to the standard virtual server. The standard virtual server will send the traffic to the internal virtual server by using the request adapt profile. The internal virtual server sends the request to its pool members and receives a reply back with the verdict. The verdict in an ICAP response is the results of the anti-virus scan. The verdict could, for instance, state: verdict=virus, which indicates that the file is infected.

This reply (containing the verdict) is then forwarded back to the standard virtual server. If preferred, it is possible to configure persistence on the internal virtual server making sure that the same pool member is used for subsequent requests.



Message Routing Virtual Server

This virtual server is used to load balance SIP traffic using a SIP application profile in accordance with a SIP session profile and a SIP routing profile. SIP stands for Session Initiation Protocol and is a protocol designed to handle multimedia communication sessions. Some examples are voice and video calls as well as instant messaging.

Chapter Summary

▪ The BIG-IP system is a default deny device, meaning that it will only accept traffic if it is configured to do so.

▪ In order for the BIG-IP system to process traffic, it needs to be configured with listeners. Virtual Servers are one type of listener.

▪ Nodes are objects that represent the real servers or other hosts on your network. Nodes are only represented by an IP address.

▪ Pool members are conceptually the actual application service that you load balance traffic to. Pool members are nodes and an associated service port (TCP or UDP listening port) that are added to a pool and have traffic load balanced across them.

▪ A Pool is very similar to a server farm or cluster. It is a logical object that contains one or more pool members across which traffic is load balanced.

▪ Virtual servers are made up of a virtual IP address and a service port on which the BIG-IP system listens and receives traffic.



Chapter Review

1. How will the BIG-IP device handle the connection setup when the virtual server is configured as a Standard Virtual Server with layer 7 functionality?
   a. It will await the first data packet from the client before establishing a connection to the pool member.
   b. It will establish a connection with the pool member when the client-side TCP three-way handshake is complete.
   c. It will simply forward all packets to the pool member.
   d. The client will be redirected to the pool member directly.

2. What virtual servers are by default utilising the PVA Acceleration Chip?
   a. The Standard Virtual Server.
   b. The DHCP Relay Virtual Server.
   c. The Performance Layer 4 Virtual Server.
   d. The Performance HTTP Virtual Server.
   e. The Forwarding IP Virtual Server.
   f. The Forwarding Layer 2 Virtual Server.

3. When a client is trying to establish a connection with a Reject Virtual Server, how will the BIG-IP system handle the request?
   a. The BIG-IP will silently drop the request.
   b. The BIG-IP will send back an HTTP response containing a 401 Access Denied.
   c. The BIG-IP will send back a RST packet to the client.
   d. The BIG-IP will send back an HTTP response containing a 403 Forbidden.

4. What setting do you have to re-configure by default in order to capture all packets when running a packet capture on a Performance Layer 4 virtual server?
   a. You cannot perform packet captures on a Performance Layer 4 virtual server.
   b. Change the PVA Acceleration Mode to None.
   c. Add the tcpdump flag -s0 in order to capture all packets.
   d. Set the Source Address Translation setting to Automap.





Chapter Review: Answers 1. How will the BIG-IP device handle the connection setup when the virtual server is configured as Standard Virtual Server with layer 7 functionality? a. b. c. d.

It will await the first data packet from the client before establishing a connection to the pool member. It will establish a connection with the pool member when the client-side TCP-Three-Way handshake is complete. It will simply forward all packets to the pool member. The client will be redirected to the pool member directly.

The correct answer is: a The BIG-IP device will first establish a connection with the client just like with a TCP profile, however, it will also require the client to send at least one application data packet before it initiates a connection with the server. 2. What virtual servers are by default utilising the PVA Acceleration Chip? a. b. c. d. e. f.

The Standard Virtual Server. The DHCP Relay Virtual Server. The Performance Layer 4 Virtual Server. The Performance HTTP Virtual Server. The Forwarding IP Virtual Server. The Forwarding Layer 2 Virtual Server.

The correct answers are: c, e and f

By default, the Fast L4 Profile will enable the PVA Acceleration Chip. The Virtual Server types that use this profile are Performance L4, Forwarding L2 and Forwarding IP. This means that these virtual servers will, by default, use the PVA chip.

3. When a client is trying to establish a connection with a Reject Virtual Server, how will the BIG-IP system handle the request?

a. The BIG-IP will silently drop the request.
b. The BIG-IP will send back an HTTP response containing a 401 Access Denied.
c. The BIG-IP will send back a RST packet to the client.
d. The BIG-IP will send back an HTTP response containing a 403 Forbidden.

The correct answer is: c A Reject Virtual Server will immediately reject all traffic that is destined to it. When the BIG-IP device receives a SYN packet from a client that matches the Reject Virtual Server the BIG-IP device will close the connection and send back a TCP RST to the client.



4. Which default setting do you have to re-configure in order to capture all packets when running a packet capture on a Performance Layer 4 virtual server?

a. You cannot perform packet captures on a Performance Layer 4 virtual server.
b. Change the PVA Acceleration Mode to None.
c. Add the tcpdump flag -s0 in order to capture all packets.
d. Set the Source Address Translation setting to Automap.

The correct answer is: b

If you are performing a packet capture on a Performance Layer 4, Forwarding Layer 2 or Forwarding IP Virtual Server, you will have to turn off the PVA Acceleration in order to capture all of the packets that are passing through the BIG-IP device.



6. Load Balancing Methods

The BIG-IP system offers many different load balancing methods, and choosing the right one can sometimes be challenging. A Load Balancing method dictates how traffic and connections are distributed across Pool Members and is therefore configured at the Pool level. There are two different high-order types of load balancing methods:

▪ Static
▪ Dynamic

Later, we’ll also discuss Priority Group Activation, which gives you the ability to create multiple prioritised ‘groups’ of pool members for redundancy. The primary group is used during normal operations and if its active members fall below a configured threshold, secondary groups will activate until the minimum number of active pool members is reached. Lastly, we’ll cover the Fallback Host which is used when no pool members are available to serve a client’s request.

Member vs. Node

When deciding which load balancing method you would like to use for a pool, there is a further fundamental concept that you will need to consider. Most load balancing algorithms can operate at either the pool member or the node level. When operating at the pool member level, the metrics (e.g. connections and sessions) used for dynamic load balancing decisions are only considered in the context of that pool. If any of the nodes that are assigned to the pool (thus becoming pool members) are also assigned to another pool, the metrics related to that other pool are ignored. For instance, if server_one is part of http_pool, serving HTTP, it may have 20 active connections on port 80. However, if it is also part of ftp_pool, serving FTP, it may have another 100 connections on port 21, giving it 120 connections in total. If you load balance using Least Connections at the pool member level for http_pool and the other servers in that pool each have 50 connections, server_one has the fewest connections in that pool and will receive the next 30 new connections, since it only has 20 active HTTP connections. The other 100 FTP connections the node has in ftp_pool are not considered.

This is the default behaviour.

If you instead load balance at the node level, the FTP connections are taken into consideration, which creates a more even load. Since an end-server can host multiple services, the number of connections that each server has can differ drastically. For instance, you have 3 servers that are hosting both HTTP and FTP services. You have created two different pools, one for FTP and one for HTTP. When you are load balancing traffic to these servers, do you want to make load balancing decisions based on a pool member (service) or the node? We explain this in the following example using the load balancing method Least Connections:



If we base our load balancing decision on the pool member, then server 1 will receive the next HTTP request since it currently only has 19 active connections. However, if we base our load balancing decision on the node, then server 3 will receive the next HTTP request because it has 23 active HTTP connections and only 3 active FTP connections. This means that this is the server with the least connections in total. Another example where this is critical is the Ratio load balancing method. Since you can specify ratio on both pool member level and the node level, you will have to be very clear on how you want to configure your BIG-IP device. Remember that the member’s ratio is configured together with the pool while the node ratio configuration is a global setting and might affect many different pools. Keep this in mind when configuring your load balancing method.
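One practical way to see this difference, assuming you have shell access to the BIG-IP, is to compare the statistics tmsh keeps per pool member with those it keeps per node. The pool name and node address below are only placeholders:

(tmos)# show ltm pool http_pool members
(tmos)# show ltm node 10.10.20.11

The first command shows connection counters for the members of that one pool only, while the second shows counters for the node across every pool it belongs to, which is exactly the distinction the member and node based variants of the load balancing methods rely on.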

Static Load-Balancing

Static Load-Balancing methods distribute traffic based on a pre-defined pattern and do not take end-server performance (or any other metric) into consideration. However, the BIG-IP device will still use health monitor data when making a load balancing decision and a node/pool member which is marked as offline will not receive any traffic.



There are two static load balancing methods available:

▪ Round Robin
▪ Ratio (or Weighted Round Robin)

Round Robin

This load balancing method is very simplistic in its nature and distributes connections evenly between all available pool members. This load balancing method is suitable if all servers have equal performance. The following diagram shows this method in operation:

Ratio

This load balancing method is also called Weighted Round Robin and it distributes connections based on a user-defined ratio. This can be useful when the servers have different performance capabilities. The load balancing method uses the ratio to load balance the connections in an unequally circular round robin fashion. The higher the ratio, the more connections a server will receive. To give you an example of this, if your pool has two fast servers and two slow servers, you could assign the fast servers a ratio of 3, which means they will receive 3 times the traffic of the other servers. The ratio would end up looking like this, 3:3:1:1.

The following diagram shows this method in operation:



The ratio setting can be configured under the pool member or node level. This means that when the pool makes a load balancing decision, it will either look at the amount of connections the pool members have (service/port level) or the number of the connections the nodes have (all services running on that node). This is because a node can be a pool member in multiple pools and you might want to consider the load of the entire server rather than just the service you are load balancing. The Ratio Load Balancing method can also be based on sessions which is called Ratio (Session). We discuss Ratio (Session) later in this chapter.

In order for ratio to work, you will need to configure ratio values either on the pool member or the node. If the ratio value is left at its default value of 1, the ratio load balancing method will operate in a round robin fashion. In the following picture, you can see where ratio is configured under the pool member.
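As a rough tmsh equivalent of the screenshots, the ratio can be set on a pool member or on the node itself, and the pool can then be told which variant to use. The names and addresses below are placeholders:

(tmos)# modify ltm pool http_pool members modify { 10.10.20.11:80 { ratio 3 } }
(tmos)# modify ltm node 10.10.20.11 ratio 3
(tmos)# modify ltm pool http_pool load-balancing-mode ratio-member

Swapping ratio-member for ratio-node makes the pool consult the node-level ratio instead, which, as noted above, is a global setting that can affect several pools.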



And in the following picture, you can see where it is configured under the node level:



Dynamic Load-Balancing

The Dynamic Load-Balancing methods distribute traffic to each pool member based on the servers’ performance or some other metric(s). When each load balancing decision is made, the BIG-IP device will evaluate the current metric(s) in order to select a pool member. The BIG-IP device will use data that it has collected on its own, for example its own connection table or persistence table. There is currently only one load balancing method that actively checks the performance of the server (CPU, Memory, HDD etc.) and that is Dynamic Ratio. The dynamic load balancing methods have a great advantage over static load balancing as they load balance traffic based on actual data rather than specified numbers. This is an advantage because the load on the servers can vary during certain time periods. To give you an example, imagine that you are load balancing traffic to multiple FTP servers hosting different sized files. Some files are small, which take a short amount of time to download, and some files are much larger, which take a much longer time to download. Using round robin as a load balancing algorithm in this example might cause several users requesting large files to end up on the same pool member, which will create a very uneven load since those connections take a longer time to finish. When using a dynamic load balancing method like least connections, the BIG-IP device will be able to monitor the number of connections each pool member/node has and make sure the load is even. The available dynamic load balancing methods are:

▪ Least Connections
▪ Fastest
▪ Least Sessions
▪ Ratio Sessions
▪ Ratio Least Connections
▪ Weighted Least Connections
▪ Observed
▪ Predictive
▪ Dynamic Ratio

Least Connections

This method distributes connections based on the current active connection count between the BIG-IP device and the end-servers. It does not take into account the connections the server might have with other systems. If all the servers have the same number of connections, the BIG-IP device will distribute connections in a round robin fashion. This load balancing method is useful for long-lived connections like FTP or SSH and is the one most commonly used. This load balancing method is commonly used for HTTP/HTTPS as well, but the connection count will fluctuate a lot since HTTP connections are short-lived.



The following diagram shows this method in operation:

It is important to remember that only the active connections count. Idle connections, such as those kept open when OneConnect is used, are not taken into account when using this load balancing method.
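If you prefer the command line, the member and node variants of this method correspond to two different load-balancing-mode values in tmsh; the pool name below is just a placeholder:

(tmos)# modify ltm pool http_pool load-balancing-mode least-connections-member
(tmos)# modify ltm pool http_pool load-balancing-mode least-connections-node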

Fastest

This method is very similar to least connections. The difference is that fastest keeps track of the number of outstanding layer 7 requests. An outstanding request is one where the BIG-IP has sent an application layer protocol request to the server but has yet to receive a response back. The BIG-IP device is waiting for the server to respond and keeps track of all of these connections. When the BIG-IP device receives a new request, the node or pool member with the least outstanding layer 7 connections will receive the request. If the Virtual Server does not have a TCP and a layer 7 profile configured, the BIG-IP device will automatically fall back to the Least Connections load balancing method. The advantage of this load balancing method is that it takes into account that servers might have different response times depending on the load previous requests have generated. Just like the Least Connections load balancing algorithm, the Fastest load balancing algorithm can load balance based on the pool member level, which is referred to as Fastest (Application), or the node level, which is referred to as Fastest (Node).



This is used in order to load balance traffic based on the load of the entire end-server or just the service that is running on the end-server. For this load balancing algorithm, F5 has chosen to bypass the standard naming convention and replace Member with Application instead.

Least Sessions

This method distributes connections based on the number of persistence records that are stored in the persistence table. When a new connection is sent to the BIG-IP system, the persistence table is consulted for this information at the time the load balancing decision is made. The persistence method used must be one that stores data in the persistence table (i.e. not cookie insert). The server with the least persistence records will receive the traffic. If the Virtual Server is configured to use cookie persistence then the BIG-IP device will automatically fall back to the round robin load balancing method. This load balancing method cannot be modified to make load balancing decisions based upon pool member and node level like least connections can. This load balancing algorithm will simply make load balancing decisions based on the persistence records stored in the persistence table.

Ratio Sessions

This method is a hybrid between the Ratio and the Least Sessions methods - it should probably be called Weighted Least Sessions. It operates very similarly to the Least Sessions method except that you need to assign ratios to each server. In order to understand this load balancing method more easily, please refer to the following diagram:



If we review the data in the diagram, we can see that Server 1 has 10 persistence records and that Server 2 has 20 persistence records and that Server 3 has 25 persistence records. As we discussed in the Ratio section, the ratio adjusts the amount of connections each server will receive. In our case, Server 2 will receive twice as many connections as Server 1 and Server 3 will receive thrice as many connections as Server 1. We can currently see that Server 1 and Server 2 have the amount of connections that they need in order to fulfil the Ratio (Server 2 has twice as many connections as Server 1), but Server 3 does not have thrice as many connections as Server 1. Therefore, the next incoming request (and the four after that) will be sent to Server 3.

This will continue until the Current Persistence Records are equal to the Ratio. Therefore, Server 3 will keep receiving connections until its currently active persistence records reach 30. After that, Server 1 will receive new connections again. Think of this load balancing method as the standard Ratio load balancing, except that it instead uses the records in the persistence table rather than the ones in the connection table. This load balancing method is suitable for use if you have servers with dissimilar resource capabilities. As with the Least Sessions method, the persistence method used must be one that stores data in the persistence table (i.e. not cookie insert).



Ratio Least Connections

This method is another hybrid, this time between Ratio and Least Connections. Using ratios, you build up different tiers of pool members where one or several tiers receive more connections than the others. After the BIG-IP has determined which Ratio tier will receive new connections, it will look at the current connections within that tier. Please refer to the following diagram for further clarification:

In the diagram, we can see that Ratio tier 1 has the number of connections it needs to fulfil the Ratio. Ratio tier 2, however, does not fulfil it, as it does not have twice as many connections as ratio tier 1. Therefore, a new incoming connection will be sent to a server in ratio tier 2. Once the incoming connection has been sent to ratio tier 2, the BIG-IP device will review the current connection count within that tier. We can see that Server 4 has fewer connections than Server 3, so the BIG-IP device will send the new incoming request to Server 4.



As with any other Dynamic Load Balancing method, if the performance of the servers changes (for example connection count or the number of persistence records), the BIG-IP device will know this and when a new connection arrives at the BIG-IP system, it will load balance the request based upon this new data. For instance, in the next round, we can see that Server 1 has decreased its current connections to 5. Therefore, when a new incoming connection reaches the BIG-IP device, it will instead send the traffic to Ratio tier 1, as Ratio tier 2 now has more than twice as many connections as Ratio tier 1.



Again, the BIG-IP device examines the current active connections within tier 1 and notices that Server 1 has fewer connections than Server 2, so the BIG-IP device sends the request to Server 1.



In order for this load balancing method to work, you will need to configure a Ratio value on either the pool member or node which we covered earlier in the chapter. Just like with the Ratio method, this one is suitable for use when the servers have dissimilar performance.

Weighted Least Connections

This method distributes connections based on the lowest percentage of each pool member’s connection capacity. This capacity is based on the pool member’s current connection count compared to its configured maximum connection limit. This means that you must configure a connection limit on all pool members in the pool or node in question. If a pool member or node has reached its connection limit, the BIG-IP device will mark it as Unavailable and it will not receive any new connections. As an example, if the pool member Server 1 has a current connection count of 30 and has a maximum connection limit of 100, the current capacity of the pool member is 30%. If the pool member Server 2 has a current connection count of 30 and has a maximum connection limit of 200, the current capacity of the pool member is 15%. New connections are sent to the server with the lowest capacity percentage. This load balancing method can make its decision based upon the pool member or node level. The following diagram shows an example of the weighted least connections load balancing method:
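As a configuration sketch (the pool name, addresses and limits below are placeholders, not part of the example), the connection limits this method depends on are set per pool member, and the pool is then switched to the weighted variant:

(tmos)# modify ltm pool http_pool members modify { 10.10.20.11:80 { connection-limit 100 } 10.10.20.12:80 { connection-limit 200 } }
(tmos)# modify ltm pool http_pool load-balancing-mode weighted-least-connections-member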



Observed

This method distributes connections based on the number of current active layer 4 connections each pool member or node has. Every second, the BIG-IP looks at the current active connection count and assigns a ratio to each pool member or node. The pool member or node with the least amount of connections will receive a higher ratio, and the one with the most connections will receive a lower ratio. When the BIG-IP system receives a new connection, it will load balance traffic based upon these ratio values. This load balancing method can make load balancing decisions based upon the pool member or node level.

Predictive

This method is very similar to the observed load balancing method. However, instead of looking only at the current active connections, it compares the current active connection count with the previous active connection count. This is also known as the delta. If the BIG-IP system detects that a pool member has received an increase in current active connections compared to the previous second (the trend is going up), it will decrease the ratio. If the pool member has instead received a decrease in current active connections (the trend is going down), the BIG-IP will increase the ratio. Pool members whose connection count has neither increased nor decreased will keep the same ratio. When the BIG-IP system receives a new connection, it will load balance traffic based upon these ratio values. This load balancing method can make load balancing decisions based upon the pool member or node level.

Dynamic Ratio

The dynamic ratio load balancing method distributes connections based on information the BIG-IP has gathered from the server itself. In order to gather this data, we need to assign a Performance Monitor on the node. In order to gather the information from the end-servers, you will need to install specific software with which the BIG-IP system communicates by using the Performance Monitor. Presently, these are the software platforms that are supported:

▪ RealNetworks® RealSystem® Server platforms
▪ Windows platforms using Windows Management Instrumentation (WMI)
▪ Any server running an appropriate SNMP agent

For instance, you can configure the end-server to run an SNMP agent that the BIG-IP system communicates with to retrieve the data. Once this is in place you will need to assign a Performance Monitor on the node configured on the BIG-IP. In order to retrieve the data from the SNMP agent you will need to assign an SNMP Performance Monitor and these are called snmp_dca or snmp_dca_base. The difference between snmp_dca and snmp_dca_base is that snmp_dca comes with predefined variables to collect SNMP information (OIDs) for CPU, memory, disk usage etc. whereas snmp_dca_base has no predefined SNMP variables.

This monitor will then gather the relevant information from the server and assign each node/pool member a certain weight. The higher the weight, the more connections it will receive.



In order to calculate the weight, the BIG-IP system uses the following equation:

weight = (Number of Nodes in Pool)^(Mem Coefficient × (Mem Threshold - Mem Utilisation) / Mem Threshold)
       + (Number of Nodes in Pool)^(CPU Coefficient × (CPU Threshold - CPU Utilisation) / CPU Threshold)
       + (Number of Nodes in Pool)^(Disk Coefficient × (Disk Threshold - Disk Utilisation) / Disk Threshold)

Each software platform has its own Performance Monitor that needs to be assigned when you want to load balance using the Dynamic Ratio method. The collection of Performance Monitors is displayed below:

Software Platform                              Performance Monitor
RealNetworks RealSystem Server                 real_server
Windows Management Instrumentation (WMI)       wmi
SNMP                                           snmp_dca or snmp_dca_base

This load balancing method can make its load balancing decisions based upon the pool member or node level.
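Putting the pieces together, a hedged tmsh sketch for the SNMP variant might look like the following; the monitor, node and pool names are hypothetical, and in practice you would also tune the coefficients and thresholds mentioned above:

(tmos)# create ltm monitor snmp-dca dyn_ratio_snmp
(tmos)# modify ltm node 10.10.20.11 monitor dyn_ratio_snmp
(tmos)# modify ltm pool http_pool load-balancing-mode dynamic-ratio-node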

Priority Group Activation

Priority Group Activation gives you the ability to create multiple prioritised ‘groups’ of pool members for redundancy. The primary group is used during normal operations and if the primary group’s active members fall below a configured threshold, secondary groups will activate until the minimum number of active pool members is reached. There are two configuration parameters that must be considered when configuring this feature: what priority group each pool member is part of and the minimum number of active pool members. First, you will have to assign a priority value to each pool member (thus assigning them to a priority group). The default is 0, which is the lowest priority group. The group with the highest number has the highest priority and is the primary group. The second thing you will have to configure is the minimum number of pool members that must be active before any secondary priority group is activated. Existing connections to members of a priority group are not moved when a lower priority group is activated, but no new connections are sent to that pool’s members until its minimum number of active members is achieved. The reverse is also true; when a higher priority group is activated, new connections are sent to it but any existing connections to members of the lower priority group are not moved. For this reason, connections can rapidly and unintentionally become spread over members of two or more groups if pool members are rapidly going up and down; even more so if long lived connections are in use.
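In tmsh, these two parameters map to the pool's min-active-members setting and each member's priority-group value. The following is only an illustrative sketch with placeholder names and addresses:

(tmos)# modify ltm pool http_pool min-active-members 2
(tmos)# modify ltm pool http_pool members modify { 10.10.20.11:80 { priority-group 10 } 10.10.20.12:80 { priority-group 10 } 10.10.20.13:80 { priority-group 5 } 10.10.20.14:80 { priority-group 5 } }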

To provide you with an example, in the following diagram you can see that if the priority group has less than 2 available pool members, the next priority group will be activated.



In our example, we have four servers. In the following diagram, you can see that Server 1 and Server 2 are in the same Priority Group with a priority value of 10. Since it has the highest value, it has the highest priority and is the primary group. The backup Priority Group has a value of 5, which Server 3 and Server 4 belong to.

Since both Server 1 and Server 2 are online and available, these are the only servers that are receiving connections and traffic at this point.



In the configuration above, if either Server 1 or Server 2 go offline, the Priority Group Activation configuration of less than 2 available pool members will be true which means that the next priority group will be activated. In this case it is priority group 5. When a priority group is activated, all of its members will be activated and receive new connections. If you have more than two priority groups, this process carries on until the Priority Group Activation criteria is met or until there are no more active members available in any group.

The backup priority group will remain active until the Priority Group Activation criteria are once again met. In this case, the primary group will need to have at least two available members. So, this means that when Server 2 is once again online, priority group 5 will be deactivated and become the backup group. The current active connections on Server 3 and Server 4 will be allowed to complete but no new connections will be sent to these servers once the requirements are met.



Priority Group Activation is by default turned off. Connections are distributed using the load balancing method configured for the pool. Priority Group Activation has several purposes. One purpose is that the primary group is configured with the content that is supposed to be delivered to the users and the backup group contains apology servers. An apology server or “sorry server” is a server that contains an apology to the user and information regarding the current status of the website. So, if a company is experiencing a partial outage, some users will be able to access the content while some users will be redirected to the apology servers. Unless you are managing persistence using iRules, you should never use persistence while having apology servers in the same pool. This is because the persistence record might redirect the users to the apology servers even after the servers have come online again. Another purpose of Priority Group Activation is segregation. Imagine having 6 servers and all servers are running both HTTP and FTP with the same content. You create the pool http_pool and assign all 6 servers to this pool since all of them have the same HTTP content. You would like to split the servers' primary purposes so that servers 1 to 3 mainly handle HTTP traffic and servers 4 to 6 mainly handle FTP traffic. Therefore, you assign servers 1 to 3 to the Priority Group with the highest number, in our case, 10. The servers from 4 to 6 will be assigned to the backup Priority Group with a lower number, in our case 5. This means that servers 1 to 3 will be the only ones that receive the HTTP traffic. But if any of the servers from 1 to 3 experience any problems, the servers from 4 to 6 will also assist in handling the HTTP traffic.



The same method will be used for the ftp_pool, except that servers 4 to 6 will be assigned to the Priority Group with the highest value and the servers from 1 to 3 will be assigned to the Priority Group with the lower value. That way, you will be able to have 6 identical servers, but you will be able to split their primary purpose. The following diagram describes this concept:

All servers are identical; they are all running both HTTP and FTP services. However, we have decided to split the primary services among the servers. Servers 1 to 3 will be responsible for HTTP traffic and servers 4 to 6 for FTP traffic. We have created two pools, the http_pool and the ftp_pool. All servers are assigned to each pool, but they are contained in different Priority Groups. This causes the traffic to only be sent to the pool members in the primary group, but in case something happens to one of the servers, the backup group will be activated and assist with handling connections. In the following scenario, both server 2 and server 3 have gone offline and this has caused the backup priority group to be activated and start handling connections. This means that servers 1, 4, 5 and 6 handle HTTP traffic. Nothing changes regarding the FTP traffic; servers 4-6 are still the only servers that handle this traffic.



Once Server 2 and Server 3 are back online again, the backup priority group (priority 5) will fall back to standby mode, allowing the existing connections to complete.

FallBack Host

So, what happens if there are no available pool members, meaning that there are no servers available to forward the client’s request to? If all pool members are down, then the pool is down. If the pool is down, the virtual server is down and the client will not receive a response. Instead of just making the requests time-out, we can configure the BIG-IP device to send back an HTTP redirect to an apology server. This function is known as FallBack Host and you configure this in the HTTP profile. An apology server will not solve the issue, but it will at least inform the clients about the outage and perhaps give an estimate on how long it takes before the problem is resolved. The apology server is not part of the BIG-IP system and is something that your development team needs to configure.
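A minimal tmsh sketch of this, assuming a placeholder apology URL, is to create a custom HTTP profile with the fallback host set and attach it to the virtual server:

(tmos)# create ltm profile http http_with_fallback defaults-from http fallback-host http://sorry.example.com
(tmos)# modify ltm virtual vs_http profiles add { http_with_fallback }

Exercise 3.3 later in this chapter walks through the same configuration using the GUI.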

Lab Exercises: Load Balancing

Before Starting with the Lab Exercises

As we have mentioned earlier in the book, the lab environment consists of one Apache Tomcat Server running five virtual hosts that listen on different IP addresses and service ports. The web page is also built upon multiple different objects such as JavaScript files, CSS files and images. All of these objects need to be downloaded in order to build up the site that is being presented in your web browser.



Since each object will be retrieved through its own request, each request will also be load-balanced. Therefore, you might end up with a scenario where the index.html (the page that states which server you land on) is repeatedly retrieved from the same server. This is working as intended and you can see each object being requested using a Firefox plugin called Firebug. This is already pre-installed on your lab client and is enabled by clicking on the icon symbolising the bug and clicking on the Net tab. You might have to enable the Net panel. Once you have done that, perform a refresh of the page to see all of the objects being requested.

This is just to inform you that even though you might land on the same server, the requests are being load balanced to each server which you will confirm during the following exercises.



Exercise 3.1 – Creating Virtual Servers and Pools

Exercise Summary

In this exercise, we’ll create load balancing pools and virtual servers. We’ll also associate the pools with the virtual servers and verify that traffic is passing through the BIG-IP system. In this lab we’ll perform the following:

▪ Create load balancing pools.
▪ Create virtual servers and associate them with the pools we create.
▪ Verify that traffic is passing through the BIG-IP system.

Exercise Prerequisites

Before you start this lab exercise, make sure you have the following:

▪ Network access to the BIG-IP system’s management port.
▪ One or more servers configured on the internal network to which we can load balance traffic. This should already be configured during the Building a Test Lab chapter.

Creating a Pool

1. Open up a browser session to https://192.168.1.245 and log in using the admin credentials.
2. Navigate to Local Traffic > Pools and in the upper left corner click Create.
3. On the Local Traffic > Pools: Pool List > New Pool… page, add the following configuration:

Local Traffic > Pools: Pool List > New Pool…
Configuration
  Configuration: Basic
  Name: http_pool
Resources
  Load Balancing Method: Round Robin
  New Members: Enter the information for each pool member and click Add to add it to the pool member list. Do this for each individual pool member. Use the following information:
    Pool Member 1: Node Name: Blank, Address: 172.16.100.1, Service Port: 80
    Pool Member 2: Node Name: Blank, Address: 172.16.100.2, Service Port: 80
    Pool Member 3: Node Name: Blank, Address: 172.16.100.3, Service Port: 80
  When done the list should look like the following:
    R:1 P:0 C:0 172.16.100.1 172.16.100.1:80
    R:1 P:0 C:0 172.16.100.2 172.16.100.2:80
    R:1 P:0 C:0 172.16.100.3 172.16.100.3:80
When done, click Finished

4. Navigate to Local Traffic > Nodes and notice that the BIG-IP system has automatically created three nodes as a result of the pool you just created.
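For reference, the pool built in steps 1 to 4 could also have been created with a single tmsh command; this is just an equivalent sketch and not part of the exercise steps:

(tmos)# create ltm pool http_pool load-balancing-mode round-robin members add { 172.16.100.1:80 172.16.100.2:80 172.16.100.3:80 }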



Creating a Virtual Server

1. Navigate to Local Traffic > Virtual Servers and in the upper left corner click Create.
2. On the Local Traffic > Virtual Servers: Virtual Server List > New Virtual Server… page, add the following configuration:

Local Traffic > Virtual Servers: Virtual Server List > New Virtual Server…
General Properties
  Name: vs_http
  Type: Standard
  Destination: 10.10.1.100
  Service Port: 80 or select HTTP
Resources
  Default Pool: http_pool
When done, click Finished

Verifying Your Configuration Changes

During this exercise, you will send application traffic through the BIG-IP system as a user and not as an administrator like you have done in the previous exercises. Throughout this book, you will have to change your role between being a BIG-IP administrator and a regular user. Throughout this book, you will also be asked to “hard refresh” your browser session. We do this in order to prevent the web browser from caching any objects and to always obtain the data from the BIG-IP system. In order to perform a hard refresh, hold down Ctrl and press F5. In other words, press Ctrl+F5.

1. Verify that your virtual server and application is working by opening up a new browser session to http://10.10.1.100.
2. Once you have access to the virtual server, perform some “hard refresh” of the site five to ten times in order to generate some traffic. Like mentioned earlier, you perform a hard refresh on most browsers by pressing Ctrl+F5.
3. Verify that traffic is indeed sent through the BIG-IP system using the virtual server and its pool members. You do this by examining the statistics.
4. Navigate to Statistics > Module Statistics > Local Traffic.
5. Once you are at the Statistics > Module Statistics: Local Traffic page, under Statistics Type, select Virtual Servers. View the statistics of the virtual server vs_http and answer the following questions:

Question                                                                Answer
Do you see incoming traffic from the client to the virtual server?
Do you see outgoing traffic from the virtual server to the client?


6. When you have answered the questions, change the Statistics Type from Virtual Servers to Pools. Expand the pool http_pool by clicking the + sign and answer the following questions:

Question                                                                Answer
Did each pool member receive traffic?
Did each pool member manage approximately the same amount of connections? Verify using the Connections – Total.
How many requests did pool member 172.16.100.1 receive?
How many requests did pool member 172.16.100.2 receive?
How many requests did pool member 172.16.100.3 receive?
Would you say that the load balancing algorithm Round Robin worked the way it is supposed to?

Expected Results

You should be able to reach the virtual server and be presented with a web page. This should also be shown in the virtual server statistics. Since we have configured the pool to load balance traffic according to the Round Robin algorithm, each pool member should receive an equal amount of requests. However, do note that this is directly connected to how many times you refreshed your web browser session.

If you did not receive the expected results, please verify the following:

Virtual Server
▪ Verify the statistics, did it get any traffic? Did it receive traffic but did not send a reply back?
▪ Verify the configuration of the virtual server.
  o Is it using port 80?
  o Is it configured with the correct destination address?

Pools
▪ Verify the statistics, was there any traffic sent to the pool members? If not, then verify that the http_pool is associated with the virtual server.
▪ Verify that the pool members are configured with the correct IP addresses.
▪ If traffic is being sent to the pool members that is not being returned, verify the self IP and VLAN configuration.

Creating Another Pool and Virtual Server

Now that you have created one functional application, let’s go ahead and create another. This time we’ll use the same IP address as vs_http but instead use a different port. In the previous exercise, you also created the pool before you created the virtual server but this time we’ll do it while we are creating the virtual server.

1. Navigate to the Local Traffic > Virtual Servers page and in the upper right corner press Create.
2. On the Local Traffic > Virtual Servers: Virtual Server List > New Virtual Server… page, add the following configuration:



Local Traffic > Virtual Servers: Virtual Server List > New Virtual Server…
General Properties
  Name: vs_https
  Type: Standard
  Destination: 10.10.1.100
  Service Port: 443 or select HTTPS
Resources
  Default Pool: Click the + sign (this will launch the New Pool page)

Local Traffic > Pools: Pool List > New Pool...
Configuration
  Name: https_pool
  Load Balancing Method: Ratio (member)
  New Members: Click on the Node List button and use the pull-down menu to select the following members:
    Address: 172.16.100.1, Service Port: 443, Ratio: 1, then click Add
    Address: 172.16.100.2, Service Port: 443, Ratio: 2, then click Add
    Address: 172.16.100.3, Service Port: 443, Ratio: 3, then click Add
When done, click Finished (This will return you back to the New Virtual Server screen)

Local Traffic > Virtual Servers: Virtual Server List > New Virtual Server…
Resources
  Default Pool: https_pool (This will be automatically selected)
When done, click Finished

Verifying Your Configuration Changes

Now we’ll verify that the new virtual server receives traffic by sending client traffic through the BIG-IP system. Again, you will have to change role from being the BIG-IP administrator to being the client trying to access the application.

1. Verify that your virtual server and application is working by opening up a new browser session to https://10.10.1.100.
2. You will be prompted with a certificate error, but this is normal. This certificate is a self-signed certificate created on the Apache Server which will not be validated by the web browser. Accept the certificate; this will load the web page.
3. Once you have loaded the web page, perform some “hard refresh” of the site five to ten times in order to generate some traffic. Like mentioned earlier, you perform a hard refresh on most browsers by pressing Ctrl+F5.
4. Verify that traffic is indeed sent through the BIG-IP system using the virtual server and its pool members. You do this by examining the statistics.
5. Navigate to Statistics > Module Statistics > Local Traffic.
6. Once you are at the Statistics > Module Statistics: Local Traffic page, under Statistics Type select Virtual Servers. View the statistics of the virtual server vs_https and answer the following questions:

Question                                                                Answer
Do you see incoming traffic from the client to the virtual server?
Do you see outgoing traffic from the virtual server to the client?

7. When you have answered the questions, change the Statistics Type from Virtual Servers to Pools. Expand the pool https_pool by clicking the + sign and answer the following questions:

Question                                                                Answer
Did each pool member receive traffic?
Did each pool member manage approximately the same amount of connections? If not, why? Verify using the Total Connections.
How many requests did pool member 172.16.100.1 receive?
How many requests did pool member 172.16.100.2 receive?
How many requests did pool member 172.16.100.3 receive?
Would you say that the load balancing algorithm Ratio (member) worked the way it is supposed to?

Expected Results

You should be able to reach the virtual server and be presented with a web page. This should also be shown in the virtual server statistics. Since we have configured the pool to load balance traffic according to the Ratio (member) algorithm, pool member 172.16.100.2 should receive twice (x2) as many connections as 172.16.100.1 and 172.16.100.3 should receive thrice (x3) as many connections as 172.16.100.1. However, do note that this is directly connected to how many times you refreshed your web browser session. In general, you should see results looking something like:

▪ 172.16.100.1:443 has 30 connections
▪ 172.16.100.2:443 has 60 connections
▪ 172.16.100.3:443 has 90 connections

If you did not receive the expected results, please verify the following:

Virtual Server
▪ Verify the statistics, did it get any traffic? Did it receive traffic but did not send a reply back?
▪ Verify the configuration of the virtual server.
  o Is it using port 443?
  o Is it configured with the correct destination address?

Pools
▪ Verify the statistics, was there any traffic sent to the pool members? If not, then verify that the https_pool is associated with the virtual server.
▪ Verify that the pool members are configured with the correct IP addresses.
▪ If the traffic was not load balanced according to the results above, verify that the Pool is configured to use the Load Balancing Method Ratio (member) under Local Traffic > Pools: Pool List > https_pool.
▪ If the load balancing algorithm is indeed Ratio (member), then verify that each pool member has the correct Ratio under Local Traffic > Pools: Pool List > https_pool. If this is configured wrong, then you change this per pool member. Click on each pool member that has the incorrect value and change it.
▪ If traffic is being sent to the pool members that is not being returned, verify the self IP and VLAN configuration.

Verify Your Configuration Changes Using tmsh

You can also use tmsh in order to view statistics.

1. Launch a terminal client such as PuTTY and SSH to 192.168.1.245 on port 22.
2. Log on using the account root and the password f5training.
3. Enter tmsh by entering the command:

[root@bigip1:Active:Standalone] config # tmsh

4. You should now have entered tmos, indicated by the (tmos)# prompt.
5. In order to view the statistics of https_pool, please enter the following command:

admin@(bigip1)(cfg-sync Standalone)(Active)(/Common)(tmos) # show /ltm pool https_pool

6. In order to view the statistics of vs_https, please enter the following command:

admin@(bigip1)(cfg-sync Standalone)(Active)(/Common)(tmos) # show /ltm virtual vs_https

7. Log out from the terminal client by issuing the commands:

admin@(bigip1)(cfg-sync Standalone)(Active)(/Common)(tmos) # quit
[root@bigip1:Active:Standalone] config # exit
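As a supplementary note before you log out, tmsh can also break the pool statistics down per member, which is handy when comparing the ratio results in this exercise; this command is not part of the original steps:

admin@(bigip1)(cfg-sync Standalone)(Active)(/Common)(tmos) # show /ltm pool https_pool members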

Exercise 3.2 – Configuring Priority Group Activation

Exercise Summary

In this exercise, we’ll configure Priority Group Activation. We’ll perform the following:

▪ Enable Priority Group Activation for the http_pool.
▪ Observe the behaviour.



Exercise Prerequisites

Before you start this lab exercise, make sure you have the following:

▪ Network access to the BIG-IP system’s management port.
▪ One or more servers configured on the internal network to which we can load balance traffic. This should already be configured during the Building a Test Lab chapter.
▪ Created the http_pool and the associated vs_http from the previous exercises.

Configure Priority Group Activation

1. Open up a browser session to https://192.168.1.245 and login using the admin credentials.
2. Navigate to Local Traffic > Pools and in the Pool List, click on http_pool. This will open up the configuration for the http_pool.
3. Click on the Members tab and make sure the pool configuration is configured as follows:

Local Traffic > Pools: Pool List > http_pool
Configuration
  Load Balancing Method: Round Robin
  Priority Group Activation: Less than 2 Available Member(s)
When done, click Update

4. Once you have updated the configuration, click on the pool member 172.16.100.2 to enter its configuration page.
5. When you have entered the configuration of the pool member 172.16.100.2, change the Priority Group setting to 4 and click Update.
6. Go back to the members list by clicking the Members tab.
7. Once you are back at the http_pool member list, click on the pool member 172.16.100.3.
8. When you have entered the configuration of the pool member 172.16.100.3, change the Priority Group setting to 4 and click Update.
9. Go back to the members list by clicking the Members tab.
10. When you are done, the http_pool member list should be configured as follows:

Local Traffic > Pools: Pool List > http_pool
Member          Ratio   Priority Group
172.16.100.1    1       0
172.16.100.2    1       4
172.16.100.3    1       4

11. Before we go ahead and verify that the changes were successful, clear the statistics by going to Statistics > Module Statistics > Local Traffic.
12. On the Statistics > Module Statistics > Local Traffic page, select the Statistics Type: Pools.
13. On the upper left corner of the table, click on the Check Box to highlight all of the pools and pool members and click Reset on the bottom left corner of the table.
14. When the statistics have been reset, open up a new browser session to http://10.10.1.100



15. Refresh the page 5-10 times by pressing Ctrl+F5.
16. Review the statistics once again. What are the results?
17. Reset the statistics once again for pool http_pool.
18. Navigate to Local Traffic > Pools > http_pool, click on the checkbox for pool member 172.16.100.2 and click on Disable.
19. Head back to your browser session to http://10.10.1.100 and refresh 5-10 times.
20. View the pool statistics once again, what are the results?
21. Re-enable the pool member 172.16.100.2 by navigating to Local Traffic > Pools > http_pool, clicking on the checkbox for pool member 172.16.100.2 and clicking on Enable.

Expected Results

When you configure 172.16.100.2 and 172.16.100.3 with the Priority Group 4, they will be in the same priority group and since priority group 4 is the highest group, they will be the primary group to receive traffic. Since we are presently load balancing traffic using round robin, in the first attempt pool members 172.16.100.2 and 172.16.100.3 will receive an even amount of traffic. Remember this is dependent on how many times you refreshed the page. We configured priority group activation to require at least 2 active members. When we disabled pool member 172.16.100.2, there was only one member active in the priority group (172.16.100.3) which caused the next priority group to be activated. In this case, it was Priority Group 0 where 172.16.100.1 is a member. Therefore, in our second attempt we should see traffic going to pool members 172.16.100.1 and 172.16.100.3. Again, since we are load balancing traffic using round robin, we should see an equal number of requests depending on how many times you refreshed the page.

Exercise Clean-Up

1. Navigate to Local Traffic > Pools and in the Pool List, click on http_pool. This will open up the configuration for the http_pool.
2. Disable Priority Group Activation.
3. Navigate to each pool member and configure the following settings:

Local Traffic > Pools: Pool List > http_pool
Member          Ratio   Priority Group
172.16.100.1    1       0
172.16.100.2    1       0
172.16.100.3    1       0


Exercise 3.3 – Configuring FallBack Host

Exercise Summary

In this exercise, we’ll configure a FallBack Host for the vs_http. In case all of the pool members go down, we’ll redirect clients to a different webpage. In this lab, we’ll perform the following:

▪ Create a custom HTTP profile and assign it to vs_http.
▪ Disable all pool members in http_pool.
▪ Observe the behaviour.

Exercise Prerequisites

Before you start this lab exercise, make sure you have the following:

▪ Network access to the BIG-IP system’s management port.
▪ One or more servers configured on the internal network that we can load balance traffic to. This should already be configured during the Building a Test Lab chapter.
▪ Created the http_pool and the associated vs_http from the previous exercises.

Configuring a FallBack Host

1. Open up a browser session to https://192.168.1.245 and login using the admin credentials.
2. Navigate to Local Traffic > Profiles > Services > HTTP and in the upper right corner click Create. This will launch the New HTTP Profile page.
3. On the Local Traffic > Profiles: Services: HTTP > New HTTP Profile… page, add the following configuration. The new profile will inherit its settings from the default profile. Break the inheritance by clicking the Custom box for each setting that you would like to modify.

Local Traffic > Profiles: Services: HTTP > New HTTP Profile…
General Properties
  Name: fallback_host
  Parent Profile: http
Settings
  Fallback Host: http://www.google.com
When done, click Finished



4. Navigate to Local Traffic > Virtual Servers > vs_http and configure the following settings:

Local Traffic > Virtual Servers: Virtual Server List > vs_http
Configuration
  HTTP Profile: fallback_host
When done, click Update

5. Open up a browser session to http://10.10.1.100. What are the results?
6. Navigate to Local Traffic > Pools > http_pool.
7. Click on the Members tab and select the checkbox in the upper left corner of the table in order to select all pool members.
8. After you have selected all pool members click Disable.
9. Go back to the browser session opened towards http://10.10.1.100 and press Ctrl+F5. What are the results?

Expected Results

When first attempting to access http://10.10.1.100 you should successfully connect to the virtual server and the pool members. However, when you have disabled all pool members and performed a hard refresh of the browser session, you should have been redirected to http://www.google.com.

Exercise Clean-Up

1. Navigate to Local Traffic > Virtual Servers > vs_http and change the following configuration:

Local Traffic > Virtual Servers: Virtual Server List > vs_http
Configuration
  HTTP Profile: None
When done, click Update

2. Navigate to Local Traffic > Pools > http_pool.
3. Click on the Members tab and select the checkbox in the upper left corner of the table in order to select all pool members.
4. After you have selected all pool members click Enable.

Chapter Summary

▪ There are two different high-order types of load balancing methods, Static and Dynamic.
▪ For some Load Balancing Methods, you can decide to base them on the pool member level or the node level. This function helps the administrator create a more even load balancing when the end-server is hosting multiple services.
▪ The Dynamic Ratio load balancing method distributes connections based on information the BIG-IP has gathered from the server itself. In order to gather this data, we need to assign a Performance Monitor on the node.
▪ Priority Group Activation gives you the ability to create multiple prioritised ‘groups’ of pool members for redundancy. The primary group is used during normal operations and if the primary group’s active members fall below a configured threshold, secondary groups will activate until the minimum number of active pool members is reached.
▪ Whenever the pool has no available pool members and the virtual server is down, you can configure the BIG-IP to send back an HTTP redirect to an apology server. This is known as a Fallback Host and is configured under the HTTP Profile.

Chapter Review

1. Which of the following load-balancing methods distributes requests in a simple and even manner?

a. Ratio
b. Least Connections
c. Round Robin
d. Predictive

2. Which is true regarding Dynamic Load-Balancing Methods?

a. They load balance traffic based on actual data rather than specified numbers.
b. They always load balance traffic in an even manner.
c. They only distribute requests based on data gathered from the end-servers.
d. Dynamic Load-Balancing Methods are beneficial for times when the load of the servers is different.



3. Which pool member will receive the next request? (Least Connections)

a. Pool member 1
b. Pool member 2
c. Pool member 3
d. Pool member 4

4. Which pool member will receive the next request? (Ratio)

a. Pool member 1
b. Pool member 2
c. Pool member 3
d. Pool member 4



5. Which Load Balancing Method keeps track of the number of outstanding layer 7 requests?

a. Least Sessions
b. Fastest
c. Observed
d. Predictive

6. Which Load Balancing Method evaluates each second the current active layer 4 connection count and assigns a ratio based upon the current connection count?

a. Least Connections
b. Ratio Least Connections
c. Observed
d. Predictive

7. Which pool member will receive the next request? (Round Robin)

a. Pool member 1
b. Pool member 2
c. Pool member 3
d. Pool member 4



Chapter Review: Answers

1. Which of the following load-balancing methods distributes requests in a simple and even manner?

a. Ratio
b. Least Connections
c. Round Robin
d. Predictive

The correct answer is: c

The Round Robin load balancing method is very simplistic in its nature and distributes connections evenly between all available pool members. This load balancing method is suitable if all servers have equal performance.

2. Which is true regarding Dynamic Load-Balancing Methods?

a. They load balance traffic based on actual data rather than specified numbers.
b. They always load balance traffic in an even manner.
c. They only distribute requests based on data gathered from the end-servers.
d. Dynamic Load-Balancing Methods are beneficial for times when the load of the servers is different.

The correct answer is: a

The Dynamic Load-Balancing methods distribute traffic to each pool member based on the servers’ performance or some other metric(s). When each load balancing decision is made, the BIG-IP device will evaluate the current metric(s) in order to select a pool member. Dynamic Load-Balancing methods do not always load balance traffic in an even manner; take Observed and Predictive as examples. And they do not always distribute requests based upon data gathered from the servers. The only load balancing method that does this is Dynamic Ratio. Lastly, some dynamic load-balancing methods do not take the end-server’s load into consideration. One example is the least connections load balancing algorithm, which bases its data upon the connection count and will keep this as even as possible between all end-servers.

3. Which pool member will receive the next request? (Least Connections)

a. Pool member 1
b. Pool member 2
c. Pool member 3
d. Pool member 4

The correct answer is: a



4. Which pool member will receive the next request? (Ratio)

a. Pool member 1
b. Pool member 2
c. Pool member 3
d. Pool member 4

The correct answer is: c

5. Which Load Balancing Method keeps track of the number of outstanding layer 7 requests?

a. Least Sessions
b. Fastest
c. Observed
d. Predictive

The correct answer is: b

The Fastest load balancing method keeps track of the number of outstanding layer 7 requests. An outstanding request is one where the BIG-IP has sent an application layer protocol request to the server but has yet to receive a response back. The BIG-IP device is waiting for the server to respond and keeps track of all of these connections.

6. Which Load Balancing Method evaluates each second the current active layer 4 connection count and assigns a ratio based upon the current connection count?

a. Least Connections
b. Ratio Least Connections
c. Observed
d. Predictive

The correct answer is: c

This method distributes connections based on the number of current active layer 4 connections each pool member or node has. Every second the BIG-IP looks at the current active connection count and assigns a ratio to each pool member or node. The pool member or node with the least amount of connections will receive a higher ratio and the one with the most connections will receive a lower ratio.

7. Which pool member will receive the next request? (Round Robin)

a. Pool member 1
b. Pool member 2
c. Pool member 3
d. Pool member 4

The correct answer is: c



7. Monitors

One of the primary functions of the BIG-IP LTM is to load balance traffic between servers. Another core function is to monitor the real servers and their applications to make sure they are running properly and to make informed load balancing decisions. If a server is not responding with the appropriate data to the user (or at all), we need to ensure that it will not receive any further requests by marking it as offline. A well configured monitor will do this automatically.

Overview

In this chapter, we’ll discuss the multitude of monitors and monitoring features the BIG-IP LTM has available to help check that your servers and applications are running as they should and make intelligent load balancing decisions. Be aware, monitoring is a significant subject, often misunderstood and underestimated. Monitoring is a challenging business even at a basic functional level. The BIG-IP must send probes, packets and requests, keep track of them all, record responses (or lack thereof), keep and update timers, mark pool members offline or available, inform the wider system of status and do so on a continuous basis. It is quite a challenging task and the wide range of different monitor types surely makes it harder.

BIG-IP LTM supports an extensive set of health and performance monitors which we’ll cover in some detail. If a node or pool member shows any sign of degraded performance, abnormal responses or unavailability according to a monitor, BIG-IP LTM can mark the host as unavailable and will not send new connections to it. It may also redirect traffic to another host or, if none are available and HTTP is in use, respond to the client with a redirect to a “sorry server” web page (as we covered in the earlier Load Balancing Methods chapter). The BIG-IP device offers many preconfigured monitor templates that can be used with minimal additional configuration.

Following is a list of all the network objects that can be monitored by a BIG-IP device, depending on the module or modules installed:

Local Traffic Manager (LTM)
▪ Nodes
▪ Pools
▪ Pool Members

DNS (formerly GTM)
▪ Links
▪ Servers
▪ Virtual Servers
▪ Pools
▪ Pool Members

Link Controller
▪ Links
▪ Pools
▪ Pool Members



Consider that the more sophisticated a monitor is, the heavier a burden it places on the monitoring system within TMOS (and likely on your real servers). This should not put you off using such monitors but do be aware of the potential load they may add and plan accordingly. Alternatively, you may want to consider keeping the monitor simple and moving some of the more resource intensive checks to the servers themselves. For instance, a small program on the server could check a number of metrics (and perhaps ‘downstream’ connectivity) and report status via a specific webpage. The BIG-IP could then use a simple HTTP monitor to check that the page contains the text OK.

You should also keep in mind that monitors (or the monitoring system) can be exploited for more than just checking server health or performance. For instance, monitoring for the existence of a web page can be used to take servers offline automatically by simply removing the page from the server. This is particularly useful when change control on your load balancers is stricter than on your servers. If you also have a separate server team, this gives them the ability to perform server maintenance without your involvement.
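As a rough illustration of the status-page approach described above, a custom HTTP monitor built on the pre-configured http template could be created from tmsh. The monitor name, page path and expected text below are hypothetical examples, not values from any F5 deployment guide:

# Custom HTTP monitor that requests a server-side status page and expects the text OK
tmsh create /ltm monitor http http_ok_check defaults-from http send "GET /serverstatus.html\r\n" recv "OK"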

Health Monitors

The purpose of a health monitor is to make sure an application is available and delivering suitable responses to the user. This is done using one or more health monitors, each sending specific requests to pool members or nodes and then expecting a specific response within a specified time period. If the BIG-IP system does not receive a response back from one or more of the health monitors (depending on your configuration) within the configured timeout period, or if the response does not match what was expected, the BIG-IP system will mark the pool member/node as offline and will no longer send new connections or application layer requests to that host. Even though the host is marked offline, the BIG-IP system will continue to monitor it and, once it sends back a response that matches the configuration, the host is marked as available again and new connections and requests will be sent to it as normal.

Performance Monitors

The purpose of a Performance monitor is to collect and review performance information from the host to which it is assigned. The BIG-IP system will use this information to make load balancing decisions. If the performance for a specific host is degraded or its load is excessive, the BIG-IP system will redirect traffic to another host until the performance or load returns to its normal level.

Intervals & Timeouts

Just before we get into the detail, it’s worth mentioning that time can be challenging. It is something everyone thinks is easy (we can all tell the time, right?) but which in reality can be pretty hard to deal with logically. Consider the fact that some days have 23 hours, some 25 - or ask yourself if an unhandled leap second will have a positive or negative impact on monitoring. Don’t be fooled, and think carefully when considering the operation of the ‘sliding’ time window you effectively create with the combination of a monitoring interval and timeout.



When you configure a health monitor, you must set Interval and Timeout values.

▪ The Interval defines how often the monitor’s test will run.
▪ The Timeout defines how long the monitor will wait for a successful response to any check sent within that time window before it marks the resource as offline.

The default Interval is 5 seconds and the default Timeout is 16 seconds. F5 recommends that you configure your Timeout value to three times the Interval setting plus one second. Using the default settings this means (5 x 3) + 1 = 16. The health monitor will send a check every 5 seconds and if no successful response is received from any of them within 16 seconds the BIG-IP device will mark the resource as offline. The reasoning behind these recommended values is most easily explained with an example. With a successful monitor, the BIG-IP system sends out requests to the specified resource and it responds properly:

1. The BIG-IP device sends out its first test to the resource and the timeout counter starts. The test is successful and therefore the BIG-IP device marks the resource as available. It also resets the timeout counter because the test was successful.
2. Since we have configured the Interval value to 5 seconds, the BIG-IP device sends out another check after 5 seconds have passed. Again, the test is successful, so the BIG-IP continues to consider the resource available and again resets the timeout counter.

In our next example, the monitor fails its tests because the specified resource does not respond within the timeout period:



1. The BIG-IP device sends out its first check to the specified resource. The timeout counter is currently at 0 seconds. The test is not successful, so the BIG-IP does not reset the timeout counter.
2. Since we have configured the Interval setting to 5 seconds, the BIG-IP sends a second check after 5 seconds. We still have not received any response from the resource and the timeout counter continues to grow.
3. At the 10-second mark, a third check is sent to the resource. There is still no response back from the pool member.
4. At the 15-second mark, a fourth check is sent to the pool member. Yet again we do not see a response back from the pool member.
5. At the 16th second, the BIG-IP system marks the pool member as offline because the timeout counter has reached its configured value.

The reason why F5 recommends these settings is that you want to be as sure as possible that the resource really is offline. The extra second provides a small additional window for the last check to be responded to. Using the above example as a reference, the BIG-IP system will send four checks before marking the host as offline, which means that you can be reasonably sure that the resource actually is offline. This is just a recommendation and the settings you use are completely up to you and will depend on the type of application you are providing through your BIG-IP system. In some scenarios, you might want to increase the Interval value in order to lower the amount of traffic the BIG-IP system generates while monitoring nodes and pool members.



A simple ICMP request will have a very low impact on your network, but an HTTP monitor that establishes a new TCP connection and sends a GET request every 5 seconds might result in high traffic loads.
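For example, a less chatty ICMP monitor could be created from tmsh with a 30-second interval and, following the 3n+1 guideline, a 91-second timeout. This is a minimal sketch; the monitor name is illustrative:

# ICMP monitor probing every 30 seconds; timeout = (30 x 3) + 1 = 91 seconds
tmsh create /ltm monitor icmp icmp_30s defaults-from icmp interval 30 timeout 91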

Temporarily Failed Monitors

The BIG-IP device uses health monitors to periodically monitor nodes and pool members at a frequency specified with the Interval setting. As we previously mentioned, if the health monitor fails to get a response back from the resource within the Timeout period, the BIG-IP system will mark the resource as offline, which is also called a permanent failure. But if the BIG-IP system constantly monitors a resource which is responding accordingly, what happens if there is a short network outage? If the connection between the BIG-IP system and the resource has briefly failed and no response to the monitor check reaches the BIG-IP, it will identify the resource as suspect. Note that the Timeout period has not yet been reached so the BIG-IP is not marking it as offline. When a resource is identified as suspect, it will not receive any new connections. But the BIG-IP will maintain the existing connections that it has established to the resource. If the network outage is quickly resolved and the resource is once again responding to the health monitor checks before the timeout value is reached, the BIG-IP will no longer consider it as suspect and new connections will be sent to it.

Where Can You Apply Health Monitors?

There are presently four different ways to apply health monitors. You can apply monitors to all nodes when they are created using what is known as the Default Monitor, which is configured under Local Traffic > Nodes > Default Monitor. The monitor you assign to the Default Monitor will be automatically assigned to all nodes configured on the BIG-IP system which currently do not have a Node Specific monitor. The most commonly used Default Monitor is the ICMP monitor.
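If you prefer the CLI, the same Default Monitor assignment can be made from tmsh. This is a minimal sketch, assuming the built-in icmp monitor and the rule syntax of recent TMOS versions:

# Assign the icmp monitor as the Default Monitor for all nodes
tmsh modify /ltm default-node-monitor rule /Common/icmp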



You can also override the Default Monitor setting by going to Local Traffic > Nodes and then clicking on one of the nodes to enter its configuration. Under Configuration > Health Monitors you can select Node Specific where you can apply a specific monitor for that particular Node. This is displayed in the following image:

You can also apply health monitors to pools which will automatically assign these monitors to every pool member present in the pool. If you add a new pool member to the pool, it will also automatically be assigned the monitors configured under the pool settings. This is configured under Local Traffic > Pools > Select the Pool you would like to configure > Health Monitors. This is displayed in the following image:
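The equivalent pool-level assignment can also be made from tmsh. A minimal sketch, assuming a pool named http_pool and the built-in http monitor:

# Assign the http monitor to every member of http_pool
tmsh modify /ltm pool http_pool monitor http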



Just like nodes, the pool monitor configuration can be overridden by selecting Member Specific monitors. In order to configure Member Specific monitors:

1. Log on to the WebGUI of the BIG-IP system using a web browser.
2. In the main menu, go to Local Traffic > Pools.
3. Select the Pool of which the Pool Member is a part.
4. Click on the Members tab.
5. Select the Pool Member you would like to configure.
6. Change the configuration from Basic to Advanced.
7. Under Health Monitors, change Inherit From Pool to Member Specific.
8. Select the monitors you would like to use on the Pool Member and adjust the Availability Requirement if necessary - we’ll discuss this setting later in this section.
9. Click Update to save your changes.

This is displayed in the following image:
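A member-specific monitor can likewise be set from tmsh. This is a sketch only; the pool name http_pool, the member 10.10.100.25:80 and the http_ok_check monitor (sketched earlier in this chapter) are illustrative assumptions:

# Override the pool monitor for a single pool member
tmsh modify /ltm pool http_pool members modify { 10.10.100.25:80 { monitor http_ok_check } }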



In summary, you can apply health monitors at the following locations:

▪ Node Default Monitor – Using the Default Monitor, every node will receive the monitor configured under the Default Monitor unless it is overridden by a Node Specific monitor.
▪ Node Specific Monitor – Configured under each Node.
▪ Pool Monitor – These monitors are assigned to the pools and each pool member contained in the pool will be assigned the monitor unless it is overridden by a Member Specific Monitor.
▪ Member Specific Monitor – Configured under each Pool Member under the Advanced Configuration.



Monitoring Methods

There are three high order Monitoring Methods available:

▪ Simple - is a host reachable?
▪ Active - the device actively generates application requests to confirm the host is providing the expected service.
▪ Passive - the device passively observes application requests to confirm the host is providing the expected service.

Simple Monitoring

A Simple monitoring method determines simply whether a host is available or offline. There are currently three simple monitors available:

▪ Gateway ICMP
▪ ICMP
▪ TCP_ECHO

As a simple monitor does not have that much intelligence, it does not have many configuration options available. For instance, you cannot configure any Send or Receive string as you might with an Active Monitor.

Active Monitoring

With active monitoring, the BIG-IP generates application traffic of some kind, actively probes the host and expects a specific response back from the node or pool member. This is regulated by the Send and Receive strings configured on the monitor. If the host does not respond within the configured timeout, or if the response does not match the specified Receive String, the BIG-IP will mark the host as offline. Some examples of active monitors are HTTP and FTP.

Passive Monitoring

This monitoring method is also called Inband monitoring and, as the name implies, it does not send any probes or requests to a host. Instead, it relies upon genuine end-system or user generated application traffic and monitors this for failures. In other words, it monitors traffic going to and from the pool member and if the pool member fails to respond to new incoming connections, or fails to return traffic to clients within the configured timeout period, it will mark the host as offline. The positive aspects of passive monitoring are that it does not generate any additional traffic and that it is quick to mark a pool member as offline, as long as there is a reasonable amount of network traffic. For very large pools consisting of thousands of pool members, passive monitoring may be the only viable option for monitoring the pool members.

Consider this: when using active monitoring, the BIG-IP system needs to generate a request, send it to the pool member or node and wait for a reply. It needs to do this for every pool member/node that is configured with the monitor and, using the standard values, it will do this every 5 seconds. This requires system resources and, as the number of monitor requests increases, it will gradually affect performance - in some cases to the extent that it overwhelms the box or causes the monitors themselves to fail (by reaching the timeout value). How many pool members/nodes can be monitored using active monitoring depends on the size of the BIG-IP system and what type of health monitors you use. But still, keep this in mind.



A passive monitor cannot check for specific responses and can potentially be slow to mark a pool member as available again. This is most likely caused by the lack of traffic that a pool member marked as down will receive. Currently there is only one passive monitor available and that is the Inband monitor. There are four configurable settings for the Inband monitor, as follows:

▪ Failures - This specifies the number of failed responses that a pool member may send (within the Failure Interval) before the BIG-IP device marks the pool member as offline. The total number of failures can be a combination of both failed connection attempts and failed responses. The default value is 3. If your BIG-IP device uses multiple tmm processes, the calculated Failures number may be based on per-process failures. This depends on the load balancing algorithm you are using.
▪ Failure Interval – This specifies that if the BIG-IP system receives the specified number of Failures within the Failure Interval, then the BIG-IP device will mark the pool member as offline. The default value is 30 seconds.
▪ Response Time – This specifies the number of seconds within which the pool member has to respond with data. If the pool member responds, but after the configured time, the monitor will report this as a failure, even if the response is valid. This feature can be disabled by entering a value of zero (0). The default value is 10 seconds.
▪ Retry Time – Specifies the number of seconds that the BIG-IP device waits after marking a pool member as offline before the monitor starts requesting the status from the pool member again. The default value is 300 seconds.

To give you an example: When a client is sending a request to a virtual server, the BIG-IP will select the best available pool member and send the request to that pool member. If the pool is configured to use an Inband monitor, it will monitor the response back from the pool member. If the BIG-IP fails to establish a connection to the pool member or does not receive a response within the Response Time (which is by default 10 seconds), it will classify this as a failure. The Inband monitor’s default settings state that it must detect at least 3 Failures before the monitor marks it as offline and these failures must also occur within the Failure Interval, which is, by default, 30 seconds. This means that as long as the Failures do not reach 3 within the Failure Interval of 30 seconds, the pool member will not be marked as offline and continue to receive requests. It is not until we have detected 3 or more failures within the Failure Interval of 30 seconds that the pool member is marked as offline. When the pool member has been marked as offline, the Retry Time kicks in. This setting makes sure that the pool member will not receive any new client requests until this Retry Time has passed, which is, by default, 300 seconds. When the Retry Time has passed, the BIG-IP will again try to send a client request to the pool member. If it receives a response, it will mark the pool member as available, but if it still does not receive a reply, it will continue to be marked as offline.
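A custom Inband monitor with these settings could be sketched in tmsh roughly as follows; the property names shown (failures, failure-interval, response-time, retry-time) mirror the settings described above but should be verified against your TMOS version:

# Inband (passive) monitor: 3 failures within 30 seconds marks the member down,
# responses slower than 10 seconds count as failures, retry after 300 seconds
tmsh create /ltm monitor inband inband_custom failures 3 failure-interval 30 response-time 10 retry-time 300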



Do remember that this monitor is completely dependent on client traffic. The BIG-IP will only check the responses to clients coming from the pool members/nodes. This monitor is often used in conjunction with an Active Monitor to provide a measure of additional protection against intermittent failures where the active Monitor traffic is often successful but failures with ‘real’ traffic are occurring.

Benefits and Drawbacks With Passive and Active Monitoring

Active Monitoring

Benefits:
▪ More effective when identifying pool members or nodes as available.
▪ Can use Service and Content checks.

Drawbacks:
▪ It creates additional network traffic.
▪ Uses additional system resources on both the BIG-IP device and the pool members.
▪ Can be potentially slow when identifying members as offline.

Passive Monitoring

Benefits:
▪ It examines real client requests.
▪ Does not create additional traffic.
▪ Does not use any additional system resources on either the BIG-IP device or the pool member.
▪ More effective in identifying pool members as offline.

Drawbacks:
▪ Cannot verify content or that services are running.
▪ Can be potentially slow when identifying members as available.

Types of Monitors

Now that we’ve covered the basic monitor settings, usage, areas of application and methods, we’re ready to look at the types of monitors available. There are many pre-configured monitors included with a BIG-IP device but you can also create your own custom ones. Custom monitors are created using one of the pre-configured monitors as a template. An administrator can modify various settings including: send & receive strings, timeout and interval.



Whether a monitor is pre-configured or custom, they are all divided into different types or categories based upon what they are measuring or ‘checking’. Those categories are:

▪ Address
▪ Application
▪ Content
▪ Performance
▪ Path
▪ Service

Each of these categories has its own special purpose and function. Just to add to the fun, some monitors can be considered to be of more than one type.

Address Check Monitor

An address check monitor uses ICMP to verify that an IP address on the network is reachable. It simply sends a request to the specified IP address, and if a response is received the check passes. An address check monitor is associated with a node (in nearly all cases, one cannot be used with a pool or pool member). As noted earlier, a node monitor will also affect the availability of any pool members that the node forms a part of (its IP address and a service port). This means that if an address check monitor marks a node as offline, it will therefore mark any pool member(s) the node forms a part of as offline as well. This is regardless of the success of the monitors that are assigned to the pool member. Once the address check monitor has marked the node as available again, any related pool members will also be marked as available (but only if the monitors assigned to the pool member are also successful).

Currently the only address check monitor that can be assigned to a pool member or pool (as it has a special purpose) is the Gateway ICMP monitor. Even though the Gateway ICMP monitor is assigned to the pool member (IP address and service port), it still only verifies the IP address. The benefit of this is that when the address check fails, only the pool member is marked offline; the node’s status does not change.

The most common scenario for using the Gateway ICMP monitor is a function called Gateway Fail-Safe. Consider a scenario where you have two BIG-IP systems configured in a high-availability redundant setup that each have their own upstream gateway (ISP). BIGIP1 has the upstream gateway of 1.1.1.1 and BIGIP2 has the upstream gateway of 2.2.2.2. You then create one pool named gateway_pool1 that contains pool member 1.1.1.1 and another pool named gateway_pool2 that contains the pool member 2.2.2.2. For both of these pools, you assign a Gateway ICMP monitor that pings these addresses. Finally, you add each pool to the Gateway Fail-Safe configuration on each BIG-IP system. If the Gateway ICMP monitor fails for one of the pools, that pool will be marked as offline, which triggers the Gateway Fail-Safe mechanism and the BIG-IP will act according to your configured settings.



An address check monitor does not determine the status of an application or service and its purpose is only to verify that the host is reachable. Therefore, an address check monitor might result in marking a node as available while the pool member linked to the node is offline (because the BIG-IP cannot verify that the service is running or delivering any data). This might result in a false sense of security; just because the host is available, this does not mean the application is.

An ICMP monitor is also recommended for use in conjunction with a UDP monitor that is not configured with a receive string. In this scenario, when a UDP monitor sends a request to a pool member/node where the UDP port is unavailable, the host will return an ICMP port unreachable message. If the pool member/node is offline due to a crash or a reboot, it will not be able to send back this message and the pool member/node will still be considered available. Adding an ICMP monitor alongside the UDP monitor solves this problem.

Application Check Monitors

An Application Check Monitor interacts with pool members by sending multiple commands and/or requests and reviewing the resulting responses. One example of an Application Check Monitor is the FTP monitor which could (for example) connect to a server, log in using specified credentials, navigate to a specified folder and then download a file to /var/tmp/. If the download is successful, the resource is considered available. The process is illustrated in the following diagram:

1. First the BIG-IP device establishes a TCP session with the FTP server. This is done through the TCP 3-Way Handshake.
2. Once the TCP session is established, the BIG-IP device will log on to the FTP server using the credentials specified in the configuration of the monitor.
3. When the BIG-IP device has successfully logged on to the FTP server, it will navigate to the folder (if necessary) and request the file specified in the configuration of the monitor.
4. The FTP server will send the file to the BIG-IP device.
5. If the transfer is successful, the BIG-IP device will mark the FTP server as available.
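A corresponding FTP monitor could be sketched in tmsh like this; the credentials and file path are purely illustrative and the exact property names should be checked against your TMOS version:

# FTP application check: log in and download a test file
tmsh create /ltm monitor ftp ftp_check defaults-from ftp username monuser password monpass filename /pub/healthcheck.txt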

Content Check Monitors

Content Check Monitors do not directly verify that a service is providing the correct application level functionality, only that responses contain expected content. There is a subtle difference; the correct response may only be given if the application is functioning correctly - but it could also be given if it isn’t. It’s up to you, should you wish, to formulate a request that ‘proves’ as far as possible that the application is working. For instance, you could request the homepage from a web server and be happy as long as you get a response which contains some text string you know exists on that page. Alternatively, for a website that you know relies upon an external MySQL database, you could request a page that is populated with data from that database and then confirm the response you receive contains that data. You are only checking content but, as well as testing the web server, you are also testing its backend data source.

A content check will send an application level request, the contents of which are configured using a Send String. When the BIG-IP device receives a response back from the pool member, it will examine the contents. If the reply matches the configured Receive String, it will mark the pool member as available. If the pool member has failed to reply within the configured timeout, or if the reply does not match the Receive String, the pool member will be marked as offline.

An example of a pre-configured Content Check monitor is the HTTP monitor. This monitor has a Send String of GET /\r\n, which means it only sends a GET request for the default page of the web server (at path /). The Receive String is blank, which means that whatever response you get from the server, the pool member will be marked as available as long as one is received. One aspect of this that many overlook is that the default page of many web servers (particularly those hosting multiple sites) is the IIS or Apache standard page, which immediately reveals what software and version you are running your site with. The IT security team at your organisation will most likely disapprove and ask you to delete this default page or ‘site’. Continuing to use the standard HTTP monitor in this scenario will give the following result. The HTTP monitor will send a GET request asking for the default page of the web server, but the default page of the web server is no longer present. The web server will therefore reply with a 404 Not Found but, since the Receive String is blank, the check will be successful because a response was received. This is a very poorly configured monitor and the default settings should be changed in nearly all cases. To that end, if you’re going to use this monitor you should create a new custom one based on the HTTP monitor and at least add a suitable Receive String. Here is an example of how a custom HTTP monitor might look:

▪ Send String: GET /index.html\r\n
▪ Receive String: 200 OK
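In tmsh, such a monitor could look roughly like this (a sketch only; adjust the path and expected response text to suit your application):

# Custom HTTP content check based on the built-in http monitor
tmsh create /ltm monitor http http_index_check defaults-from http send "GET /index.html\r\n" recv "200 OK"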



It is critical to configure the receive string so that it contains content that guarantees, as far as possible, that the correct content is being served. In order to create a sophisticated yet simple monitor, you will most often need to involve the application team that manages the application itself. Using a web application as an example, let’s imagine it is dependent on a SQL server for some of its content and without it, parts of the website will not function properly. The application team can create a standalone web page that runs a script that performs certain actions towards the back-end SQL server and, if successful, the returned web page will contain the text “All is good”. The actions that the script performs could, for instance, be some SQL queries towards the SQL server; if these queries are successful you can be sure the SQL server is OK. However, if the SQL queries fail, the web page could instead contain the text “SQL server is down”. Let’s say this web page is located at the URI application_monitor.aspx. From our perspective as BIG-IP administrators, we would just have to create a Content Check Monitor with the following values:

▪ Send String: GET /application_monitor.aspx\r\n
▪ Receive String: “All is good”

The BIG-IP would send a GET request to the server requesting the application_monitor.aspx web page. If the response contains the text “All is good”, the monitor will be successful. If it instead contained the text “SQL server is down”, the monitor would fail.

Performance Check Monitors

We previously discussed the Dynamic Ratio load balancing method and how it uses the SNMP DCA monitor in order to load balance traffic based on each pool member’s CPU, memory and disk usage. The SNMP DCA monitor is an example of a Performance Check Monitor. The SNMP DCA monitor retrieves information by fetching performance data from a server running an SNMP agent. Using this performance data, the BIG-IP system assigns each node/pool member a certain weight. This weight is then used when load balancing decisions are made. Remember that performance check monitors add additional load on the end-servers and the BIG-IP system, since the BIG-IP system must generate traffic in order to request information from the end-servers. It also adds additional load on the BIG-IP system because it has to calculate the weights. Avoid using overly complex monitors if they are not necessary, to minimise the load on both the end-servers and the BIG-IP system.

Path Check Monitors

Path Check Monitors are also referred to as Transparent Monitors and they determine whether traffic can flow through a device. The path check monitor is successful when traffic can flow through, for example, routers or firewalls. One common scenario is when you have multiple upstream gateways (ISPs) configured on the BIG-IP system and placed in a gateway pool.



The upstream gateways in this scenario are firewalls that route traffic out to the Internet and are pool members in a gateway_pool. This gateway pool is then used as the default route for the BIG-IP system, providing access towards the Internet. Since the firewalls are pool members in a pool, the BIG-IP system will load balance traffic just like any other pool, and in order to make sure that the firewalls are functioning correctly, you will need to create monitors. An example would be to check whether the Google DNS server at 8.8.8.8 is reachable through the firewalls. You do this by creating a Gateway ICMP monitor like the one in the following diagram:

Under the Alias Address, we configure the IP address of Google DNS server, which is 8.8.8.8 and then we check Yes under Transparent. If we just specify the Alias Address without enabling the Transparent setting, the BIG-IP system will simply send the ICMP Request to the Alias Address using the most specific route in its routing table. What the Transparent setting actually does is to send the request to the pool member on its layer 2 MAC address but using a different destination IP address. Therefore, in our case we would send the ICMP Request to the layer 2 MAC address of the firewall, but the IP destination would be 8.8.8.8. This makes the request flow through the device. This means that the BIG-IP system needs to have a layer 2 connection with the pool member/node in order for the transparent setting to work. The complete scenario is displayed in the following diagram:



Only these monitors support the transparent setting:

▪ TCP
▪ HTTP
▪ HTTPS
▪ TCP Echo
▪ TCP Half Open
▪ ICMP
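The Gateway ICMP monitor from the scenario above could be sketched in tmsh as follows; the monitor name is illustrative and the destination format shown (IP with a wildcard port) is an assumption worth verifying on your version:

# Transparent monitor: probe 8.8.8.8 through the pool member (the firewall)
tmsh create /ltm monitor gateway-icmp gw_icmp_google defaults-from gateway_icmp destination 8.8.8.8:* transparent enabled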

Service Check Monitors

The purpose of a service check monitor is to verify that the IP address and service (port) are up and running. One example of a service check monitor is the pre-configured TCP Half Open monitor. This monitor sends a TCP SYN packet to the pool member and if the pool member sends back a SYN-ACK, the monitor is successful. The BIG-IP device then closes the connection by sending a reset packet (RST). If the monitor does not get a SYN-ACK response from the pool member on the specified port, the monitor will fail. A Service Check Monitor only determines whether the service is running on the specific pool member. It does not determine whether the content the service is providing is correct.
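As a rough sketch, a custom TCP Half Open monitor could be created from tmsh like this (the name and timing values are illustrative):

# Service check: send SYN, expect SYN-ACK, then reset the connection
tmsh create /ltm monitor tcp-half-open tcp_half_check defaults-from tcp_half_open interval 5 timeout 16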



Monitors - Advanced Options

In this section we’ll briefly cover some more advanced monitor settings and considerations before we move on to monitor logging in some detail.

Slow Ramp Time

The Slow Ramp Time setting is designed to protect pool members from being overloaded with connection requests. When a pool member changes status to available (after previously being marked as offline by a monitor, or disabled/forced offline by an administrator), it will begin to receive new connections, possibly to such an extent that it becomes overloaded. To prevent this, the Slow Ramp Time (enabled by default) is used to slowly increase the number of connection requests that are load balanced to the pool member. The setting is configured in seconds and specifies how long the pool member stays in Slow Ramp Time. While the pool member is in Slow Ramp Time, the number of connections it receives is a percentage of the connections that would have been load balanced to it had the Slow Ramp Time setting been disabled; with each second that passes, the percentage increases, so the connections sent to the pool member slowly grow.

To present an example: suppose we configure a Slow Ramp Time of 10 seconds and the pool member that changed status to available would otherwise receive 1000 connections immediately (based on the load balancing method). With Slow Ramp Time, during the 1st second it would only receive 1/10 of the connections, resulting in 100 connections. During the 2nd second it would receive 2/10 of the connections, resulting in 200 connections. This continues until the full 10 seconds have elapsed, after which it will receive its full proportion of incoming traffic. Therefore, instead of receiving 1000 requests right away, the pool member gradually receives the connection requests.
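Slow Ramp Time is a pool property; a minimal tmsh sketch, assuming a pool named http_pool:

# Ramp connections to a recovering member up over 10 seconds
tmsh modify /ltm pool http_pool slow-ramp-time 10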

Multiple Monitors & the Availability Requirement

You can assign multiple monitors to a Node, Pool or Pool Member. If you do, you will also need to think carefully about the Availability Requirement setting. The Availability Requirement setting defines how many monitors must report a pool member or node as being available before that pool member/node is considered available. The default setting is All, which means that all of the monitors you assign must mark the pool member/node as available.

By default, the BIG-IP system logs all monitor status changes to the /var/log/ltm log file. However, these log entries do not state exactly which monitor changed the status of the object. This is a problem when assigning multiple monitors to a pool member or node, since you will be unable to determine which of the monitors changed the status of an object. This was addressed in BIG-IP v11.4, which lets you globally enable logging of monitor status changes. To globally enable monitor status logging, please use the instructions in the Monitor Status Logging section.
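From tmsh, the monitor rule and the Availability Requirement are expressed together. A sketch, assuming a pool named http_pool that should be considered available when at least one of two built-in monitors succeeds:

# Require at least 1 of the listed monitors to succeed for a member to be available
tmsh modify /ltm pool http_pool monitor min 1 of { http tcp }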



Manual Resume

This feature is a simple one but requires careful consideration of its operational implications if used. When enabled (it is disabled by default), a Node or Pool Member with a Monitor assigned cannot return to the Available state on its own after being marked Offline. The Manual Resume setting sets the object as disabled once the health monitor fails, which means that it can only become Available again if an administrator changes the state of the object to Enabled.
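Manual Resume is a property of the monitor itself; a minimal sketch of a custom HTTP monitor with it enabled (the name is illustrative):

# Monitor that requires an administrator to re-enable the object after a failure
tmsh create /ltm monitor http http_manual_resume defaults-from http manual-resume enabled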

Monitor Reverse Option

Where a ‘normal’ monitor marks an object as Available when the probe is successful, a reverse monitor marks the object as offline. You would use this feature when it is difficult to identify a successful reply. Instead, the monitor will mark the pool member or node as offline when it detects a match (a negative is often easier to test for than a positive). As an example, when probing a page on a monitored website, it may be easier to test for the presence of the text error in a response than to test for success.

Only these Monitors support Reverse operation:

▪ TCP
▪ HTTP
▪ HTTPS
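A reverse monitor could be sketched like this; the page requested and the text being matched are hypothetical:

# Mark the member DOWN when the response contains the word error
tmsh create /ltm monitor http http_error_check defaults-from http send "GET /status.html\r\n" recv "error" reverse enabled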

Monitor Instances

When you associate a Monitor with a Node, Pool or Pool Member, a dedicated instance of that Monitor is created. Where a Pool is concerned, an instance is created for every member in the Pool. A single Monitor can therefore have multiple instances, one for each Node and/or Pool Member it is associated with.

Administrative Partitions

Be aware that because Monitor Instances are not partitioned objects (unlike the actual ‘source’ Monitor object(s) you configure), a user can disable or enable Monitor Instances associated with Nodes or Pool Members they otherwise have no administrative access to. In order to avoid this, you should ensure all Nodes, Pools and Pool Members associated with Monitor instances reside in the same Administrative Partition.

Firewalls

From an IP perspective, the BIG-IP system will send health checks using the IP address of the non-floating self IP of the egress VLAN where the nodes or pool members reside. In other words, the BIG-IP system will check its own routing table, determine which VLAN the nodes or pool members are reachable through and send the traffic out through that VLAN, using the non-floating self IP address. If connectivity to the Nodes or Pool Members being monitored is established through a firewall, relevant rules to permit the traffic will be required.



Testing

If you would like to test any Health Monitor against a particular server (before you use it for real) you can do so at the CLI as follows:

$ [tmsh] run util test-monitor 'monitor_name' 'ip_address' ['port']

Monitors - Logging

Sometimes it is necessary to understand what is actually causing a monitor check to fail and mark an object as offline. One example I have experienced is the remote desktop monitor that is defined in the Deploying F5 with Microsoft Remote Desktop Services deployment guide. In this guide, you will find information on how you should configure the send and receive strings, which are a combination of different hexadecimal characters. I applied the monitor to the pool, but half of the remote desktop servers were marked as offline. I enabled Monitor Action Logging and in the logs I could see that the pool members that failed were sending back a different hexadecimal code. This is because Microsoft had changed the code in a recent patch. In order to get the monitor working with the failed pool members, I had to add the new hexadecimal code to the receive string of the monitor.

Monitor action logging can be enabled on a per Node and Pool Member basis since v11.4. All messages are logged to the directory /var/log/monitors. When monitor logging is enabled, separate log files are created for each node or pool member that has the feature enabled, one for every monitor assigned to the node or pool member. This means that if a pool member/node has multiple health monitors, multiple files will be created, one for each health monitor. In order to view their content, you can for instance use the cat command as shown earlier. If you enable this feature, it is disabled upon reboot and the setting is not synchronised. Log rotation and compression occur as normal with these files. In all cases, when monitor logging is enabled and then later disabled, any log files created will remain until deleted.

Enable Monitor Logging on Node Level

1. Log on to the WebGUI of the BIG-IP system using a web browser.
2. In the main tab, go to Local Traffic > Nodes.
3. Select the Node for which you would like to enable Monitor Logging.
4. Click the Enable checkbox for Monitor Logging.
5. Click Update to save your changes.
6. The BIG-IP system will now create log files for the monitors assigned to the node in the /var/log/monitors directory.



Enable Monitor Logging on Pool Member Level

1. Log on to the WebGUI of the BIG-IP system using a web browser.
2. In the main tab, go to Local Traffic > Pools.
3. Click on the Pool of which the Pool Member is a member.
4. Click on the Members tab.
5. Click on the Pool Member for which you would like to enable Monitor Logging.
6. Click the Enable checkbox for Monitor Logging.
7. Click Update to save your changes.
8. The BIG-IP system will now create log files for the monitors assigned to the pool member in the /var/log/monitors directory.
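The same per-object monitor logging can also be toggled from tmsh. The logging property name below is an assumption based on recent TMOS versions and should be verified for yours; the node, pool and member names are illustrative:

# Enable monitor logging for a node and for a single pool member
tmsh modify /ltm node 10.10.100.25 logging enabled
tmsh modify /ltm pool http_pool members modify { 10.10.100.25:80 { logging enabled } }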

Enabling Monitor Logging for SNMP DCA/DCA Base

The SNMP DCA and SNMP DCA Base performance monitors can also have logging enabled. This feature is not enabled using the usual Monitor Logging option on the Pool Member or Node like other monitors. Instead it is enabled by modifying the BigDB database. When enabled, the SNMP DCA and SNMP DCA Base monitors will log to the file /shared/tmp/snmpdca.log. Enabling SNMP DCA or SNMP DCA Base Monitor Logging can have a significant impact on the BIG-IP system, depending on how many nodes and/or pool members are configured with the monitor. A high number can cause the log file to grow excessively large, and F5 recommends only enabling this feature for a short period of time.

1. Launch a terminal client such as PuTTY and open an SSH session to the management IP address of the BIG-IP system.
2. Log in using the credentials you have configured for your system.
3. You will either be at a Linux host shell prompt or directly in tmsh. This is indicated by the prompt in the terminal program:
   a. Linux Host: config #
   b. TMSH: (/Common)(tmos)#
4. If you are not in tmsh, enter it by typing the following command:

tmsh

5. Enable monitor logging of SNMP by issuing the following command:

modify /sys db snmp.snmpdca.log value true

6. Save the configuration by issuing the following command:

save /sys config

7. The SNMP DCA/DCA Base will now log to the file /shared/tmp/snmpdca.log.

Disabling Monitor Logging for SNMP DCA/DCA Base

1. Launch a terminal client such as PuTTY and open an SSH session to the management IP address of the BIG-IP system.
2. Log in using the credentials you have configured for your system.
3. You will either be at a Linux host shell prompt or directly in tmsh. This is indicated by the prompt in the terminal program:
   a. Linux Host: config #
   b. TMSH: (/Common)(tmos)#
4. If you are not in tmsh, enter it by typing the following command:

tmsh

5. Disable monitor logging of SNMP by issuing the following command:

modify /sys db snmp.snmpdca.log value false

6. Save the configuration by issuing the following command:

save /sys config

Disabling Monitor Logging for the SNMP DCA or SNMP DCA Base monitors will not automatically delete the file /shared/tmp/snmpdca.log. This will have to be done manually by the BIG-IP administrator.

Object Status

As well as preventing traffic being sent to unhealthy, busy or unavailable nodes and pool members, monitoring also provides us with a real-time view of the status of the servers and other resources we direct traffic to and manage protocols for.

The Different Object Status Icons

When you are administrating a BIG-IP system it is very important to understand the different object status icons. Objects represent ‘real world’ host and service configuration elements; nodes, pool members, pools and virtual servers. All of these different objects are displayed with a certain icon depending on health monitor status, whether a connection limit has been reached or whether there is no configured health monitor. In the following diagram, you can see all of the different Object Status Icons and what they mean.



Object State

Object Status is defined by health monitor status and/or connection limit status and is identified by the shape of the indicator. Object States are defined by the BIG-IP administrator and are identified by the colour of the indicator. An administrator can change an object’s state for maintenance or other purposes. The object state can be changed through the WebGUI or tmsh. There are currently three different states:

▪ Enabled – The object is available and ready to receive traffic.
▪ Disabled – The object continues to process only existing persistent and active connections. It will still accept new incoming requests as long as there is a current and existing persistence record. It will not accept other new connections.
▪ Forced Offline – The object continues to process traffic, but only existing connections and only if they have not timed out.

When an object has been either Disabled or Forced Offline, the colour of the status icon will change to black.

▪ The shape of the object indicates its status, which is based on the health monitor or connection limit status.
▪ The colour of the object indicates its state, as set by the administrator.

Here is another diagram displaying the objects when they are in a disabled state:



To summarise, when verifying whether an object will receive traffic, you will need to be aware of both the Object State and the Object Status. For instance, if you have a monitor that marks a pool member as available (circle icon) but the BIG-IP administrator has marked it as either Disabled or Forced Offline (black) the display will show a black circle. This means that it will not accept connections unless they are related to existing persistent connections (if it is marked as Disabled).

Understanding Object Status Hierarchy As previously mentioned, the monitors for BIG-IP LTM can be assigned to nodes, pool members and pools. But one important thing to remember is that a monitor status has an effect on the virtual server as well. This is because there is a parent-child hierarchy between all of these objects. This is displayed in the following diagram:



To summarise, a child object will inherit its parent’s status. This is because there is a parent-child hierarchy between all of these objects with the node as the root. This means that for instance if a node is offline then all of the pool members assigned to that node will be offline. Another level up in the hierarchy, if at least one pool member is available then the pool will be available. But if all pool members are offline, the pool will be offline. In the next level, if the pool is offline, the Virtual Server is offline. All of this is demonstrated in the following diagram:



Since the node is Offline, all of the pool members created using that node will be Offline. This will affect the pool as the pool member will be marked Offline. However, since the pool has two other pool members that are Available the pool will be marked as Available, thus making the Virtual Server Available.



Since we can assign monitors to both nodes and pool members, you might end up in a scenario where the pool member is in fact offline while the node is available. This is because we might have a monitor on the node that is just sending ICMP requests and these are successful. However, the monitor assigned to the pool member is verifying the service running on that node and this service is presently offline, which causes the monitor to fail. This results in the pool member being Offline. This also affects the pool but since the pool has two other pool members that are Available, the pool will be marked as Available thus making the Virtual Server Available.



In this scenario, the node is Available because the ICMP monitor assigned to the node is successful. But the monitor assigned to the pool member is unsuccessful, causing it to be Offline. In this scenario, all of the other pool members within the pool are also Offline. This causes the pool to be marked as Offline and, since there is no available pool, the Virtual Server will also be marked as Offline. The final decision on whether the object will receive new incoming requests depends on both the object status and the current state of the object.



When Will the BIG-IP System Send Traffic to a Node/Pool Member?

So far, we have gone through the different object statuses and states, but when will the BIG-IP actually send traffic to the nodes/pool members? And what type of traffic? We have compiled a complete summary:

Available
▪ New Connections: Yes
▪ Active Connections: Yes
▪ Persistent Connections: Yes

Offline
▪ New Connections: No
▪ Active Connections: No
▪ Persistent Connections: No

Unavailable
▪ New Connections: No
▪ Active Connections: Yes
▪ Persistent Connections: No**

Unknown
▪ New Connections: Yes
▪ Active Connections: Yes
▪ Persistent Connections: Yes

Available (Disabled)
▪ New Connections: No
▪ Active Connections: Yes
▪ Persistent Connections: Yes

Offline (Disabled)
▪ New Connections: No
▪ Active Connections: No
▪ Persistent Connections: No

Unavailable (Disabled)
▪ New Connections: No
▪ Active Connections: Yes
▪ Persistent Connections: No**

Unknown (Disabled)
▪ New Connections: No
▪ Active Connections: Yes
▪ Persistent Connections: Yes

Available (Forced Offline)
▪ New Connections: No
▪ Active Connections: Yes
▪ Persistent Connections: No

Offline (Forced Offline)
▪ New Connections: No
▪ Active Connections: No
▪ Persistent Connections: No

Unavailable (Forced Offline)
▪ New Connections: No
▪ Active Connections: Yes
▪ Persistent Connections: No

Unknown (Forced Offline)
▪ New Connections: No
▪ Active Connections: Yes
▪ Persistent Connections: No

** By default, for pool members, persistent connections will not be sent to the same pool member if the connection limit is reached. However, using the Override Connection Limit option in the persistence profile, you can change this behaviour so the BIG-IP system will still send traffic to the same pool member even though the connection limit has been reached. Specifying a connection limit per virtual server will limit the connection count regardless of the Override Connection Limit.

Local Traffic Summary

The Local Traffic Summary provides a useful table of all the different local traffic object types, their status and the total number of each. It uses the running configuration to obtain the list of all relevant objects and then queries their health monitor status. Here is an image of the Local Traffic Summary screen:



Local Traffic Network Map

The Local Traffic Network Map displays the same information as the Local Traffic Summary but in an alternative, visual, hierarchical form. Each virtual server, its associated Pool, each Pool Member and any assigned iRules are displayed in a hierarchical form within a dedicated box. Additional information is displayed when you hover your mouse over an object. This can for instance be the parent node of a pool member or the destination address or partition of a virtual server. Nodes are only displayed when you hold your mouse over the pool member object.

Filtering Results

One common issue with the Local Traffic Network Map arises if you have a configuration containing a large number of objects. This might result in your browser being unable to properly render the map. If this occurs, you will have to try to minimise the output by either filtering or searching. You can easily filter the information displayed in the network map by using the Status or Type dropdowns in the filter bar.



For instance, if you want to only display available pool members you could set the Status to Available and the Type to Pool Members and then click on Update Map. There is also a search function where you can search for several object attributes including name, IP address and IP address:port. This can be done with both IPv4 and IPv6 addresses. All searches are processed with an implied wildcard surrounding the search string. This means that if you search for 172 the BIG-IP is really searching for *172*. If you specifically use a wildcard in your search, the automatic wildcards are not added. This means that if you search for 172* it will only search for IP addresses that start with 172. The search results will only display objects which are associated with a virtual server. For instance, if you do not assign a default pool to a virtual server, the pool will not be displayed in the network map. In some scenarios, you do not assign a default pool but instead select a pool through an iRule assigned to the virtual server. In this case, the iRule will be displayed in the network map instead. If an object you are searching for is not displayed, you may have to use the option Search iRule Definition which will also search the content of iRules.

Verifying Object Status

There are several ways to check the status of objects. In the WebGUI they are located on the following pages:

WebGUI Page: Network Map
Location: Local Traffic > Network Map
Description: A summary of all virtual servers and the objects that are associated with them. This includes pools, pool members, nodes and iRules.

WebGUI Page: Virtual Servers
Location: Local Traffic > Virtual Servers
Description: A list of all virtual servers and their current status.

WebGUI Page: Pools
Location: Local Traffic > Pools
Description: A list of all pools and their current status.

WebGUI Page: Pool Members
Location: Local Traffic > Pools : Pool List > [pool name]
Description: A list of all pool members and their current status.

WebGUI Page: Nodes
Location: Local Traffic > Nodes
Description: A list of all nodes and their current status.


Using the CLI (tmsh) to Verify Object Status

There are several commands you can use to determine the status of an object. Here’s a complete list:

▪ Show detailed summary statistics for all related objects: tmsh show /ltm virtual detail
▪ Virtual server status: tmsh show /ltm virtual [virtual_server_name]
▪ Pool/pool member status: tmsh show /ltm pool [pool_name] and tmsh show /ltm pool [pool_name] all-properties
▪ Node status: tmsh show /ltm node [node_IP]

Monitor Status Logging

It’s often useful to log changes to the status of nodes and pool members so that you have a historical, chronological record of changes and events across your infrastructure, from the BIG-IP system’s (usually wide-ranging) point of view. This may be critical for troubleshooting purposes and may also help with analysis of performance, availability and possibly other measures.

Enabling Monitor Status Logging

1. Launch a terminal client such as PuTTY and open an SSH session to the management IP address of the BIG-IP system.
2. Log in using the credentials you have configured for your system.
3. You will either be at a Linux host shell prompt or directly in tmsh. This is indicated by the prompt in the terminal program:
   a. Linux Host: config #
   b. TMSH: (/Common)(tmos)#
4. If you are not in tmsh, enter it by typing the following command:

tmsh

5. Verify the current status of the bigd.lognodestatuschange db key by entering the following command:

list /sys db bigd.lognodestatuschange

6. If the status is disabled, enable the bigd.lognodestatuschange db key by entering the following command:

modify /sys db bigd.lognodestatuschange value enable

7. In order to save your changes, enter the following command:

save /sys config

In order to verify that the procedure worked, you can review or monitor the log file /var/log/ltm using the following commands:

To monitor the log file:

$ tail -f /var/log/ltm | grep -Ei "detected.mon|monitor.status"



To review the log file:

$ cat /var/log/ltm | grep -Ei "detected.mon|monitor.status" | more

In order to see the difference, here is example output taken when the bigd.lognodestatuschange key is disabled and when it is enabled.

With bigd.lognodestatuschange disabled:

local/bigip-1 notice mcpd[3741]: 01070638:5: Pool member 10.10.100.25:80 monitor status down.
local/bigip-1 notice mcpd[3741]: 01070727:5: Pool member 10.10.100.25:80 monitor status up.

Example output taken after enabling the key:

local/bigip-1 notice bigd[3747]: 01060001:5: Service detected DOWN for ::ffff:10.10.100.25:80 monitor http.
local/bigip-1 notice mcpd[3741]: 01070638:5: Pool member 10.10.100.25:80 monitor status down.
local/bigip-1 notice bigd[3747]: 01060001:5: Service detected UP for ::ffff:10.10.100.25:80 monitor http.
local/bigip-1 notice mcpd[3741]: 01070727:5: Pool member 10.10.100.25:80 monitor status up.

Disabling Monitor Status Logging

1. Launch a terminal client such as PuTTY and open an SSH session to the management IP address of the BIG-IP system.
2. Log in using the credentials you have configured for your system.
3. You will either be at a Linux host shell prompt or directly in tmsh. This is indicated by the prompt in the terminal program:
   a. Linux Host: config #
   b. TMSH: (/Common)(tmos)#
4. If you are not in tmsh, enter it by typing the following command:

tmsh

5. Verify the current status of the bigd.lognodestatuschange db key by entering the following command:

list /sys db bigd.lognodestatuschange

6. If the status is enabled, disable the bigd.lognodestatuschange db key by entering the following command:

modify /sys db bigd.lognodestatuschange value disable

7. In order to save your changes, enter the following command:

save /sys config

There is no impact to the BIG-IP system when enabling the database key.



Monitor Status Changes in the BIG-IP LTM Log

Logs are a great way to verify whether and when an object has been marked offline by a health monitor. You can either find this information in the WebGUI under System > Logs > Local Traffic or at the CLI by inspecting the log file /var/log/ltm. Here is some example output:

Aug 18 02:24:43 bigip1 notice mcpd[4806]: 01070638:5: Pool /Common/http_pool member /Common/Server1:80 monitor status down. [ was node down for 0hr:38mins:23sec ]

A great way to find out if an object has gone offline is to use the following command at the CLI:

$ cat /var/log/ltm | grep object_name

You could also use the more or less commands instead of cat. If the output is considerable, less lets you scroll back with the arrow keys, whereas cat and more do not. The cat command will display the content of the file /var/log/ltm and grep will filter the output to only show lines containing the object_name (or any text you specify). This provides a quick method of finding the information you need. The grep command is case sensitive; use the -i parameter to make it insensitive.

Lab Exercises: Monitors Exercise 4.1 – Configuring a Default Node Monitor Exercise Summary In this exercise, we’ll configure a default node health monitor. In this lab we’ll perform the following; ▪ ▪

Configure the Default Node Monitor Observe its behaviour



Exercise Prerequisites Before you start this lab exercise, make sure you have the following; Network access to the BIG-IP system’s management port Have one or more servers configured on the internal network that we can load balance traffic to. This should already be configured during the Building a Test Lab chapter

▪ ▪

Configuring a Default Node Monitor 1. 2.

Open up a browser session to https://192.168.1.245 and login using the admin credentials. Navigate to Local Traffic > Nodes and answer the following questions:

Local Traffic > Nodes : Node List What is the current status of the nodes? Why do they have this status? Will the BIG-IP load balance traffic to these nodes? The current status of the nodes should be Unknown. This is because we do not have a monitor assigned to them since we have not configured this in any of the previous lab exercises. The BIG-IP will load balance traffic to these nodes as you will most likely have already experienced. 3. 4.

In the Local Traffic > Nodes list, click on the node 172.16.100.1 in order to check the node’s current Health Monitor setting. Once you are on the Local Traffic > Nodes: Node List > 172.16.100.1, verify that it has the following Health Monitor setting:

Local Traffic > Nodes : Node List > 172.16.100.1 Configuration Health Monitors Node Default This is the default setting for all nodes configured on the BIG-IP system. This means that all monitors we assign to the Default Monitor will be automatically used by all nodes configured on the BIG-IP system if not specifically changed on each node. 5. 6.

7. 8.

Navigate to Local Traffic > Nodes > Default Monitor in order to assign a monitor to the default monitor. Once you are on the Local Traffic > Nodes > Default Monitor page, select icmp from the Available list and press the arrow button << in order to move it from the Available list to the Active list. When done, click Update. Navigate back to Local Traffic > Nodes and verify the current status. What has changed? How fast was the change? Now all nodes are being sent ICMP requests ("pings") in order to verify that the host is up. Do note that this will not verify the current services (HTTP and HTTPS) that are currently running on these hosts.
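If you prefer the CLI, a roughly equivalent change can be made from tmsh; this is a sketch assuming the built-in icmp monitor used in this lab:

tmsh modify ltm default-node-monitor rule icmp
tmsh list ltm default-node-monitor
tmsh save sys config

The list command confirms which monitor rule is currently assigned to all nodes that do not have a node-specific monitor.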



Exercise 4.2 – Configuring Pool Member Monitors Exercise Summary In this exercise, we’ll configure health monitors for pool members using both the default and custom-built monitors. In this lab we’ll perform the following: ▪ ▪

Configure Monitors for Pool Members. Observe the behaviour.

Exercise Prerequisites Before you start this lab exercise, make sure you have the following: Network access to the BIG-IP system’s management port. Have one or more servers configured on the internal network to which we can load balance traffic. This should already be configured during the Building a Test Lab chapter.

▪ ▪

Verifying Current Configuration Before we go ahead and configure our monitors and assign them to our pool, let’s observe the current behavior. 1. 2. 3. 4. 5. 6.

Open up a browser session to https://192.168.1.245 and login using the admin credentials. Navigate to Statistics > Module Statistics > Local Traffic and under Statistics Type select Pools. Expand the pool http_pool by clicking the + sign. Clear the statistics for the http_pool. Verify that your virtual server and pool members are working as expected by opening up a new browser session to http://10.10.1.100. Refresh the browser session 5-10 times by pressing Ctrl+F5. Review the statistics once again by clicking Refresh on the Statistics > Module Statistics > Local Traffic page. Have all three pool members received traffic?

Applying a Pool Health Monitor 1. 2. 3. 4.

5. 6. 7. 8.

Navigate to Local Traffic > Pools and verify the status of the pool http_pool. What is the current status of the pool? Its status should be Unknown. Click on the pool http_pool to enter its configuration. Once you are on the Local Traffic > Pools: Pool List > http_pool, under the Configuration section and Health Monitors, select http in the Available section and click on the << arrows in order to move it to the Active section. This will assign the default http monitor to the http_pool. Click Update to save the configuration. Navigate to Local Traffic > Pools, what is the current status of http_pool? The pool should now be Available. In the Pools: Pool List click on http_pool. Click on the Members tab, what is the current status of the pool members? All of the pool members should be Available.
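The same monitor assignment can be made from tmsh, which is useful for scripting; this is a sketch using the pool name from this lab:

tmsh modify ltm pool http_pool monitor http
tmsh show ltm pool http_pool members

The show command displays the availability of each pool member, so you can confirm that the monitor has taken effect.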



Creating a Custom Pool Monitor Since the HTTP monitor is not very effective in its default state, in the next section we’ll configure a custom monitor where we add a specific Send and Receive String. 1. 2.

Navigate to Local Traffic > Monitors and in the upper right corner click Create. Once you are on the Local Traffic > Monitors > New Monitor… page, add the following configuration:

Local Traffic > Monitors > New Monitor… General Properties Name custom_http Type HTTP Parent Monitor http Configuration Interval 5 seconds Timeout 16 seconds Send String GET /index.php\r\n Receive String Server 1 When done, click Finished 3. 4.

Navigate to Local Traffic > Pools > http_pool Once you are on the Local Traffic > Pools: Pool List > http_pool, configure the following settings:

Local Traffic > Pools : Pool List > http_pool Configuration Health Monitors custom_http When done, click Update 5. 6.

Once you have updated the configuration, click on the Members tab. What is the current status of the pool members? Why? The expected results are that pool member 172.16.100.1 will immediately go Available while pool members 172.16.100.2 and 172.16.100.3 will go Offline after 16 seconds. This is because the Receive String is currently too restrictive. The health monitor will only be considered successful if the string “Server 1” is present in the response, which is not the case for Server 2 and Server 3, causing them to fail. We need to adjust the Receive String so that it includes Server 2 and 3 as well.



7.

Navigate to Local Traffic > Monitors > custom_http and modify the following configuration:

Local Traffic > Monitors > custom_http Configuration Receive String Server [1-3] When done, click Update 8.

Navigate to Local Traffic > Pools > http_pool and click on the Members tab. What is the current status of the pool members? Was the change immediate? The expected results are that pool members 172.16.100.2 and 172.16.100.3 will immediately go Available because we changed the Receive String to also include Server 2 and Server 3. The change is immediate because as soon as the BIG-IP system receives a reply from the pool members that matches the Receive String, the pool members will be considered available.
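For reference, the custom monitor built and adjusted above could also be created in one step from tmsh; treat this as an illustrative sketch using the lab object names rather than the exact lab procedure:

tmsh create ltm monitor http custom_http defaults-from http interval 5 timeout 16 send "GET /index.php\r\n" recv "Server [1-3]"
tmsh modify ltm pool http_pool monitor custom_http

The defaults-from property corresponds to the Parent Monitor setting, and recv corresponds to the Receive String in the WebGUI.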

Exercise 4.3 – Testing the Receive Disabled String Exercise Summary In this exercise, we’ll experiment with the Receive Disabled String using the monitors we created in the previous exercise. In this lab we’ll perform the following: ▪ ▪

Experiment with the Receive Disabled String. Observe the behaviour.

Exercise Prerequisites Before you start this lab exercise, make sure you have the following: Network access to the BIG-IP system’s management port. Have one or more servers configured on the internal network to which we can load balance traffic. This should already be configured during the Building a Test Lab chapter.

▪ ▪

Configuring and Testing the Receive Disabled String 1. 2. 3. 4. 5.

Open up a browser session to https://192.168.1.245 and login using the admin credentials. Navigate to Local Traffic > Pools > http_pool and click on the Members tab. Verify the current status of the pool members. They should all be Available from the previous lab exercise. Navigate to Local Traffic > Monitors > custom_http. Once you are on the Local Traffic > Monitors > custom_http page, add the following configuration:



Local Traffic > Monitors > custom_http Configuration Receive String Server 2 Receive Disabled String Server 3 When done, click Update 6.

Navigate back to Local Traffic > Pools > http_pool and click on the Members tab. What is the current result? Why?

Expected Results The health monitor will now only mark Server 2 as available because of how the Receive String is configured. If this is not matched, the pool member will be considered offline. In our case we have also configured a Receive Disabled String. This means that if the server replies with a string that matches the Receive Disabled String, the BIG-IP will Disable the pool member instead of marking it as available. This is primarily used for maintenance purposes so that the application team themselves can take a server offline by having it send back a specific response string. The end result will be the following:

▪ 172.16.100.1:80 – Offline
▪ 172.16.100.2:80 – Online
▪ 172.16.100.3:80 – Disabled
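A hedged tmsh equivalent of the Receive String and Receive Disabled String change above would look something like this (monitor and pool names taken from this lab):

tmsh modify ltm monitor http custom_http recv "Server 2" recv-disable "Server 3"
tmsh show ltm pool http_pool members

The recv-disable property corresponds to the Receive Disabled String field in the WebGUI.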

Exercise 4.4 – Testing the Manual Resume Feature Exercise Summary In this exercise, we’ll experiment with the Manual Resume feature using the monitors we created in the previous exercise. In this lab we’ll perform the following: ▪ ▪

Experiment with the Manual Resume feature. Observe the behaviour.

Exercise Prerequisites Before you start this lab exercise, make sure you have the following: Network access to the BIG-IP system’s management port. Have one or more servers configured on the internal network to which we can load balance traffic. This should already be configured during the Building a Test Lab chapter.

▪ ▪

Configuring and Testing Manual Resume 1. 2. 3.

Open up a browser session to https://192.168.1.245 and login using the admin credentials. Navigate to Local Traffic > Monitors > custom_http. Once you are on the Local Traffic > Monitors > custom_http page, verify that it has the following configuration:



Local Traffic > Monitors > custom_http Configuration Configuration Advanced Interval 2 Timeout 7 Send String GET /index.php\r\n Receive String Server [1-3] Receive Disabled String Blank When done, click Update 4. 5.

Navigate to Local Traffic > Pools > http_pool and click on the Members tab. What is the current status of the pool members? They should all be Available. Navigate back to Local Traffic > Monitors > custom_http and change the following configuration:

Local Traffic > Monitors > custom_http Configuration Configuration Advanced Manual Resume Yes Receive String Server 1 When done, click Update 6. 7.

Navigate to Local Traffic > Pools > http_pool and click on the Members tab. What is the current status of the pool members? What happened? Navigate back to Local Traffic > Monitors > custom_http and change the following configuration:

Local Traffic > Monitors > custom_http Configuration Configuration Advanced Manual Resume Yes Receive String Server [1-3] When done, click Update 8.

Navigate to Local Traffic > Pools > http_pool and click on the Members tab. What is the current status of the pool members? What happened? 9. Re-enable pool members 172.16.100.2 and 172.16.100.3 by selecting them and clicking Enable. 10. All members will now once again receive traffic.



Expected Results When we enabled the Manual Resume feature and changed the Receive String to Server 1, pool members 2 and 3 failed. We then changed the Receive String back to its original state, Server [1-3], which works for all pool members. However, when we verify the status, pool members 2 and 3 are in an Offline (Disabled) state. This is because the Manual Resume feature prevents the pool members from going Available again without administrator intervention. Therefore, when we re-enable pool members 2 and 3, their status once again goes back to Available.
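If you want to reproduce this behaviour from the command line, the Manual Resume setting and the re-enabling of the pool members can be done in tmsh roughly as follows (a sketch using the names from this lab):

tmsh modify ltm monitor http custom_http manual-resume enabled recv "Server 1"
tmsh modify ltm monitor http custom_http recv "Server [1-3]"
tmsh modify ltm pool http_pool members modify { 172.16.100.2:80 { state user-up } 172.16.100.3:80 { state user-up } }

The last command is the CLI equivalent of selecting the members and clicking Enable in the WebGUI.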

Clean-up 1.

Navigate back to Local Traffic > Monitors > custom_http and change the following configuration:

Local Traffic > Monitors > custom_http Configuration Configuration Advanced Interval 5 Timeout 16 Manual Resume No Send String GET /index.php\r\n Receive String Server [1-3] When done, click Update

Chapter Summary ▪

The purpose of a health monitor is to make sure an application is available and delivering suitable responses to the user. This is done using one or more health monitors, each sending specific requests to pool members or nodes and then expecting a specific response within a specified time period.

The purpose of a Performance monitor is to collect and review performance information from the host it is assigned to. The BIG-IP system will use this information to make load balancing decisions.

When you configure a health monitor, you must set Interval and Timeout values. The Interval defines how often the monitor’s test will run. The Timeout defines a time window of how long the monitor will wait for a successful response to any check sent within that time window before it marks the resource as offline.

A Simple monitoring method determines simply whether a host is available or offline. There are currently three simple monitors available, which are Gateway ICMP, ICMP and TCP_ECHO.

With active monitoring, the BIG-IP generates application traffic of some kind and actively probes the host and expects a specific response back from the node or pool member. This is regulated by the Send and Receive string configured on the monitor.

The passive monitoring method is also called Inband monitoring and as the name implies, it does not send any probes or requests to a host. Instead it relies upon genuine end-system or user generated application traffic and monitors this for failures.

All monitors are divided into different types or categories based upon what they are measuring or ‘checking’. Those categories are, Address, Application, Content, Performance, Path and Service.



▪

Path Check Monitors are also referred to as Transparent Monitors and they determine if traffic can flow through a device. The path check monitor is successful when traffic can flow through, for example, routers or firewalls.

Chapter Review 1. What is the default monitor timeout value? a. b. c. d.

16 seconds 5 seconds 20 seconds 10 seconds

2. You would like to configure a monitor that all nodes use by default. What type of monitor do you configure? a. b. c. d.

Pool Monitor Node Default Member Specific Monitor Node Specific Monitor

3. What are the drawbacks of using Passive Monitoring? a. b. c. d.

It creates additional network traffic. Uses additional system resources on both the BIG-IP device and the pool members. Can be potentially slow when identifying members as offline. Cannot verify content or that services are running.

4. What type of monitor is the SNMP DCA monitor? a. b. c. d. e. f.

Address Application Content Performance Path Service

5. What type of monitor is the HTTP monitor? a. b. c. d. e. f.

Address Application Content Performance Path Service



6. By default, where does the BIG-IP system log all monitor status changes to? a. b. c. d.

/var/log/ltm /var/log/apm /var/log/messages /var/log/snmpd.log



Chapter Review: Answers 1. What is the default Monitor timeout value? a. b. c. d.

16 seconds 5 seconds 20 seconds 10 seconds

The correct answer is: a F5 recommends these settings because you want to be as sure as possible that the resource is offline. Using these values, the BIG-IP system will send up to four checks before marking the host as offline, which means that you can be reasonably confident that the resource actually is offline. 2. You would like to configure a monitor that all nodes use by default. What type of monitor do you configure? a. b. c. d.

Pool Monitor Node Default Monitor Member Specific Monitor Node Specific Monitor

The correct answer is: b You can apply monitors to all nodes when they are created using what is known as the Default Monitor which is configured under Local Traffic > Nodes > Default Monitor. The monitor you assign to the Default Monitor will be automatically assigned to all nodes configured on the BIG-IP system which currently do not have a Node Specific monitor. The most commonly used Default Monitor is the ICMP monitor. 3. What are the drawbacks of using Passive Monitoring? a. b. c. d.

It creates additional network traffic. Uses additional system resources on both the BIG-IP device and the pool members. Can be potentially slow when identifying members as offline. Cannot verify content or that services are running.

The correct answer is: d



4. What type of monitor is the SNMP DCA monitor? a. b. c. d. e. f.

Address Application Content Performance Path Service

The correct answer is: d The SNMP DCA is an example of a Performance Check Monitor. The SNMP DCA monitor retrieves information by fetching performance data from a server running an SNMP agent. 5. What type of monitor is the HTTP monitor? a. b. c. d. e. f.

Address Application Content Performance Path Service

The correct answer is: c The HTTP monitor is an example of a pre-configured Content Check monitor. However, to make it a true content check monitor you will need to configure a receive string. Otherwise the BIG-IP system will simply send an HTTP request and expect any reply. 6. By default, where does the BIG-IP system log all monitor status changes to? a. b. c. d.

/var/log/ltm /var/log/apm /var/log/messages /var/log/snmpd.log

The correct answer is: a By default, the BIG-IP system logs all monitor status changes to the /var/log/ltm log file.



8. Profiles Profiles are configuration objects that allow you to define and control how the system processes different types of traffic, protocols and applications. Profiles can then be applied to one or more virtual servers that will process this traffic. Profiles can be extremely powerful and are the primary mechanism used to assign advanced functionality beyond ‘basic’ load balancing. They are assigned to one or more virtual servers and provide the ability to;

▪ Understand and parse network and application level protocols
▪ Manipulate the behaviour of or modify protocols
▪ Improve throughput and application performance
▪ Offload intensive activities from real servers
▪ Enable persistence and authentication

Why Use Them? Taking HTTP as an example; if you want to examine and manipulate HTTP headers you must assign an HTTP profile to the virtual server in question. The profile provides the intelligence required for the virtual server to inspect and understand the application layer protocol contained within the traffic it handles. Since the BIG-IP device is a full application proxy, this enables us to safely (and transparently) manipulate and modify the traffic in both directions (both client side and server side). FTP provides another example. Let’s say that you set up an FTP server which your clients can access and then, between the two, you install a BIG-IP device. The FTP server is configured to use active FTP, and the virtual server listens on port 21. The client uses this port to initiate command and control communications. When the client has established a connection to the BIG-IP device, the BIG-IP creates another, separate connection to the FTP server. Since active mode is being used, the FTP server will then attempt to initiate a separate data connection towards the client on port 20. This data connection towards the client will naturally pass back through the BIG-IP device. Since it is a default deny device, the connection will be dropped. The BIG-IP device does not expect to see a connection attempt from the FTP server on port 20. However, if we assign an FTP profile (with appropriate settings) to the virtual server, then it becomes ‘aware’ of how active FTP works. With the profile assigned, the FTP server’s connection is expected and thus accepted through the BIG-IP device.
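To illustrate, a virtual server for the FTP scenario above could be defined from tmsh roughly as follows; the virtual server name, destination address and pool name are made-up example values:

tmsh create ltm virtual vs_ftp destination 10.10.1.102:21 ip-protocol tcp profiles add { tcp ftp } pool ftp_pool

With the ftp profile assigned, the BIG-IP system tracks the control channel on port 21 and allows the related active-mode data connection through.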

Profile Types There are several profile types, roughly grouped by their general functionality. There are many pre-configured profiles included with a BIG-IP device but you can also create your own custom ones. Custom profiles are created using one of the pre-configured profiles as a template. The profile types are listed next.



Protocol Profiles All virtual servers must have at least one protocol profile assigned, and these profiles typically specify low level parameters such as timeouts and connection management attributes. The BIG-IP device will automatically assign a default client-side and server-side protocol profile depending on the primary protocol that is assigned to a virtual server. Protocol profiles include:

▪ Fast L4
▪ Fast HTTP
▪ SCTP
▪ TCP
▪ tcp-lan-optimized
▪ tcp-mobile-optimized
▪ tcp-wan-optimized
▪ UDP

Persistence Profiles We have not covered persistence yet, but these profiles ensure that client traffic is sent to the same pool member throughout a session. Persistence profiles include;

▪ Source Address
▪ Cookie
▪ SSL
▪ Hash
▪ Microsoft RDP
▪ Destination Address
▪ SIP
▪ Universal

SSL Profiles In order to intercept SSL traffic, you need to assign SSL profiles to a virtual server. The SSL profiles intelligently control the SSL traffic which enables you to either offload the SSL session or perform end-to-end encryption on the BIG-IP device. We’ll discuss this in greater detail soon in the SSL section. The available SSL profiles are:

▪ ClientSSL
▪ ServerSSL

Application (Services) Profiles These profiles control traffic at the application layer. For instance, they can enable the BIG-IP device to read cookies within HTTP traffic or insert HTTP headers. They might also enable content compression. Application profiles include:

▪ HTTP
▪ HTTPS
▪ FTP
▪ DNS
▪ SIP
▪ Diameter
▪ RADIUS
▪ RTSP
▪ XML
▪ SPDY

Remote Server Authentication Profiles This profile type enables a BIG-IP device to authenticate client traffic using underlying authentication technology (Pluggable Authentication Modules - PAM). Authentication profiles include:

▪ LDAP
▪ RADIUS
▪ TACACS+
▪ SSL OCSP
▪ SSL/LDAP
▪ CRLDP
▪ Kerberos Delegation
▪ XMP

Analytics Profile These are used to collect statistics from virtual servers. We’ll discuss analytics in greater detail later in the book. There is currently only one available analytics profile:

▪ Analytics

Other Profiles Special purpose profiles are used for more unique, uncommon, less easily categorised technologies. Some examples of these profiles are;

▪ OneConnect
▪ Stream
▪ Request Logging
▪ NTLM
▪ Statistics
▪ DNS Logging

Profile Dependencies Before assigning profiles to virtual servers, you need to understand the dependencies between them. This is because profiles can both benefit and contradict each other. Some profiles are dependent on others, some cannot be used together. There are a few rules that you need to keep in mind:

▪ The profiles used at the higher layers of the OSI model are often dependent on the profiles that operate at layers beneath them.
▪ Profiles that operate on the same layer of the OSI model are very often exclusive to that layer and cannot co-exist on the same virtual server.
▪ All virtual servers have a protocol profile assigned to them, for instance a TCP profile.



Using the example shown in the following diagram, this virtual server is configured to use the TCP profile at the Transport Layer. Since the UDP protocol also operates at the Transport Layer it is not possible for it to co-exist with the TCP profile; only one of the two can be assigned to the virtual server. Equally, since the HTTP and SIP profiles both operate at the application layer, they cannot co-exist. Therefore, in our example we have selected only the HTTP profile. Some applications might run over both TCP and UDP and in these cases, you must create two separate virtual servers. One configured with a TCP profile and one with a UDP profile.

As we mentioned earlier, some profiles are dependent on each other. Again, in our example, we have selected cookie persistence. Cookie persistence is part of the HTTP protocol and is therefore dependent on the HTTP profile. If we did not configure our virtual server to use an HTTP profile, it would not know what a cookie is. Also, since the HTTP profile operates using the TCP protocol we also need to configure our virtual server with a TCP profile. This is all summarised in our example below:

Depending on the application, the profiles you choose can be very different. In the next example, we intend to create a SIP virtual server. SIP usually operates using the UDP protocol, therefore we choose the UDP profile. Since we have selected this profile, we cannot use the TCP profile because they cannot co-exist. On the application layer, we assign the SIP profile to the virtual server. Again, since HTTP and SIP operate on the same layer they cannot co-exist. You will find our example below:



In summary, all of the profiles that are assigned to a virtual server define how it will handle the traffic it receives. The different profiles provide (and assign) the intelligence of the BIG-IP device, enabling a variety of different features.



Default and Custom Profiles Like monitors, a BIG-IP device comes with many default profiles, which you can use as they are. The default profiles are stored in the /config/profiles_base.conf file and should never be deleted. If you need to change any of the settings in a default profile, you can and should create a custom profile which inherits its configuration from the default profile. You should never modify a default profile! In some scenarios, you might want to change a setting in a custom profile but it is presently in use by several other virtual servers. In this case you will need to create a second unique custom profile based on the parent profile. This second profile inherits the configuration of the parent. After you have created the new profile, change whatever settings you desire. That way you don’t affect the old profile but still have the ability to make custom settings for that particular profile. As you can see in the following diagram, profiles are linked together in a parent-child hierarchy.

Examining the following example, the child profile contains a check box on the right of each configurable setting. Whenever you check that checkbox, that configuration is now considered to be customised. This means that if you change this value on the child profile, it will not affect the parent profile. Equally, a change on the parent profile will not affect the child for this particular setting.



Settings that do not have a check box activated will be inherited by the child from the parent. In the following example, we have created two custom cookie persistence profiles, one (the parent) configured to use the Cookie Name chocolatechip and the child profile configured to use the Cookie Name macadamian. Since this setting in both profiles has a check box activated, these settings are ‘locked’ for these profiles and will not affect each other. However, the Expiration setting does not have its checkbox activated. This means that if you change this value in the custom_cookie_1 profile, then this setting will be inherited by the custom_cookie_2 profile.



If you check the Custom checkbox at the top of the profile, all of the settings within that child become customisable. This means that even if you leave the values blank in the child profile, it will not inherit any settings from its parent.

Creating a Custom Profile When you create a custom profile, you must give it a name and also choose a ‘source’ profile that will act as its parent. The parent can be a default profile, or it can be an existing custom profile. The profile you create will become a child of the profile that you choose. It is very important when you modify a parent profile to keep in mind that its settings may be inherited by any child profiles, throughout their existence (unless a setting has been specifically changed in the child). Do not make any changes to a parent profile unless you fully understand which child profiles and virtual servers you might affect.
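As a quick sketch, a custom profile and its parent relationship can also be created and inspected from tmsh; the profile name used here is an arbitrary example:

tmsh create ltm profile http custom_http_profile defaults-from http
tmsh list ltm profile http custom_http_profile

The defaults-from property is the tmsh equivalent of the Parent Profile setting in the WebGUI.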

Deleting a Custom Profile When you want to delete a custom profile, you must first ensure that it is not a parent to any other profile. As long as a profile is a parent to another profile, it cannot be deleted.

Assigning Profiles to a Virtual Server Once you have created and/or selected the profiles you need for a virtual server, you can then assign them to it, and whenever the virtual server receives the expected type of traffic, it will be able to understand and interpret that traffic and deal with it appropriately. Since some applications use multiple protocols, you might have to assign multiple profiles to the virtual server. For instance, if you are creating an HTTPS virtual server that will handle SSL termination, it will need three profiles in total.



We’ll discuss SSL/TLS in greater detail later on in this book.

It will need a TCP profile, an HTTP profile and an SSL profile. You can use the WebGUI or tmsh to assign the profiles to the virtual server. Using the WebGUI, the profiles can be assigned from the different protocol sections under the Configuration section. This is shown below:

Some profiles are hidden within the Advanced configuration tab. The Content Rewrite and Acceleration profiles have their own sections further down the configuration page. Persistence profiles are assigned through the Resources tab on the virtual server page.
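The same assignment can be performed from tmsh; for an SSL-terminating HTTPS virtual server it could look roughly like this (the virtual server name is an example only):

tmsh modify ltm virtual vs_https profiles add { tcp http clientssl }
tmsh list ltm virtual vs_https profiles

The list command shows which profiles are currently assigned, so you can confirm the change.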



Lab Exercises: Profiles Exercise 5.1 – Compression Exercise Summary In this exercise, we’ll configure a new HTTP compression profile and assign it to a virtual server and then observe its behaviour. In this lab, we’ll perform the following: ▪ ▪ ▪

Create a custom HTTP compression profile Assign the profile to a virtual server Observe the behaviour

Exercise Prerequisites Before you start this lab exercise, make sure you have the following: ▪ ▪

Network access to the BIG-IP system’s management port. One or more servers configured on the internal network that serve HTTP content that can be compressed. This should already have been configured during the Building a Test Lab chapter.



Testing Without Compression In this lab, we’ll only be using pool member 172.16.100.1 as this is the only one containing content that can be compressed. 1. 2.

Open up a browser session to https://192.168.1.245/ and login using the admin credentials. Navigate to Local Traffic > Pools and create a new pool containing the following configuration:

Local Traffic > Pools : Pool List > New Pool... Configuration Name compression_pool Load Balancing Method Round Robin New Members Click on the Node List button and use the pull-down menu and select the following members: Address: 172.16.100.1 Service Port: 80 Click Add When done, click Finished 3.

Navigate to Local Traffic > Virtual Servers and create a new virtual server containing the following configuration:

Local Traffic > Virtual Servers: Virtual Server List > New Virtual Server… General Properties Name vs_compression Type Standard Destination 10.10.1.101 Service Port 80 or select HTTP Resources Default Pool compression_pool When done, click Finished 4. 5. 6. 7.

Open up a browser session to: http://10.10.1.101/compress.html. Do note that this is case sensitive. Navigate to Statistics > Module Statistics > Local Traffic and select Statistics Type: Virtual Servers. What is the value for the outbound data (Bits Out)? Record the value: ______ Clear the statistics by selecting vs_compression and click Reset.

Creating a Compression Profile 1. 2.

Navigate to Local Traffic > Profiles > Services > HTTP Compression and in the upper right corner press Create. On the Local Traffic > Profiles > Services > HTTP Compression > New HTTP Compression Profile… page, add the following configuration:



Local Traffic > Profiles > Services > HTTP Compression > New HTTP Compression Profile… General Properties Name my_compression_profile Parent Profile httpcompression When done, click Finished 3.

Navigate to Local Traffic > Virtual Servers > vs_compression and assign the HTTP Compression Profile to the virtual server. Notice that the HTTP compression profile option is greyed out. Why? This is because of a profile dependency, as we discussed previously. In order for the virtual server to compress HTTP content, it has to understand HTTP. Since we do not have an HTTP profile assigned to the virtual server, it currently cannot.

4.

Assign the HTTP compression profile by adding the following configuration to the virtual server:

Local Traffic > Virtual Servers: Virtual Server List > vs_compression Configuration HTTP Profile http HTTP Compression Profile my_compression_profile When done, click Update 5. 6. 7. 8.

Again, open up a browser session to: http://10.10.1.101/compress.html. Do note that this is case sensitive. Navigate to Statistics > Module Statistics > Local Traffic and select Statistics Type: Virtual Servers. What are the results for the outbound data now? (Bits Out?): ______ As you can see, the outbound data value is much lower when the compression profile is assigned to the virtual server.
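For reference, the profile created and assigned in this exercise has a tmsh equivalent along these lines (a sketch using the lab object names):

tmsh create ltm profile http-compression my_compression_profile defaults-from httpcompression
tmsh modify ltm virtual vs_compression profiles add { http my_compression_profile }

Note that the http profile is added alongside the compression profile, which resolves the dependency we ran into in the WebGUI.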

Exercise 5.2 – Web Acceleration Exercise Summary In this exercise, we’ll experiment with caching and the streaming options available on the BIG-IP system using a web acceleration profile. In this lab, we’ll perform the following: ▪ ▪ ▪

Create a custom HTTP Acceleration Profile. Assign the profile to a virtual server Observe the behaviour.



Exercise Prerequisites Before you start this lab exercise, make sure you have the following: Network access to the BIG-IP system’s management port. One or more servers configured on the internal network that serve HTTP content. This should have already been configured during the Building a Test Lab chapter. A terminal client such as PuTTY. The virtual server vs_http has the following configuration:

▪ ▪ ▪ ▪

Local Traffic > Virtual Servers: Virtual Server List > vs_http Configuration HTTP Profile Blank Resources Default Pool http_pool When done, click Update ▪ ▪

Clear the statistics for the virtual server vs_http Clear the statistics for the pool http_pool

Testing Without Web Acceleration 1. 2. 3. 4. 5.

Open up a browser session to https://192.168.1.245/ and login using the admin credentials. Navigate to Statistics > Module Statistics > Local Traffic > Statistics Type: Virtual Servers and clear the statistics: Change the Statistics Type to Pools and clear the statistics. Open up a browser session to http://10.10.1.100/ and refresh the page 5-10 times. Navigate back to Statistics > Module Statistics > Local Traffic > Statistics Type: Virtual Servers and record the following:

Virtual Server: vs_http
Result: Connections: Maximum:    Connections: Total:

6.

Change Statistics Type to Pools and record the following:

Pool/Member: http_pool
Result: Connections: Maximum:    Connections: Total:
Pool/Member: 172.16.100.1:80
Result: Connections: Maximum:    Connections: Total:
Pool/Member: 172.16.100.2:80
Result: Connections: Maximum:    Connections: Total:
Pool/Member: 172.16.100.3:80
Result: Connections: Maximum:    Connections: Total:


Creating a Web Acceleration Profile 1. 2.

Navigate to Local Traffic > Profiles > Services > Web Acceleration and in the upper right corner press Create. On the Local Traffic > Profiles > Services > Web Acceleration > New Web Acceleration Profile… page, add the following configuration:

Local Traffic > Profiles > Services > Web Acceleration > New Web Acceleration Profile… General Properties Name my_acceleration_profile Parent Profile webacceleration When done, click Finished 3.

4. 5. 6. 7. 8. 9.

Navigate to Local Traffic > Virtual Servers > vs_http and assign the newly created my_acceleration_profile to the virtual server by selecting it in the Web Acceleration Profile list. Do not forget to assign the virtual server an HTTP profile. Otherwise the Web Acceleration Profile section will be greyed out. When the Web Acceleration Profile has been assigned, click Update. Again, navigate to the Statistics > Module Statistics > Local Traffic and clear the statistics for the Virtual Servers and the Pools. Open up a new browser session to http://10.10.1.100/ and refresh the page 5-10 times. Review the statistics once again and compare it with the earlier results. Is there a difference? Launch a terminal client such as PuTTY and SSH to 192.168.1.245 on port 22. Log on using the account root and the password f5training. View the cache statistics by entering the following command:

[root@bigip1:Active:Standalone] config # tmsh show /ltm profile ramcache all 10. Answer the following questions:

Question How many URI’s are stored in the cache? What are the sizes of the URI’s that are stored in the cache? What type of files (eg. *.gif, *.jpg) are stored in the cache?

Answer

11. Clear the cache by entering the following command:

[root@bigip1:Active:Standalone] config # tmsh delete /ltm profile ramcache my_acceleration_profile Modifying the Web Acceleration Profile The Web Acceleration Profile can be modified in order to exclude certain objects or not cache objects of a certain size. This way, you can configure the web acceleration profile to perfectly fulfil your needs. 1.

Navigate to Local Traffic > Profiles > Services > Web Acceleration > my_acceleration_profile and change the following settings:



Local Traffic > Profiles : Services : Web Acceleration: my_acceleration_profile Cache Settings Minimum Object Size 10000 bytes URI Caching URI Lists‌ URI: /*.png Exclude When done, click Update When you are done, it should look like the following picture:

The options will be greyed out until you select the Custom checkbox for each configuration entry. 2. 3. 4. 5.

Clear the statistics once again for the virtual server vs_http and the pool http_pool. Open up a new browser session to http://10.10.1.100/ and refresh the page 5-10 times. Review the statistics once again and compare them to the previous result. Is there any difference? View the cache once again using the tmsh command:

[root@bigip1:Active:Standalone] config # tmsh show /ltm profile ramcache all



6.

Answer the following questions:

Question How many URI’s are stored in the cache? What are the sizes of the URI’s that are stored in the cache? What type of files (eg. *.gif, *.jpg) are stored in the cache?

Answer

Is there any difference?
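If you would rather make this change from the command line, roughly equivalent web-acceleration settings can be applied in tmsh; verify the exact property names against the tmsh reference for your TMOS version, and note that the object names come from this lab:

tmsh modify ltm profile web-acceleration my_acceleration_profile cache-object-min-size 10000 cache-uri-exclude { "/*.png" }
tmsh list ltm profile web-acceleration my_acceleration_profile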

Clean-Up ▪

Remove the my_acceleration_profile profile from the virtual server vs_http.

Exercise 5.3 – Stream Profile Exercise Summary In this exercise, we’ll experiment with the Stream profile and modify the data being sent from the pool members before forwarding it to the client. In this lab, we’ll perform the following: ▪ ▪ ▪

Create a custom Stream profile Assign the profile to a virtual server Observe the behaviour

Exercise Prerequisites Before you start this lab exercise, make sure you have the following: Network access to the BIG-IP system’s management port. One or more servers configured on the internal network that serve HTTP content. This should have already been configured during the Building a Test Lab chapter.

▪ ▪

Creating the Stream Profile 1. 2. 3.

Open up a browser session to https://192.168.1.245 and login using the admin credentials. Navigate to Local Traffic > Profiles > Other > Stream and in the upper left corner click on Create. On the Local Traffic > Profiles > Other > Stream > New Stream Profile… page, add the following configuration:

Local Traffic > Profiles > Other > Stream > New Stream Profile… General Properties Name my_stream_profile Parent Profile stream Settings Source Server Target Node When done, click Finished This will cause the BIG-IP system to change Server to Node every time it finds it in a response, before passing it along to the client.



4.

Navigate to Local Traffic > Virtual Servers > vs_http and change the configuration to the following:

Local Traffic > Virtual Servers: Virtual Server List > vs_http Configuration Configuration Advanced HTTP Profile http Stream Profile my_stream_profile When done, click Update 5. 6.

Verify the configuration by opening up a browser session to http://10.10.1.100/. Notice that instead of stating Server 1-3 it now states Node 1-3. Refresh the page a couple of times using Ctrl+F5 to make sure that all pool members are affected by the change.
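A rough tmsh equivalent of this stream profile and its assignment would be the following (lab object names assumed):

tmsh create ltm profile stream my_stream_profile defaults-from stream source Server target Node
tmsh modify ltm virtual vs_http profiles add { http my_stream_profile }

The source and target properties correspond to the Source and Target fields in the WebGUI profile page.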

Clean-Up 1.

Navigate to the Local Traffic > Virtual Servers > vs_http and remove the following profiles: ▪ my_stream_profile ▪ http profile

Chapter Summary ▪

Profiles are configuration objects that allow you to define and control how the system processes different types of traffic, protocols and applications. Profiles can be extremely powerful and are the primary mechanism used to assign advanced functionality beyond ‘basic’ load balancing. They are assigned to one or more virtual servers.

The profiles used at a specific layer of the OSI model are often dependent on the profiles that operate at layers beneath it.

Profiles that operate on the same layer of the OSI model layer are very often exclusive to that layer and cannot co-exist on the same virtual server.

All virtual servers have a protocol profile assigned to them, for instance, a TCP profile.

Some profiles are dependent on each other. For instance, the Cookie persistence profile is part of the HTTP protocol and is therefore dependent on the HTTP profile. If we did not configure our virtual server to use an HTTP profile, it would not know what a cookie is. Also, since the HTTP profile operates using the TCP protocol we also need to configure our virtual server with a TCP profile

You should never modify a default profile!



Chapter Review 1. Which of the following profiles is an example of a Protocol Profile? a. b. c. d.

TCP Profile FTP Profile HTTP Profile DNS Profile

2. Which of the following profiles is an example of a Persistence Profile a. b. c. d.

UDP Profile RADIUS Profile Cookie Profile SPDY Profile

3. When trying to apply a Cookie Persistence Profile to a Virtual Server, you are prompted with an error. What could be the problem? a. b. c. d.

The virtual server is missing an iRule. You have not assigned a Default Pool. The pool members in the pool do not support Cookie Persistence. The virtual server does not have an HTTP profile assigned.





Chapter Review: Answers 1. Which of the following profiles is an example of a Protocol Profile? a. b. c. d.

TCP Profile FTP Profile HTTP Profile DNS Profile

The correct answer is: a 2. Which of the following profiles is an example of a Persistence Profile a. b. c. d.

UDP Profile RADIUS Profile Cookie Profile SPDY Profile

The correct answer is: c 3. When trying to apply a Cookie Persistence Profile to a Virtual Server, you are prompted with an error. What could be the problem? a. b. c. d.

The virtual server is missing an iRule. You have not assigned a Default Pool. The pool members in the pool do not support Cookie Persistence. The virtual server does not have an HTTP profile assigned.

The correct answer is: d Some profiles are dependent on each other. The Cookie Persistence Profile persists connections based on the Cookie HTTP header inside an HTTP request. In order for the BIG-IP to read the Cookie HTTP header it needs to understand HTTP. Therefore, you need to assign an HTTP Profile to the virtual server.



9. Persistence Concept of Stateless and Stateful Applications In the world of networking, the concepts of stateless and stateful communications are often used. In stateless communication, there are no records of previous interactions between the client and the server, and traffic is not monitored at all. It is simply a request and response type of traffic flow where each request/response pair is unrelated to the other. However, in stateful communication, both the client and the server interact with each other and keep track of the current state. Originally, HTTP was designed to be a stateless protocol where the client would request and receive web pages from a web server. Much has changed since HTTP was first designed. Using an ordinary shopping site, the user will be able to view pages, add items to their shopping cart and pay for those items using a checkout process. All of these interactions are considered stateful in terms of the end-server containing information of what actions the user has already performed, and therefore, it is very important to make sure the client’s requests always end up at the same end-server.

Sessions Just to ensure clarity, let’s define what a session is. An Application Session is the communication channel between two hosts, used to exchange information and complete transactions of some kind. It can be comprised of one or more underlying TCP connections between the client and server (virtual or real). A session is typically stateful, with various parameters and variables (including unique IDs and authentication information) assigned and valid only for a particular session. When a client connects to a web server for the first time, a session is created on the server. The developers that created the website can use the session to store application data such as shopping cart number, customer ID or even information on how the site should be presented. Using only one server the client will always end up on the same server and the session data relevant to that client will be available. The scenario changes rapidly when using a pool of servers.

Stateful Communication With Load Balancing One of the core features of the BIG-IP LTM is its load balancing capabilities. Instead of having one single server to handle the request we can have multiple identical servers that can handle the requests. What happens to traffic using this scenario? When a client sends a request, it will be handled by the virtual server configured on the BIG-IP system. It will receive the request and establish a TCP 3-Way handshake with the client and then select a pool member based on the load balancing algorithm configured. At first, the application will most likely function as it should, but when the client sends a new request to the BIG-IP system it might not end up on the same pool member. If a specific configuration setting has not been applied to the virtual server, the requests will go through the same load balancing algorithm and a new pool member will be chosen. This means that the session data that was created on the first pool member is no longer available as the client ended up on a new server. If the application in question is a shopping site, the client would lose its shopping cart among other session data.



In this scenario the desired behaviour is to have all client requests end up on the same server that initially handled the first request.

What is Persistence? Persistence, also known as stickiness, affinity, or session persistence, is used to direct additional requests and connections from a client to a virtual server to the same real server as the initial (first) connection. This ensures that any state information that is stored only on that real server will be available to the client. The types of traffic or applications with which persistence is generally used include; web applications, SIP & other voice technologies and Remote Access. In most cases, any protocol or application that requires authentication that is performed by the real server and not shared between all real servers will require persistence. Persistence is configured on the virtual server using a persistence profile. The persistence profile tells the BIG-IP system to send requests to the same initial pool member based on the information it received through one of the many persistence methods.

Persistence only applies after the first load balancing decision is made.
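As a simple illustration, a persistence profile is attached to a virtual server in tmsh as follows, using the built-in source_addr profile as an example and the virtual server name from the earlier labs:

tmsh modify ltm virtual vs_http persist replace-all-with { source_addr }
tmsh list ltm virtual vs_http persist

The list command confirms which persistence profile is currently applied to the virtual server.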

Persistence Methods There are a fair number of persistence methods available including:

▪ Source Address (aka Simple)
▪ Cookie
▪ Destination Address
▪ Hash
▪ Microsoft Remote Desktop Protocol (RDP)
▪ SIP
▪ SSL
▪ Universal

Source Address (aka Simple) Persistence This method persists connections based on the source IP address or a range of source addresses, which allows all connections from a particular IP subnet to be persisted together to the same real server. When a client connects to a virtual server on the BIG-IP system, it will check its persistence table to see if the client’s IP address matches an IP address or IP address range against an existing persistence record. If there is a match, the BIG-IP system will direct the traffic to the pool member specified in the persistence record. This is illustrated in the diagram below:



If there are no existing persistence records that match the client’s IP address, the connection will go through the configured load balancing algorithm and a pool member will be selected. Once the member has been selected, a persistence record will be created. The persistence record contains the following information:

▪ Persistence Value – The IP address or range for which the persistence applies.
▪ Persistence Mode – The persistence method being used, in this case, Source Address.
▪ Virtual Server – The virtual server the persistence record applies to.
▪ Pool – The name of the pool that the pool member is in.
▪ Pool Member – The IP address and port of the pool member that will receive the traffic.
▪ Age – How long the persistence record has existed, measured in seconds.



Traffic will be directed to the same pool member unless:

▪ The persistence record times out
▪ The pool member or node fails a health monitor
▪ The pool member or node is in a Forced Offline state

Persistence Record Idle Timeout In order to preserve memory and CPU resources, local persistence records have an idle-timeout. If a record has not been used (looked up) within the idle-timeout period, it is removed from the persistence table. Should this occur, the next connection or request seen from the client (if there is one) will be load balanced normally, potentially to a different pool member. A new persistence record will then be created. The default idle-timeout (for methods that store entries in the persistence table) is typically 180s or 300s, depending on the method. This value can be set to Indefinite if need be, but this is not recommended as the persistence table can then only grow, never shrink. It is good practice to set the idle-timeout to a value at least equal to that of the underlying layer four protocol being used; this is 300s for TCP and 60s for UDP by default. Session closure events (sessions may be composed of more than one connection) can, of course, also remove entries from the persistence table. When the persistence record is created, the age value will start to count upwards to the configured timeout value and once it reaches the timeout it will be removed from the persistence table. But as long as new requests that match the persistence record are received on the virtual server, the age value will be reset and returned to 0.
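You can inspect (and, if necessary, clear) the persistence table from tmsh, which makes the idle-timeout behaviour easy to observe:

tmsh show ltm persistence persist-records
tmsh delete ltm persistence persist-records

The show command lists the current records, including their mode, virtual server, pool member and age; the delete command clears them.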

Using the Mask Setting When using the default source address affinity persistence profile, it will create a new persistence record for each new IP address, based on a 32-bit mask (255.255.255.255). This means that IP addresses 212.113.45.19 and 212.113.45.20 will each have their own persistence record. This may have a performance impact on your BIG-IP system and you could perhaps consider using a mask that has a wider range of matches. For instance, if you modify the Mask setting to 255.255.255.0 then the two IP addresses would match the mask and only one persistence record would be created. The persistence record value would then look like this: 212.113.45.0. Limitations of Source Address Affinity Persistence There are both benefits and drawbacks to using Source Address Affinity Persistence.



This method is strong but has a very simplistic nature. It works at layer 3, which means it is not dependent on application data to create persistence records, such as cookie persistence. As long as the client source IP address does not change during the persistence record’s timeout period, traffic will be directed to the same pool member. If the pool member or node goes offline, then traffic will be load balanced to a different pool member. Even though Source Address persistence is simple and effective, it does have its downsides. One common problem that you will most likely run into is when multiple clients connect to a virtual server through a proxy or a device that has the ability to NAT traffic. Then all of the requests will appear to come from the same source IP address and thus create a single persistence record for all of the clients. This may create a very uneven load on your pool members. This is illustrated in the following diagram:



In this example, we can see that in total there are 8 different clients using a site; however, only two persistence records are created. This is because all of the traffic from the clients is translated into either the IP address 82.14.213.15 or 217.108.24.12, thus only creating two persistence records.
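To experiment with the Mask and timeout settings described above, you could create a custom source address persistence profile along these lines; the profile name and values are examples only:

tmsh create ltm persistence source-addr persist_class_c defaults-from source_addr mask 255.255.255.0 timeout 300
tmsh modify ltm virtual vs_http persist replace-all-with { persist_class_c }

With the /24 mask, all clients within the same class C network share a single persistence record.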

Cookie Persistence A persistence method that does work at the application layer is cookie persistence. This method is used for a very specific application layer protocol, namely HTTP. It uses a cookie stored on the client (which is sent with every relevant request) to identify to which pool member it has previously connected. For HTTP based traffic, this is the preferred persistence method as it overcomes the potential downfalls of Source Address and SSL persistence (more on that later). This method can work in a number of ways but all of them rely on the inspection of an HTTP Cookie. Cookie Persistence can only be used with a Standard or Performance (HTTP) Virtual Server, as the assignment of an HTTP Profile is required. Unfortunately, this method can’t be used with HTML5 WebSockets as HTTP headers are not used; however, SPDY and HTTP/2.0 do support this method. A cookie is sent to a client using a Set-Cookie HTTP header like this:

Set-Cookie: Some_Name=Some_Value; expires=Mon, 21-Aug-2017 10:56:58 GMT; path=/; domain=example.com; HttpOnly The first part of the header’s contents is one or more key value pairs, in this case, the key is Some_Name and the value is Some_Value. When we refer to the name of the cookie, we are actually referring to name of the key, in this case Some_Name. The expires, path, domain and HttpOnly parameters provide information to the client about how the cookie should be handled and are all optional. The client then returns the cookie using a Cookie HTTP header like this (without the parameters):

Cookie: Some_Name=Some_Value Currently, there are four different types of Cookie persistence methods; these are described next. Cookie Insert The BIG-IP inserts a Set-Cookie: header into the first HTTP response from the pool member and does not insert it again until the cookie has expired. When the cookie has expired, and the client sends a new HTTP request, the BIG-IP will load balance the client to a new pool member and generate a new cookie that will be included in the HTTP response. This is the default behaviour; if you need to configure the BIG-IP to always send a cookie, you can do so by enabling the Always Send Cookie setting which is described in the following section. The cookie can be configured to use a name of your choosing and an encoded value. Cookie Insert is the default type for the Cookie persistence method. If you do not specify a name, one will be automatically generated based on the name of the pool assigned to the virtual server, as follows: BIGipServerpool_name.
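A custom Cookie Insert profile using a cookie name of your own choosing could be created in tmsh roughly as follows; the profile and cookie names are illustrative only:

tmsh create ltm persistence cookie my_cookie_insert defaults-from cookie method insert cookie-name MYSITE_PERSIST
tmsh modify ltm virtual vs_http persist replace-all-with { my_cookie_insert }

The method insert property corresponds to the Cookie Method setting in the WebGUI.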





Always Send Cookie As mentioned earlier, the default behaviour of the Cookie Insert Method is to insert the cookie into only the initial response. This behaviour is controlled by the Always Send Cookie setting which is disabled by default. Once a web browser has successfully received a cookie, there is really no need for the BIG-IP system to resend the cookie unless you have configured a specific timeout value for the cookie. By default, the Expiration setting for an inserted cookie is set to Session Cookie which means that the cookie will expire once the web browser (or tab) has been closed. Since there is no need to update a cookie that never expires, and the persisted pool member value will always stay the same (under normal conditions), resending the cookie will create additional and unnecessary traffic. However, if an Expiration is configured for the cookie and you want the expiration time to be updated with each client request, then the Always Send Cookie setting should be enabled. This means that for every new request the client makes, a new cookie with an updated expiration value is sent in every response. If the Always Send Cookie setting is disabled, the client has a limited window within which persistence is available and once the cookie expires, the client could end up on a different pool member and a new cookie is received with a new expiration time. Cookie Rewrite Method The Cookie Rewrite Method intercepts a Set-Cookie header that is named BIGipCookie, as follows:

Set-Cookie: BIGipCookie This cookie is sent from the end-server (which must be configured to send it) to the client but it is intercepted by the BIG-IP system which then overwrites the cookie name and its value. The cookie will be transformed into this:

Set-Cookie: BIGipServerpool_name=1677787402.36895
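As a rough tmsh sketch (names are illustrative and option syntax may differ slightly between TMOS versions), a Cookie Rewrite profile that also updates the cookie's expiration on every response could look like this:

# Cookie Rewrite profile; the back-end server must send the BIGipCookie Set-Cookie header
tmsh create ltm persistence cookie Custom_Cookie_Rewrite defaults-from cookie method rewrite

# Optional: give the cookie a one hour expiration (h:m:s here) and resend it with every response
tmsh modify ltm persistence cookie Custom_Cookie_Rewrite expiration 1:0:0 always-send enabled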



Cookie Passive Method The Cookie Passive Method is exactly what the name implies. This method will not insert, search or modify any Set-Cookie headers. Instead, all cookies will be allowed to pass through the BIG-IP system unaltered. All of the responsibility is transferred to the end-server to provide a cookie containing the pool name and the corresponding server address and port. The BIG-IP system then forwards this cookie to the client unaltered. When the cookie is sent in client requests, its value is used to direct traffic to the correct pool member. F5 recommends using Rewrite over this method as it inserts the relevant information automatically. The Pool Member configuration is independent of the BIG-IP.



Cookie Hash With this method, the Pool Member provides/inserts an HTTP Set-Cookie header, which the BIG-IP parses, applying the configured Hash Offset and Hash Length calculation in order to create a hash value that will be stored in the persistence table. This is the only Cookie persistence method that results in persistence records being created and stored on the device. When the virtual server is configured to use the Cookie Hash method, the following events occur:

1. The BIG-IP system accepts the client request and parses the HTTP request for a cookie header that matches the Cookie Name setting of the Cookie Hash Persistence Profile. Therefore, it is required to configure a Cookie Name when using the Cookie Hash Method.

2. If the HTTP request does not contain the specific cookie, the client will be load balanced to a pool member based on the load balancing method configured for the pool.

3. When the pool member sends its response, the BIG-IP system parses the HTTP response for the HTTP Set-Cookie header containing the Cookie Name and creates a hash value based on that cookie’s value. The Hash Offset is the number of characters in the header contents the BIG-IP should skip before calculating the hash value. The Hash Length is the number of characters to include when calculating the hash value. The default Hash Offset is 0 (zero), meaning that it will calculate the hash based on the very beginning of the cookie’s content; the default Hash Length is also 0 (zero), meaning that it will use all the cookie’s contents.

4. Once the hash value has been calculated, the BIG-IP system will store this as a value in its persistence table. The cookie sent by the pool member will be forwarded to the client.

5. When the client sends an additional request containing the cookie that was previously returned, the BIG-IP will apply the same Hash Offset and Hash Length in order to calculate the hash value of the cookie. This hash value is then checked against the persistence table. If the value matches an existing persistence record, the BIG-IP system persists the request to the specified pool member.
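A hedged tmsh sketch of a Cookie Hash profile follows. The cookie name APPSESSION is an example only, and the offset, length and timeout values shown simply repeat the defaults described above (whole cookie value hashed, 180 second timeout); exact attribute names may vary by TMOS version:

# Cookie Hash profile: a Cookie Name is mandatory for this method
tmsh create ltm persistence cookie Custom_Cookie_Hash defaults-from cookie method hash cookie-name APPSESSION hash-offset 0 hash-length 0 timeout 180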

Cookie Insert Information Leakage The information inside the cookies is encoded, yet some organisations still feel that it is unsafe to send these cookies to the clients, and I agree with them. Any penetration testing software would flag the information in the cookie as a potential security exposure because you can simply reverse engineer the encoding and extract information from the cookie, and that could potentially damage your organisation. For example, the encoded value 1677787402.36895 shown earlier decodes to the pool member 10.1.1.100:8080; the cookie simply stores the IP address and port as byte-swapped integers. The good thing, though, is that there are security measures that can be taken in order to prevent the information leakage. You can create a custom cookie persistence profile and perform the following steps:

▪ Change the name of the cookie – If the Cookie Name is left blank it will automatically default to the BIGipServer[pool_name] name. However, when you specify a name for your cookie, make sure that it does not conflict with other cookies that the application is already using.

▪ Enable cookie encryption – Change the Cookie Encryption Use Policy to either Preferred or Required and then specify an Encryption Passphrase. This option will encrypt the cookies using a 192-bit AES cipher and then encode them using the Base64 encoding scheme.
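In tmsh, the same hardening might look like the following sketch. The profile name and passphrase are placeholders, and you should verify the exact attribute names on your TMOS version:

# Require encryption of the inserted persistence cookie using the supplied passphrase
tmsh modify ltm persistence cookie Custom_Cookie_Insert cookie-encryption required cookie-encryption-passphrase MySecretPassphrase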

The BIG-IP system can encrypt the persistence cookie and any other cookie that the server sends back to the client that might contain sensitive information. Cookie Expiration With the Insert and Rewrite methods, you can specify a suitable expiration time and date or a Session cookie. A session cookie is somewhat more secure as it is held in memory and not stored on disk by most browsers. The cookie is deleted when the session is closed or ends for any reason. To ensure expirations are honoured, the device’s time and time zone should be accurate.

A cookie could easily be modified by a client to bypass the expiration timeout.

With the Cookie Hash method, a non-zero timeout (default 180s) is required, as with most other Persistence types where records are created and stored on the device.

The Default Cookie Validity Time The validity (or lifetime) for cookie persistence depends on the method used.

Cookie Hash – Configured with an idle timeout of 180 seconds by default.

Cookie Insert – Configured with a Session Cookie by default. The cookie remains valid for the lifetime of the current session only, which means the cookie is normally held only in the client’s memory and not stored on disk. Alternatively, you can configure a specific number of days, hours, minutes and seconds for which the cookie is valid.

Cookie Passive – The presence and contents (and thus the timeout) of the cookie are configured on the end-server and are not controlled by the BIG-IP.

Cookie Rewrite – Configured with a Session Cookie by default. The cookie remains valid for the lifetime of the current session only, which means the cookie is normally held only in the client’s memory and not stored on disk. Alternatively, you can configure a specific number of days, hours, minutes and seconds for which the cookie is valid.



The Insert, Rewrite and Passive methods cannot have records mirrored to another BIG-IP device (configured as an HA-pair) as no records are created and stored on the device. This also means that no statistics are available.

Destination Address Persistence This persistence method operates based on the destination IP address of a connection; where all requests from any client to a specific destination address are directed to the same pool member. Destination address affinity is most useful with wildcard Virtual servers and caches, where different users that request the same content are directed to the same cache. For HTTP traffic, OneConnect is recommended to ensure that each request is load balanced uniquely, rather than each connection. This should help increase the cache hit rate. This method continues to work even when the client source IP address changes.
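A minimal tmsh sketch for this method, assuming a wildcard virtual server named vs_cache_wildcard already exists and that the default oneconnect profile is acceptable for your HTTP traffic (names and timeout are illustrative):

# Destination address affinity profile with a 5 minute timeout
tmsh create ltm persistence dest-addr Custom_Dst_Persist defaults-from dest_addr timeout 300

# Attach the persistence profile and a OneConnect profile to the wildcard virtual server
tmsh modify ltm virtual vs_cache_wildcard persist replace-all-with { Custom_Dst_Persist { default yes } }
tmsh modify ltm virtual vs_cache_wildcard profiles add { oneconnect }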

Hash Persistence Hash Persistence uses an iRule or in-profile hash parameters to find one or more values in the request/response header or its payload, and generates a hash based on this. The values could be Source IP, Destination IP, Destination Port, to name a few. The BIG-IP uses this hash value to persist the traffic. The Persistence method still needs to be specified in the iRule even though a Hash persistence profile is used and assigned. This method works well with caches when the persistence is based on the requested content. For HTTP traffic, OneConnect is recommended to ensure each request is load balanced uniquely, not each connection. If the client session state needs to be maintained, then this persistence method is not recommended unless the iRule or profile locates session-unique related data. This method continues to work even when the client source IP address changes. Here is a simple example of an iRule using a basic hash; there are also iRules available for Typical Hash, Election Hash:

when HTTP_REQUEST { persist hash [HTTP::uri] } Note that Hash persistence does not store any persistence records.



Universal Persistence The Universal Persistence Method is very similar to Hash Persistence except that instead of creating a hash value, it persists based upon an actual value in the request/response or payload. It uses an iRule to locate this unique (and repeating) data. This data could be a HTTP Cookie value, the HTTP X-Forwarded-For header value, a string within a URI or something else entirely unrelated to HTTP. The data itself is stored in the persistence record, so its size should be kept to a minimum. This method works well with caches when the persistence is based on the requested content. For HTTP traffic OneConnect is recommended to ensure each request is load balanced uniquely, not for each connection. If the client session state needs to be maintained, this would not be a suitable persistence method unless the iRule locates session-unique related data. This method continues to work even when the client source IP address changes.
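Assuming an iRule that calls persist uie against some session-unique value already exists (the iRule name jsession_persist below is purely illustrative), a Universal Persistence profile referencing it can be sketched in tmsh as:

# Universal persistence profile that delegates value extraction to an existing iRule
tmsh create ltm persistence universal Custom_Universal_Persist defaults-from universal rule jsession_persist timeout 600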

Other Persistence Profiles

RDP – Persistence is based on the unique Remote Desktop session identifier within a packet. This method continues to work even if the client source IP address changes.

SIP – Based on the Session Initiation Protocol (SIP) session ID used between the client and the Pool Member. This method continues to work even if the client source IP address changes.

SSL – Persists based on the SSL/TLS session ID used between the client and the BIG-IP or between the BIG-IP and the Pool Member. This method continues to work even if the client source IP address changes.

SSL Renegotiation may break SSL persistence.

Single Node Persistence There are some scenarios where you might want to use only a single pool member in a pool (that contains multiple members) at a time and this is called Single Node Persistence. The requirement is that the BIG-IP system should initially only direct traffic to pool member A. If pool member A experiences a failure, then traffic should go to pool member B. When pool member A recovers, traffic is still only sent to pool member B. It is not until pool member B experiences a failure that traffic will be directed back to pool member A again. This requirement is configured as follows: First create the following iRule:

rule PriorityFailover { when CLIENT_ACCEPTED { persist uie 1 } }



Secondly, create a new Universal Persistence Profile and select the iRule that you just created. Remember to set a high timeout value so that the persistence record will not timeout under typical traffic conditions.

Now go ahead and create your pool and assign your pool members to it. After that, create the virtual server and assign the pool you created and under Default Persistence Profile choose the profile: PriorityFailover. What will happen is that the first connection (after a load balancing decision is made) will create a single universal persistence record with a key of 1. Every subsequent connection will cause a look up in the persistence table using 1 as the key and thus match the original connection’s record and be directed to the same pool member. You do not necessarily need to specify 1 as the key, any constant number or string will do. When one pool member fails, it will be marked down. Traffic will then be directed to the other pool member and a new persistence record will be created which causes all of the subsequent requests to end up at the new pool member. Even when the previously failed pool member becomes available again, the persistence record will still ‘point’ to the other server, causing all traffic to continue to go to the other member. The whole process is described in the following diagrams:



The persistence record of 1 has been created which means that all requests will be directed to pool member: 172.16.0.1:80.



Then pool member 172.16.0.1:80 fails. This means that a new connection is established towards a new member, in our case 172.16.0.3:80. The existing persistence record will then be modified to use the new address instead meaning that all subsequent requests are instead sent to that server.



Now pool member 172.16.0.1:80 has been marked as available again but since the persistence record is still active, all traffic will still be directed to the pool member 172.16.0.3:80. Using this method, you introduce the concept of “primary” and “secondary” that direct all traffic to one available member and keep the traffic on that member as long as it is available. The same concept can, to some extent, be implemented using Priority Group Activation.



The only difference is that as soon as the offline pool member is marked as online again, new connections will be sent to that pool member and only active connections will remain on the previous pool member until they time out. This creates more of a failback scenario which some administrators do not want.
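For reference, here is a rough tmsh version of the single node persistence setup described above. The pool member addresses match the diagrams, while the virtual server address and object names are illustrative, and the iRule PriorityFailover is assumed to already exist:

# Universal persistence profile using the PriorityFailover iRule; use a high timeout
tmsh create ltm persistence universal PriorityFailover_persist defaults-from universal rule PriorityFailover timeout 86400

# Pool containing both members, and a virtual server that uses the persistence profile
tmsh create ltm pool single_node_pool members add { 172.16.0.1:80 { } 172.16.0.3:80 { } }
tmsh create ltm virtual vs_single_node destination 10.10.1.110:80 ip-protocol tcp pool single_node_pool persist replace-all-with { PriorityFailover_persist { default yes } }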

Configuration Verification Persistence is applied at the Virtual server level, and both primary and fallback methods can be specified (see the next section). This can be confirmed in the WebGUI by navigating to Local Traffic > Virtual Servers > vs_name > Resources.

If you would prefer to use the command line and tmsh, use the following command:

tmsh list ltm virtual virtual_name

Primary & Fallback Methods In case the configured persistence method fails for some reason, it is possible to assign a fallback persistence method. The available fallback methods are limited to:

▪ Source Address
▪ Destination Address

Examples of failure of the primary Persistence method include a missing Persistence cookie or missing data (such as a JSESSIONID) required by the Universal method.
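In tmsh, the fallback method is set directly on the virtual server; a brief sketch (the virtual server name is illustrative):

# Use Source Address persistence as the fallback if the primary (e.g. Cookie) method fails
tmsh modify ltm virtual vs_example fallback-persistence source_addr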



The fallback method creates persistence records continuously (as if it were the primary method) to ensure that it can ‘take over’ immediately. The presence of these records does not indicate that the primary method has failed. These records are also mirrored to other devices in a HA setup if it is configured in the persistence profile.

Match Across When configuring your persistence profile, you have the ability to make the persistence match across multiple objects such as Services, Virtual Servers and Pools. This feature is available for the following persistence methods:

▪ Cookie Hash (the other Cookie methods don’t generate persistence records on the device)
▪ Destination Address
▪ Hash
▪ Microsoft Remote Desktop
▪ SIP
▪ Source Address Affinity (aka Simple)
▪ SSL
▪ Universal

We’ll discuss all three concepts in the following sections.

Match Across Services The Match Across Services option enables you to send traffic to the same node for all virtual servers that share the same IP address. In order to enable this feature, you should create a new persistence profile and select the Match Across Services option.
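A minimal tmsh sketch of such a profile (the profile name is illustrative):

# Source address profile that shares its persistence records across services on the same IP
tmsh create ltm persistence source-addr Custom_Src_Match defaults-from source_addr match-across-services enabled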



Then assign this persistence profile to all virtual servers that you wish to persist to the same node. Note that the virtual servers need to share the same IP address and share the same nodes (in their assigned pool). To illustrate an example, we have configured two virtual servers with the following configuration:

▪ Virtual Server – 10.10.1.150:80
a. Pool member: 172.16.1.20:80
b. Pool member: 172.16.1.21:80

▪ Virtual Server – 10.10.1.150:443
a. Pool member: 172.16.1.20:443
b. Pool member: 172.16.1.21:443

1. The client initiates a connection to the virtual server vs_http (10.10.1.150:80). Since there are no active persistence records, the BIG-IP system load balances the client to the server 172.16.1.21:80. When it does this the BIG-IP also creates a persistence record in the persistence table.

2. When the client initiates a new request but to the virtual server vs_https (10.10.1.150:443), since Match Across Services is enabled, the BIG-IP system will use the previously created persistence record and establish a connection to the pool member 172.16.1.21:443.



This feature is commonly used with web shops. Some developers choose to run the web shop as a regular HTTP based website while the user is browsing for products. When the user has added all of the items to the shopping cart and is ready to proceed to the checkout, as soon as the user presses “Proceed to Checkout” the website switches over to HTTPS using a redirect. This is to protect the user’s personal information such as the credit card and shipping information. If these websites are accessed through different virtual servers, when the user is redirected to the HTTPS version of the site they would most likely end up on another node and lose all of the items in the shopping cart. If Match Across Services is enabled, then this will not be a problem.

Match Across Virtual Servers The Match Across Virtual Servers feature is very similar to Match Across Services but is not limited to virtual servers sharing the same IP address. Instead, it enables you to direct clients to the same node regardless of what virtual server they connect to. The virtual servers just need to share the same nodes (in their assigned pool). In our example, we have two virtual servers with the following configuration:

1. Virtual Server – 10.10.10.10:80
a. Pool member: 172.16.1.10:80
b. Pool member: 172.16.1.20:80
c. Pool member: 172.16.1.30:80

2. Virtual Server – 30.30.30.30:80
a. Pool member: 172.16.1.10:8081
b. Pool member: 172.16.1.20:8081
c. Pool member: 172.16.1.30:8081

The client sends its initial request to the virtual server 10.10.10.10:80. Since there are no active persistence records, the BIG-IP system load balances the request, which, in this case, ends up on pool member 172.16.1.30:80. The next client request is sent to the virtual server 30.30.30.30:80 and since the virtual servers share the same node, it will use the same persistence record that was created in the previous request. Therefore, the request will be sent to the pool member 172.16.1.30:8081.

Match Across Pools This one is a truly advanced option; unlike the other Match Across choices, this will result in an existing persistence record for the client, associated with any pool being used. The service, virtual server, pool assignment(s) and any iRules assigned to the virtual server will not be considered. The client will be persisted to the Pool Member detailed in the matched persistence record even if it is not in a pool assigned to the virtual server. As you can imagine, there are many risks involved with using this option, from iRule logic being ignored to the service used to connect not matching what the pool member offers. The requirement(s) for using this feature can only ever be an ‘edge’ case and rarely encountered. F5 Documentation is light and unclear and caution and heavy testing are advised.



Persistence Mirroring We’ll discuss High Availability later in this book, but it is worth mentioning something about persistence mirroring. Persistence Mirroring allows multiple BIG-IP systems to share their persistence table with each other when configured for high availability. This increases reliability when the primary member is experiencing issues and fails over to the standby member, as the persistence records are also present on the standby member. Persistence Mirroring is only available when the BIG-IP systems are configured in a Sync-Failover device group.
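Mirroring is switched on per persistence profile. A hedged tmsh sketch, using the Custom_Src_Persist profile that is created in the lab exercises that follow:

# Mirror persistence records to the peer device(s) in the Sync-Failover device group
tmsh modify ltm persistence source-addr Custom_Src_Persist mirror enabled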

Lab Exercises: Persistence Exercise 6.1 – Source Address Affinity Persistence Exercise Summary In this exercise, we’ll create and assign a source address affinity persistence profile to a virtual server and observe its behaviour. The source address persistence profile should ensure that additional requests are directed to the same pool member as the initial request. In this lab, we’ll perform the following:

▪ Create a Source Address Affinity Persistence Profile.
▪ Assign the Source Address Affinity Persistence Profile to a virtual server.
▪ Observe the behaviour.

Exercise Prerequisites Before you start this lab exercise, make sure you have the following:

▪ Network access to the BIG-IP system’s management port.
▪ Two or more servers configured on the internal network that can be load balanced to. This should already have been configured during the Building a Test Lab chapter.
▪ Created the pool http_pool with at least two members.
▪ Created the virtual server vs_http with the http_pool configured as the default pool.

Enabling Persistence Records Statistics in WebGUI By default, you are not able to view Persistence Records from the WebGUI. However, this can be enabled using tmsh. This needs to be enabled before you start the exercise. Therefore, please use the following instructions:

1. Launch a terminal client such as PuTTY and SSH to 192.168.1.245 on port 22.
2. Log on using the account root and the password f5training.
3. Enable the viewing of Persistence Records in the WebGUI by entering the following command:

[root@bigip1:Active:Standalone] config # tmsh modify /sys db ui.statistics.modulestatistics.localtraffic.persistencerecords value true

4. Save the configuration by entering the following command:

[root@bigip1:Active:Standalone] config # tmsh save /sys config

Verifying Current Behaviour (Without Persistence)

1. Open up a browser session to https://192.168.1.245/ and login using the admin credentials.
2. Navigate to Local Traffic > Pools > http_pool and click on the Members tab. Verify that the Load Balance Method is set to Round Robin.
3. Navigate to Statistics > Module Statistics > Local Traffic and select the Statistics Type: Pools and reset the statistics for http_pool.
4. Open up a browser session to http://10.10.1.100/ and refresh the page 5-10 times using Ctrl+F5.
5. Go back to the statistics page on the BIG-IP system and click Refresh. What are the results?

You should see an even amount of connections being distributed between all pool members. You probably noticed this during the browser session as well. In other words, you should not be persisting to the same pool member.

Configuring Source Address Affinity Persistence

1. Navigate to Local Traffic > Profiles > Persistence and in the upper right corner click on Create.
2. On the Local Traffic > Profiles > Persistence > New Persistence Profile… page, add the following configuration:

General Properties
Name: Custom_Src_Persist
Persistence Type: Source Address Affinity
Parent Profile: source_addr
Configuration
Timeout: Check the Custom box and specify a timeout of 30 seconds.
When done, click Finished.

3. Navigate to Local Traffic > Virtual Servers > vs_http > Resources tab and select Custom_Src_Persist under Default Persistence Profile. When done click Update.
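If you would rather build the same configuration from the command line, the following tmsh sketch is roughly equivalent to the WebGUI steps above (run from the bash prompt; verify option names on your TMOS version):

tmsh create ltm persistence source-addr Custom_Src_Persist defaults-from source_addr timeout 30
tmsh modify ltm virtual vs_http persist replace-all-with { Custom_Src_Persist { default yes } }
tmsh save sys config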

Verifying the Configuration 1. 2. 3. 4.

Head back to the statistics page and reset the statistics for http_pool. Again, open up a browser session to http://10.10.1.100/ and refresh the page 5-10 times using Ctrl+F5. What pool member are you connected to? Are you persisting your connection? Go back to the statistics page again and click Refresh. What are the results? Each time a persistence record is created it is stored in the Persistence Table which you can actually view on the system. Navigate to Statistics > Module Statistics > Local Traffic and select Statistics Type: Persistence Records. Can you find your entry in the list? If not, then the timeout has been reached causing the record to be deleted. Refresh the browser session towards http://10.10.1.100/ and check the persistence table once again. You should now find an entry in the persistence table.



Expected Results When you enable Source Address Affinity Persistence, after the BIG-IP system has selected a pool member, it will create a persistence record pointing towards that pool member. Each time the client sends a new request (for instance, when the user refreshes the page), the client will be directed to the same pool member and the timeout value of the persistence record will be reset to 0 and begin to count up again. In our lab, we specified a timeout of 30 seconds which means that your persistence record could have timed out before you were able to verify that it was actually created. Just perform another refresh and view the persistence table again.

Clean-Up 1.

Navigate to Local Traffic > Virtual Servers > vs_http > Resources tab and make sure the Default Persistence Profile is set to None.

Exercise 6.2 – Cookie Persistence Exercise Summary In this exercise, we’ll create and assign a cookie persistence profile to a virtual server and observe its behaviour. The cookie persistence profile should ensure that additional requests are directed to the same pool member as the initial request with the help of a browser cookie. In this lab, we’ll perform the following:

▪ Create a Cookie Persistence Profile.
▪ Assign the Cookie Persistence Profile to a virtual server.
▪ Observe the behaviour.

Exercise Prerequisites Before you start this lab exercise, make sure you have the following:

▪ Network access to the BIG-IP system’s management port.
▪ Two or more servers configured on the internal network that can be load balanced to. This should already have been configured during the Building a Test Lab chapter.
▪ Created the pool http_pool with at least two members.
▪ Created the virtual server vs_http with http_pool configured as the default pool.
▪ Ensure that the system time of the client PC and the BIG-IP system are synchronised.

Viewing Browser Cookies For the following lab, we’ll need to view the cookies in our browser. The following instructions explain how to do so in the most popular browsers; Mozilla® Firefox™ 1. 2.

Open Mozilla Firefox. Click the Menu button in the upper right corner, choose Preferences > Privacy and then Remove Individual Cookies.

Google Chrome™ 1. 2.

Open Google Chrome In the top right, click the Menu button.



3. 4. 5.

Click Settings and then Show advanced settings. In the Privacy section, click Content settings. Under Cookies, click All cookies and site data. Here you can view and delete individual cookies.

Internet Explorer 11™ 1. 2. 3. 4. 5.

Open Internet Explorer. In Internet Explorer, select the Tools button and then Internet options. On the General tab, under Browsing history click on Settings. In the Website Data Settings dialog under the Temporary Internet Files tab click on View Files. This will open up a folder containing all of the temporary files that Internet Explorer has stored including cookies.

Microsoft Edge™ 1. 2. 3. 4. 5.

Open Microsoft Edge. In the upper right corner, click on the More button Click on F12 Developer Tools. This will launch the developer tab. Click on the Debugger tab. In the list, click on Cookies. Here you can view all cookies that the browser has received.

Clearing Browser History Mozilla Firefox 1. 2. 3. 4. 5. 6.

Open Mozilla Firefox. Click the Menu button in the upper right corner, choose History and then Clear Recent History…. Set Time range to clear to Everything. Click on the arrow next to Details to expand the list of history items. Select Cookies and make sure that other items you want to keep are not selected. Click Clear Now to clear the cookies and close the Clear Recent History window.

Google Chrome 1. 2. 3. 4. 5. 6. 7.

Open Google Chrome. In the top right, click the Menu button. Click Settings and then Show advanced settings. In the Privacy section, click Content settings. Under Cookies, click All cookies and site data. To delete all cookies, click Remove all. To delete a specific cookie, hover over a site, then click the X that appears to the right.

Internet Explorer 11 1. 2. 3.

Open Internet Explorer. In Internet Explorer, click on the Tools button and go to Safety > Delete browsing history. Select the Cookies and website data check box, and then select Delete.



Microsoft Edge 1. 2. 3. 4.

Open Microsoft Edge. To view your browsing history, select Hub > History. Select Clear all history. Choose Cookies and saved website data, then press Clear.

Ensuring System Time is Synchronised Between Client PC and BIG-IP System 1. 2.

Open up a browser session to https://192.168.1.245/ and login using the admin credentials. In the top banner which states the hostname and IP address, you’ll also find the date and time shown. Is the time and date the same as your PC’s? If not, then proceed with the following instructions. We configured the time zone when we performed the Initial Setup, but we did not configure NTP or specify the date and time.

Option 1: Setting the Time and Date on the BIG-IP system 1. 2. 3. 4.

Launch a terminal client such as PuTTY and SSH to 192.168.1.245 on port 22. Log on using the account root and the password f5training. When logged in you should be in the bash shell specified by the prompt: config #. Change the operating system time using the following syntax:

[root@bigip1:Active:Standalone] config # date [month][day][hour][minute][year].[second] For example, to set the time to 2:00 PM (14:00) January 1, 2016 you would type the following: [root@bigip1:Active:Standalone] config #

date 010114002016.00

The time needs to be set in 24-hour method (military time).

5.

Verify that the time has changed by looking at the top banner in the WebGUI. Has it changed to the correct time? Is the time now synchronised with the Client PC?

Option 2: Setting the Time and Date on the Client PC 1. 2. 3. 4.

Verify the system date and time on the BIG-IP system by logging on to the WebGUI and checking the top banner. Change the system date/time on your Client PC by going to the Start Menu > System Tools > Time and Date. Click on Unlock and enter the password f5training. Adjust the time to match the BIG-IP system. When done, click Close.



Verifying Current Behaviour (Without Persistence) 1. 2. 3.

Navigate to Statistics > Module Statistics > Local Traffic and select the Statistics Type: Pools and reset the statistics for http_pool. Open up a browser session to http://10.10.1.100/ and refresh the page 5-10 times using Ctrl+F5. Go back to the statistics page on the BIG-IP system and click Refresh. What are the results? You should see an even amount of connections being distributed between all pool members. You probably noticed this during the browser session as well. In other words, you should not be persisting to the same pool member.

Configuring Cookie Persistence

1. Navigate to Local Traffic > Profiles > Persistence and in the upper right corner press Create.
2. On the Local Traffic > Profiles > Persistence > New Persistence Profile… page, add the following configuration:

General Properties
Name: Custom_Cookie_Persist
Persistence Type: Cookie
Parent Profile: cookie
HTTPOnly Attribute: Disabled
When done, click Finished.

3. Navigate to Local Traffic > Virtual Servers > vs_http > Resources tab and select the Custom_Cookie_Persist profile as the Default Persistence Profile. When done, click Update.
4. If all of the preceding exercises have been done correctly, you should now see an error stating the following:

01070309:3: Cookie persistence requires an HTTP or FastHTTP profile to be associated with the virtual server.

5. As we covered in the Profiles chapter, certain profiles are dependent on other profiles. The Cookie Persistence Profile is dependent on the HTTP profile and we need to have this assigned to the virtual server. Go back to the Properties tab and assign an HTTP Profile to vs_http. When done, click Update.
6. Go back to the Resources tab and assign the Custom_Cookie_Persist profile as the Default Persistence Profile again. When done, click Update.
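The same end result can be sketched in tmsh as follows. The http profile added here is the default one (adjust if your lab uses a custom HTTP profile), and attribute names such as httponly may differ slightly between TMOS versions:

tmsh create ltm persistence cookie Custom_Cookie_Persist defaults-from cookie httponly disabled
tmsh modify ltm virtual vs_http profiles add { http }
tmsh modify ltm virtual vs_http persist replace-all-with { Custom_Cookie_Persist { default yes } }
tmsh save sys config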

Verifying the Configuration 1. 2. 3. 4.

Head back to the statistics page and reset the statistics for http_pool. Again, open up a browser session to http://10.10.1.100/ and refresh the page 5-10 times using Ctrl+F5. What pool member are you connected to? Are you persisting to the same pool member? Go back to the statistics page again and click Refresh. What are the results? Are you persisting to the same pool member? On the statistics page, select Statistics Type: Persistence Records. Can you find your entry in the list? Why not?



5.

When accessing http://10.10.1.100/ you should also have a Display Cookie link. Click on it, what is the name and the value of the cookie? View the cookie in your browser. What is the expiration time? The cookie is called BIGipServerhttp_pool and has an encoded value which holds the pool member IP address and corresponding port. You can view this cookie in your browser and how you do that is dependent on the browser you are currently using.

Exercise 6.3 – Match Across Services Exercise Summary In this exercise, we’ll experiment with the Match Across Services feature and utilise the same persistence record for two different services, in our case HTTP and HTTPS. In this lab, we’ll perform the following:

▪ Modify the Custom_Src_Persist persistence profile.
▪ Assign the Custom_Src_Persist Profile to both vs_http and vs_https.
▪ Observe the behaviour.

Exercise Prerequisites Before you start this lab exercise, make sure you have the following:

▪ Network access to the BIG-IP system’s management port.
▪ Two or more servers configured on the internal network that can be load balanced to. These should already have been configured during the Building a Test Lab chapter.
▪ Created pools http_pool and https_pool each containing at least two members.
▪ Created the virtual server vs_http with http_pool configured as the default pool.
▪ Created the virtual server vs_https with https_pool configured as the default pool.
▪ Created the Persistence Profile Custom_Src_Persist from the earlier exercises.

Verifying Current Behaviour

1. Make sure that the load balancing method for both http_pool and https_pool is set to Round Robin and that Priority Group Activation is Disabled.
2. Configure both vs_http and vs_https to use Custom_Src_Persist as their Default Persistence Profile.
3. Navigate to Local Traffic > Profiles > Persistence > Custom_Src_Persist and change the Timeout value to 60.
4. Click Update to save the configuration.
5. Open a browser session towards both http://10.10.1.100/ and https://10.10.1.100/ and refresh the pages a couple of times in order to make sure you are persisting both connections.
6. Launch a terminal client such as PuTTY and SSH to 192.168.1.245 on port 22.
7. Log on using the account root and the password f5training.
8. Verify that you indeed have two persistence records by typing the following command:

[root@bigip1:Active:Standalone] config # tmsh show /ltm persistence persist-records all-properties



Enabling the Match Across Services Feature

1. Navigate to Local Traffic > Profiles > Persistence > Custom_Src_Persist and add the following configuration:

General Properties
Match Across Services: Check the Custom box and check the Match Across Services checkbox.
When done, click Update.

2. Go back to the browser sessions towards http://10.10.1.100/ and https://10.10.1.100/ and press Ctrl+F5 to hard refresh the pages. Do this 5-10 times to make sure you are persisting the connection to the same pool member.
3. Notice now that you are using the same pool member on both the http://10.10.1.100/ and https://10.10.1.100/ virtual servers. Verify this by again checking the persistence records in the terminal session by entering the following command:

[root@bigip1:Active:Standalone] config # tmsh show /ltm persistence persist-records all-properties

If you still see two persistence records, this could mean that the persistence records did not time out before you refreshed the page. The timeout should be 60 seconds. After you have refreshed the page, wait at least 60 seconds before going through steps 2-3.

Expected Results During the first attempt, when you access http://10.10.1.100/ and https://10.10.1.100/, you should connect to two different pool members. We verified this in a terminal session and have seen that two persistence records are created. There is a slight chance that you might connect to the same pool member but that is caused by the load balancing method and not persistence - it won’t happen consistently. This is because at this point, the persistence profile does not utilise the Match Across Services feature, meaning it will create a unique persistence record for each service (HTTP and HTTPS). In our second attempt, we turn on the Match Across Services feature which means that both the HTTP and HTTPS virtual servers use the same persistence record. This will cause the BIG-IP system to create a single persistence record that both services can use. We verified this in our terminal session that now shows that there is only one persistence record.

Clean-Up 1.

Set the Default Persistence Profile on both vs_http and vs_https to None.



Chapter Summary ▪

An Application Session is the communication channel between two hosts, used to exchange information and complete transactions of some kind. It is comprised of one or more underlying TCP connections between the client and server (virtual or otherwise). A session is typically stateful, with various parameters and variables (including unique IDs and authentication information) assigned and valid only for the session in question.

Persistence, also known as stickiness, affinity or session persistence, is used to direct additional connections from a client to a virtual server to the same real server as the initial (first) connection. This ensures that any state information relevant to that client and held on only that server will be available.

Persistence only applies after the first load balancing decision is made.

The cookie persistence method is used for the HTTP protocol and uses a cookie that is stored on a client’s computer and sent with every request to identify to which pool member it has previously connected.

With all Cookie Persistence methods except Cookie Hash, the IP address and Service Port of the Pool Member are encoded and stored in the Cookie value.

Hash Persistence uses an iRule or in-profile hash parameters to find session-unique (and repeating) data at a specific location in a request/response header or its payload and persists based on that value.

In case the configured persistence method fails for some reason, it is possible to assign a fallback persistence method. The available fallback methods are limited to Source Address and Destination Address.

When configuring your persistence profile, you have the ability to make the persistence match across multiple entities such as Services, Virtual Servers and Pools.

Chapter Review 1. Which of the following persistence methods uses the client’s source address in order to persist connections? a. b. c. d.

Cookie Persistence. SSL Persistence. RDP Persistence. Source Address Persistence.



2. Which of the following persistence methods requires an HTTP profile? a. b. c. d.

Cookie Persistence. SSL Persistence. RDP Persistence. Source Address Persistence.

3. You are the BIG-IP administrator and you notice that there is an uneven load on your pool members where pool member 172.16.0.1 is receiving a lot more connections than pool member 172.16.0.2. You have configured the virtual server to use Source Address Persistence. When reviewing the persistence table, you notice that the amount of persistence records for both pool members are about the same. Based on this scenario, what is the most probable cause of the uneven load? a. b. c. d.

The virtual server is configured as a Performance Layer 4 type. The clients currently being persisted to pool member 172.16.0.1 are being routed through a proxy or NAT device. The pool member 172.16.0.2 has a shorter Persistence Record Timeout. The pool member 172.16.0.1 has a shorter Persistence Record Timeout.

4. What statement is true regarding Source Address Persistence? a. b. c. d.

It is application independent. It works very efficiently with clients coming from proxy or NAT devices. It operates on layer 7. It works very efficiently with users that regularly change their IP address.

5. You are the BIG-IP administrator and you have created a Custom Cookie Persistence Profile with an expiration time configured. Persistence is working as it should but after a while, clients are being load balanced to a different pool member. What setting should you configure in order to solve the problem? a. b. c. d.

Disable the HTTPOnly Attribute. Modify the Cookie Method. Modify the Mask value. Enable Always Send Cookie.

6. Which Cookie Method is used when the end-server creates and inserts the cookie into the HTTP response? a. b. c. d.

Cookie Passive Cookie Insert Cookie Hash Cookie Rewrite



7. Which Cookie Method uses the persistence table to persist connections? a. b. c. d.

Cookie Passive Cookie Insert Cookie Hash Cookie Rewrite



Chapter Review: Answers 1. Which of the following persistence methods uses the client’s source address in order to persist connections? a. b. c. d.

Cookie Persistence SSL Persistence RDP Persistence Source Address Persistence

The correct answer is: d 2. Which of the following persistence methods requires an HTTP profile? a. b. c. d.

Cookie Persistence SSL Persistence RDP Persistence Source Address Persistence

The correct answer is: a 3. You are the BIG-IP administrator and you notice that there is an uneven load on your pool members where pool member 172.16.0.1 is receving a lot more connections than pool member 172.16.0.2. You have configured the virtual server to use Source Address Persistence. When reviewing the persistence table you notice that the amount of persistence records for both pool members are about the same. Based on this scenario, what is the most probable cause of the uneven load? a. b. c. d.

The virtual server is configured as a Performance Layer 4 type. The clients currently being persisted to pool member 172.16.0.1 are being routed through a proxy or NAT device. The pool member 172.16.0.2 has a shorter Persistence Record Timeout. The pool member 172.16.0.1 has a shorter Persistence Record Timeout.

The correct answer is: b If clients connect to the virtual server through a proxy or device that NATs traffic, then all of the requests will appear to come from the same source IP address. This creates a single persistence record for several clients resulting in uneven load across a pool of real servers.



4. What statement is true regarding Source Address Persistence? a. b. c. d.

It is application independent. It works very efficiently with clients coming from proxy or NAT devices. It operates on layer 7. It works very efficiently with users that regularly change their IP address.

The correct answer is: a Source Address Persistence works at layer 3, which means it is not dependent on an application level protocol or data in order to create persistence records, as is the case for cookie persistence, for example. 5. You are the BIG-IP administrator and you have created a Custom Cookie Persistence Profile with an expiration time configured. Persistence is working as it should but after a while, clients are being load balanced to a different pool member. What setting should you configure in order to solve the problem? a. b. c. d.

Disable the HTTPOnly Attribute. Modify the Cookie Method. Modify the Mask value. Enable the Always Send Cookie.

The correct answer is: d If an Expiration is configured for the cookie and you want the expiration time to be updated with each client request, then the Always Send Cookie setting should be enabled. This means that for every new request the client makes, a new cookie is sent with an updated expiration value. If the Always Send Cookie value were disabled, then the client would have a limited time window in which persistence (to that pool member) is available. Once the cookie expires, the client would most likely end up on a new pool member and receive a new cookie with a new expiration time. 6. Which Cookie Method is used when the end-server creates and inserts the cookie into the HTTP response? a. b. c. d.

Cookie Passive Cookie Insert Cookie Hash Cookie Rewrite

The correct answer is: a The Cookie Passive Method is exactly what the name implies. This method will not insert, search or modify the SetCookie headers. Instead any cookie will be allowed to pass through the BIG-IP system unaltered. All of the responsibility is transferred to the end-server to provide preformatted cookies with the pool and the corresponding server address and port. The BIG-IP system then forwards this cookie to the client unaltered.



7. Which Cookie Method uses the persistence table to persist connections? a. b. c. d.

Cookie Passive Cookie Insert Cookie Hash Cookie Rewrite

The correct answer is: c



10. SSL Traffic SSL was first developed by Netscape Communications Corporation back in 1994 in order to secure transactions over the World Wide Web (WWW), what most of us now consider the Internet. Version 1 of SSL was never publicly released as it had serious security flaws. Version 2.0, on the other hand was released in February 1995, but it was also discovered to have a number of security flaws which led to the development of version 3.0 which was released in 1996. After SSL versions 1-3, the Internet Engineering Task Force (IETF) started work on creating an open standard that provided the same functionality as SSL and used SSL version 3.0 as the basis for Transport Layer Security (TLS). TLS 1.2 is currently the most up-to-date version but as of this book’s writing, version 1.3 is in the works. It will include increased security and new features, and we can expect further ongoing updates. TLS and SSL are widely recognised as the protocols which secure HTTP for performing secure transactions (transforming it into HTTPS). However, TLS and SSL can also be used for securing other application level protocols such as File Transport Protocol (FTP), Lightweight Directory Access Protocol (LDAP) and Simple Mail Transfer Protocol (SMTP) to mention a few. As discussed, SSL has been superseded by TLS and should no longer be used because it contains many security flaws. Many companies and services have already transitioned to TLS. Even though TLS is the most popular protocol used for HTTPS and secure transactions, most still refer to the encryption protocol as SSL out of habit or because they are unaware of recent developments. In the following sections, we also use the same terminology and refer all secure transaction technologies as SSL. Web site security and privacy are growing concerns which are rapidly increasing in importance. More and more intra and Internet sites are switching from unencrypted HTTP to TLS encrypted HTTPS to address this, but as they do so other concerns arise. First off, since it is encrypted, we can no longer identify if traffic being sent through to the server is malicious. A malicious user can cloak their attack in an SSL connection and it will pass through firewalls, IPSs and other security devices. Therefore, we are forced to decrypt the SSL traffic in order to review the packet’s content and determine if it is harmful to our systems. If the business has a high security standard, most companies then re-encrypt the traffic before sending it off to the server. Another concern is the performance hit suffered by servers when they are required to use SSL. It is highly probable that when you enable SSL encryption and decryption on a server, it will suffer a significant performance decrease. In order to counter this, server administrators can install SSL accelerator cards on their servers so that encryption and decryption are handled in a hardware module instead of in software. However, these cards can be really expensive and if you are using multiple servers to manage one site, you will have to buy a lot of SSL accelerators. Intel Core CPUs have, since 2010, included the Intel® Advanced Encryption Standard (Intel AES) Instructions Set which mitigates some of the performance overheads of SSL/TLS use on the server. Each BIG-IP appliance (i.e. not Virtual Editions) comes with an SSL accelerator card that handles SSL encryption and decryption. 
This allows the BIG-IP system to perform both the SSL key exchange and the bulk crypto work in hardware rather than in software, which is far faster.



Every device also comes with a license that states how many SSL transactions per second (TPS) it will handle. The limit can be increased (as long as the hardware is powerful enough) through an additional licence. F5 recommends that you monitor SSL traffic volumes in order to allow the licence to be proactively managed. You can configure the BIG-IP system to send an email or an SNMP trap whenever a message is logged that states that the SSL TPS limit has been reached.

Terminology of SSL In order to understand some of the following sections it will be useful to discuss some of the terminology we’ll be using.

Certificate Authority (CA) Certificate Authorities (CA) are very important and have several functions including: ▪

Verifying the identity of the requester: Before the CA will issue a certificate, it should ensure the identity and authority (if acting on behalf of a company) of the requester.

Verifying ownership of the resource: The CA will also verify the ownership of the resource for which the certificate will be issued.

Issue certificates to the requester: When the CA administrator has validated the identity of the requester and perhaps their ownership of the resource to be secured, the next step is to issue the certificate. This could be for a user, a computer, network device or service certificate. It is important to choose the right purpose for the certificate’s use when you request it, as depending on what you specify, the certificate will have a different set of options. A domain name based HTTPS certificate is very different to a IPsec certificate.

Manage certificate revocation: The CA will also keep track of certificates that have been revoked. They can be revoked for several reasons which we’ll cover later. To allow others to identify revoked certificates the CA publicly publishes a frequently updated certificate revocation list (CRL).

Certificate Signing Request (CSR) A Certificate Signing Request (CSR) is a block of encoded text that is provided to a Certificate Authority (CA) when applying for an SSL Certificate. The CSR is usually generated on the device on which the certificate will be installed (but doesn’t have to be). SSL uses both a certificate and key, with the CSR being based on the key. When you generate a CSR on an F5 device, the key is silently generated first. The information included in the CSR will be used by the Certificate Authority to create the matching certificate which you can import to the device once it is issued. The CSR contains information which will appear in the final certificate, such as the organisation name, common name (domain name), locality, country and email address. The benefit of using a CSR is that the private key never leaves the device where it was created. If the private key were to end up in the hands of a malicious actor, you face the prospect of someone being able to decrypt your traffic. The complete process of generating a certificate and key pair using a CSR is as follows: 1.

Generating a CSR on the device - The CSR will contain all of the information necessary in order for a CA to issue the certificate. This will also create the matching key in the pair.

2.

Upload the CSR to a Certificate Authority - This can either be an Internal CA or a public one like VeriSign©. The CA will use the information in the CSR to create a matching certificate that you can download.



3.

Import the Certificate to the device - In order to have a certificate/key pair you will need to import the certificate generated by the CA. Since the key and certificate are mathematically connected, they will be matched together once it has been imported.
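The same CSR workflow can also be performed from tmsh. A rough sketch follows; the object names and the example domain are placeholders, and the available csr options vary between TMOS versions, so check the tmsh help on your system:

# 1. Generate the private key on the BIG-IP system
tmsh create sys crypto key www.example.com.key key-size 2048

# 2. Generate a CSR based on that key, then submit the CSR text to your CA
tmsh create sys crypto csr www.example.com.csr key www.example.com.key common-name www.example.com

# 3. Once the CA has issued the certificate, import it (see the import procedures later in this chapter)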

Personal Information Exchange Syntax #12 (PKCS#12) A PKCS #12 file is an archive file format used for storing one or more cryptographic objects as a single file. It is usually protected by a password and used for bundling together an SSL certificate and key pair. It can also be used to bundle multiple certificates used in the certificate chain. The PKCS #12 file uses the filename extensions .p12 or .pfx which is the reason some people may refer to one as a “PFX File”.

Managing SSL Certificates for the BIG-IP System Using the WebGUI Before we get into the fundamentals of SSL traffic configuration on a BIG-IP device, we must first consider the management of SSL certificates. The BIG-IP system lets you control the SSL traffic that flows in both directions (client-side and server-side) using what are called SSL profiles. Before we can configure our own SSL profiles, we must first import/create keys and certificates on the BIG-IP system. The SSL certificates can either be self-signed or created by a trusted Certificate Authority (CA). A self-signed certificate is created on the BIG-IP system and signed using its own private key. Self-signed certificates are not trusted by web browsers because they are signed by the BIG-IP device itself and not an authorised CA which would perform some level of identity validation and other checks before signing. To put it simply, a self-signed certificate is like creating your own driver’s licence at home and expecting it to be valid. A certificate generated and signed by a trusted CA will, however, be trusted by web browsers because it has been validated and signed by a trusted CA’s private key. These certificates are imported into the BIG-IP system and can then be applied where necessary using SSL profiles. F5 recommends that when you are renewing an SSL certificate from a trusted CA, the most secure and effective way of renewing it is to generate a new certificate signing request (CSR). All of the certificates and keys stored on the BIG-IP device can be found under System > File Management > SSL Certificate List. Even though the items present in the list are named SSL Certificates, these objects can contain different types of content such as Certificate Bundle, RSA Certificate, RSA Key or Certificate Signing Request. All of which can be used for the same SSL Profile.



Procedures Creating a Self-Signed SSL Certificate As mentioned previously, a self-signed SSL certificate is signed by the BIG-IP device’s own private key. These certificates can be used for both client and server-side SSL processing. However, the recommendation for client-side is to use an externally trusted CA. To generate a self-signed certificate please refer to the following instructions: 1. 2. 3. 4. 5. 6. 7. 8. 9.

Log into the BIG-IP system using the WebGUI. Go to System > File Management > SSL Certificate List. Click Create. Type a name for the certificate. From the Issuer list, select Self. Configure the Common Name setting. This can be any name you’d like. Optionally configure any of the other settings as necessary. Under Key Properties, select the Key Type and the Size that you would like. Click Finished.
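A self-signed certificate and key pair can also be generated in a single tmsh command; this is a sketch only, assuming the common name test.example.com is used purely for lab purposes:

# Generate a 2048-bit key plus a matching self-signed certificate valid for 365 days
tmsh create sys crypto key test.example.com key-size 2048 gen-certificate common-name test.example.com lifetime 365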

Creating a Certificate using a CSR The procedure below explains how you generate a new certificate signing request (CSR) which is then passed to a trusted CA: 1. 2. 3. 4. 5. 6.

Log into the BIG-IP system using the WebGUI. Go to System > File Management > SSL Certificate List. Click Create. Type a name for the certificate object. From the Issuer list, select Certificate Authority. Configure the Common Name setting. This is typically the fully qualified domain name (FQDN) for example www.domain.com which is embedded into the certificate. It is used for name-based authentication purposes. 7. Optionally configure any of the other settings as necessary. 8. Under Key Properties, select the Key Type and the Size that you would like. 9. Click Finished. 10. Now you can either download the CSR file by clicking Download [certificate name] or by copy/pasting the text from the small window into a new file.



11. If the certificate authority you are using is displayed in the Certificate Authorities list, click on it to go to their website. If it is not, you will have to go to their website manually and provide your CSR.

12. Once you are at the certificate authority site, follow their instructions. When you have generated the certificate you will need to import it to the BIG-IP device. 13. When done, click Finished. This will create the certificate object and its content will be the key and the CSR. It will look like the following:

The Name of the certificate is just used to name the certificate as an object on the BIG-IP device. It is, however, necessary that these names are unique. Whenever you are generating a new certificate or renewing an old one, it is crucial to use a unique naming standard. A standard I tend to use is [certificate name]_[date when generated]; that way it will most likely be unique. Some administrators use the naming standard [certificate name]_[date when expired].

14. After you have generated the key and created the CSR, you should still be on the SSL Certificate List page. On this page, click Import.
15. In the Certificate Source section, choose either Upload File and browse to the certificate or choose Paste Text and copy/paste the certificate into the Certificate Source window. Remember to include the entire certificate, including the -----BEGIN CERTIFICATE----- and -----END CERTIFICATE----- lines.



The private key looks very similar to the certificate except it starts its content with -----BEGIN PRIVATE KEY----- and ends with -----END PRIVATE KEY-----.

16. Click Import.
17. After this, the contents of the previously created certificate object should change from RSA Key & Certificate Signing Request to RSA Certificate, Key & Certificate Signing Request.
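Before handing the CSR to the CA (step 11 above), it can be worth sanity-checking its contents from the command line. This is an optional, minimal OpenSSL example with a hypothetical file name:

# openssl req -in www_domain_com.csr -noout -text -verify

The -verify option confirms the CSR's signature and -text prints the subject and key size, so you can confirm the Common Name is correct before submitting it to the CA.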

Importing an SSL Certificate

There are times where you need to import just a certificate, for example when you only need to import a certificate as a chain certificate (which is covered later in this chapter), or when you receive the certificate and key as separate files (not as a PKCS#12 file). In those cases, you will have to import the certificate and key separately. However, since they are mathematically related, they will be matched up once both the key and the certificate have been imported. In order to import a certificate, please use the following instructions:

1. Log into the BIG-IP system using the WebGUI.
2. Go to System > File Management > SSL Certificate List.
3. Click Import.
4. From the Import Type list, select Certificate.
5. In the Certificate Name section, select Create New.
6. In the Certificate Name field, enter a name for the certificate. Remember that this has to be unique on the BIG-IP system.
7. In the Certificate Source section, choose either Upload File and browse to the certificate or choose Paste Text and copy/paste the certificate into the Certificate Source window. Remember to include the entire certificate, including the -----BEGIN CERTIFICATE----- and -----END CERTIFICATE----- lines.
8. Click Import.

Importing an SSL Private Key

If you are creating a new certificate and key pair and you followed the previous instructions, you should now have an object in the SSL certificate list that only has a certificate and no key. The next step is to import the private key. To do so, please follow these instructions:

1. Log into the BIG-IP system using the WebGUI.
2. Go to System > File Management > SSL Certificate List.
3. Click Import.
4. From the Import Type list, select Key.
5. In the Key Name section, click Create New.



6. In the Key Name field, type a unique name for the key.
7. In the Key Source section, choose either Upload File and browse to the key file or choose Paste Text and copy/paste the key into the Key Source window. Remember to include the entire key, including the -----BEGIN PRIVATE KEY----- and -----END PRIVATE KEY----- lines.
8. Click Import.
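Because the certificate and key are imported separately, a quick way to confirm that a given certificate and key file actually belong together is to compare their public key modulus hashes before importing them. This is a minimal sketch using OpenSSL with hypothetical file names; the two commands should print identical hashes for a matching RSA pair:

# openssl x509 -noout -modulus -in www_domain_com.crt | openssl md5
# openssl rsa -noout -modulus -in www_domain_com.key | openssl md5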

Importing a PKCS#12 File

Sometimes, you may be provided with a PKCS#12 file. To import a PKCS#12 file into the BIG-IP system, please use the following instructions:

1. Log into the BIG-IP system using the WebGUI.
2. Go to System > File Management > SSL Certificate List.
3. Click Import.
4. From the Import Type list, select PKCS 12 (IIS).
5. In the Certificate Name field, type a name for the certificate.
6. In the Certificate Source field, click Choose File and browse to the PKCS#12 file.
7. Enter the password of the PKCS#12 file if one is set.
8. Click Import.
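If you ever need the opposite, for instance when a CA only provides a PKCS#12 archive but you want to import the certificate and key as separate files, the archive can be unpacked with OpenSSL first. A minimal, hypothetical example (you will be prompted for the archive password):

# openssl pkcs12 -in bundle.p12 -clcerts -nokeys -out bundle_cert.pem
# openssl pkcs12 -in bundle.p12 -nocerts -nodes -out bundle_key.pem

The first command extracts only the client certificate, the second extracts the private key (-nodes leaves it unencrypted), and the resulting PEM files can then be imported using the certificate and key procedures described above.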

Renewing an SSL Certificate Using a CSR

Whenever you need to renew a certificate from a CA that is about to expire, it is strongly recommended that you generate a completely new CSR and a new private key, thus creating a new certificate object. This is described in the Creating a Certificate using a CSR section. Some CAs allow you to renew a certificate using the existing CSR file, but this is considered less secure as it reuses the old private key. In order to renew a CA-generated certificate (reusing the CSR), please use the following instructions:

1. Log into the BIG-IP system using the WebGUI.
2. Go to System > File Management > SSL Certificate List.
3. Navigate through the list, find the desired certificate and click on it.
4. Click Renew.
5. In the Issuer field, choose Certificate Authority.
6. Fill out the remaining fields if required.
7. Click Finished.
8. On the Certificate Signing Request page, you are now able to download the CSR file by clicking Download [certificate name] or copy/paste the text from the small window.
9. If the certificate authority you are using is displayed in the Certificate Authorities list, click on it to go to their website. If it is not, you will have to go to their website manually and provide your CSR.
10. Once you are at the certificate authority site, follow their instructions. When you have generated the certificate, you will need to import it to the BIG-IP device.
11. When done, click Finished. This will redirect you back to the certificate page.

When you have downloaded the new certificate, you will have to import it to the BIG-IP system. Keep in mind that whenever you import a renewed SSL certificate, it will overwrite the existing certificate on the BIG-IP system. The SSL profile will automatically start using the new certificate for new SSL sessions.



Old connections will continue to use the old certificate until all active sessions are complete or renegotiated, or until TMM is restarted. In order to import the renewed certificate, please refer to the following instructions:

1. Log into the BIG-IP system using the WebGUI.
2. Go to System > File Management > SSL Certificate List.
3. Click Import.
4. From the Import Type list, select Certificate.
5. In the Certificate Name section, click Overwrite Existing.
6. In the Certificate Name field, enter the name of the certificate.
7. In the Certificate Source section, click on either Upload File or Paste Text.
8. Click Import.

When a certificate is about to expire, I usually create a completely new certificate and key pair for safety's sake. This is because you generally get an expiration warning at least a month in advance, so you have time to create a new certificate and key pair and assign them to the relevant SSL Profile. If there is something wrong with the new certificate or key, you can simply re-assign the old pair and you are up and running again, as long as the old certificate has not yet expired.
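To see exactly when a certificate expires (and therefore how much time you have to test a new pair), the validity period can be read from the certificate's properties page in the WebGUI or from the command line. A minimal OpenSSL example with a hypothetical file name:

# openssl x509 -in www_domain_com.crt -noout -subject -dates

This prints the certificate's subject together with its notBefore and notAfter dates.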

SSL/TLS Offloading

One of the great benefits of the BIG-IP system is its ability to perform SSL Offloading. Since physical BIG-IP systems have an SSL accelerator card, they are very fast at encrypting and decrypting SSL traffic, and even in the Virtual Edition the highly efficient and optimised native SSL software 'stack' provides significant benefit. Before offloading was introduced, HTTPS solutions involved terminating the SSL on the web server itself, and some still do. SSL Offloading moves the SSL processing from the end-server out to the BIG-IP system instead. This improves the performance of the end-server(s) and it also helps with certificate management, as you have only one place to keep track of all your certificates.

SSL Offloading can also be referred to as Client-side SSL Termination, as the SSL session is terminated on the BIG-IP system. This is a great benefit because if the SSL traffic were simply passed through to the end-server, we would not be able to view or manipulate any of its content because it would be encrypted. When the SSL session is terminated on the BIG-IP device, we can still have an encrypted session between the client and the BIG-IP device but also read and interpret the layer 7 traffic, such as HTTP GET requests or HTTP cookies. You also have the ability to run the traffic through ASM security policies or APM access policies.

In summary, SSL Offloading has the following benefits:

▪ Avoids end-server SSL traffic processing overhead; this is instead moved to the BIG-IP system. If the servers do have an SSL accelerator card, the processing overhead is not relevant; however, their cost is still a factor since SSL accelerator cards are expensive.

▪ Easier management of SSL certificates. Only a single certificate is needed for one pool of end-servers (no matter its size), which means you only have to install and manage certificates on the BIG-IP system.

▪ Being able to view and manipulate the traffic at the application protocol level.

The following diagram displays how traffic flows through the BIG-IP device when SSL Offloading is used:



The Client SSL Profile

In order to activate SSL Offloading, you must configure a Client SSL Profile. The Client SSL Profile is assigned both the certificate and key used to encrypt and decrypt traffic. The profile itself is then assigned to a virtual server. In order to assign a certificate and key to a Client SSL Profile, they must first be created and/or imported into the BIG-IP system using the methods described earlier. Once they are present, we need to create a Custom Client SSL Profile. A significant part of the configuration during this process is the Certificate Key Chain setting. Since the Custom Client SSL Profile will inherit its settings from the Default Client SSL Profile, we need to break the inheritance by enabling the Custom setting specified in the following picture:

Once you enable this setting in the custom profile, you will be able to specify the certificate and key that you would like to use. It is also possible to specify a chain certificate. When the Custom Client SSL Profile has been created, it can be applied to the virtual server and used to perform SSL Offloading, terminating the SSL session on the BIG-IP device.

Note that when you choose a certificate and key, it is important that the certificate matches the FQDN that is used to resolve the virtual server's IP address via DNS. Otherwise the client's web browser will produce an error stating that the domain name of the site and the certificate's common name (or one of its subject alternative names (SANs)) do not match and that the website is not secure.



Creating a Custom Client SSL Profile

In order to create a Custom Client SSL Profile:

1. Log into the BIG-IP system using the WebGUI.
2. Go to Local Traffic > Profiles > SSL > Client.
3. Click Create.
4. In the Name field, enter the name of the profile.
5. In the Parent Profile list, choose clientssl.
6. Break the inheritance from the Default Client SSL Profile by clicking the Custom box on the far right of the Certificate Key Chain section.
7. In the Certificate field, select the certificate you would like to use.
8. In the Key field, select the key you would like to use.
9. Once both Certificate and Key are selected, click Add.
10. Scroll down to the bottom of the page and click Finished.
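The same profile can also be created from the command line with tmsh. The sketch below uses the profile, certificate and key names from the lab exercises later in this chapter; note that the long-standing cert and key options are shown here, while newer TMOS versions expose the same settings through a cert-key-chain structure, so treat this as a starting point rather than a definitive reference:

# tmsh create ltm profile client-ssl Custom_Client_SSL defaults-from clientssl cert TestCertificate.crt key TestCertificate.key
# tmsh save sys config

The defaults-from option performs the same role as the Parent Profile setting in the WebGUI, and saving the configuration makes the change persistent.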

SSL Bridging

Some businesses and organisations have high security requirements and therefore cannot send any unencrypted traffic to their real servers; it must be encrypted end to end. Using only a Client SSL Profile, the traffic that is sent to the real servers is not encrypted, which is unacceptable in such environments. In order to solve this problem, the BIG-IP system can re-encrypt the decrypted inbound client-side traffic before sending it to the end-servers, thus acting as an SSL client. F5 has previously called this technology SSL Termination. In order to activate SSL Bridging, a Server SSL Profile has to be created and assigned to the virtual server together with the Client SSL Profile.

This causes the BIG-IP system to establish another SSL session with the end-server before sending the sensitive data. Note that in order for the servers to accept the SSL session, they will need to have the correct service configured in their corresponding pool. For instance, if you are load balancing HTTPS traffic you cannot have a pool of HTTP servers because they will not understand the SSL protocol. SSL Bridging enables the BIG-IP system to view and interpret the data that the client is sending in order to add security and authentication, and to manipulate the traffic if necessary, while maintaining high security. Since the BIG-IP device uses a full proxy architecture, it is also possible to use a certificate and key with a higher key length on the Client SSL profile and a certificate and key with a lower key length on the Server SSL profile.



The following diagram displays how traffic flows through the BIG-IP device when SSL Bridging is used:

Creating a Custom Server SSL Profile

In order to create a Custom Server SSL Profile, please follow the instructions below:

1. Log into the BIG-IP system using the WebGUI.
2. Go to Local Traffic > Profiles > SSL > Server.
3. Click Create.
4. In the Name field, enter the name of the profile.
5. In the Parent Profile list, choose serverssl.
6. Activate the Custom box on the far right of the Certificate Key Chain section.
7. (Optional) In the Certificate field, select the certificate you would like to use.
8. (Optional) In the Key field, select the key you would like to use.
9. Once both Certificate and Key are selected, click Add.
10. Scroll down to the bottom of the page and click Finished.

When you configure your virtual server with a Server SSL Profile, the BIG-IP will act as an SSL client towards the back-end servers. The Certificate and Key settings are therefore optional on the Server SSL Profile and are set to None by default. However, if the back-end servers require a certain SSL client certificate to be presented on behalf of the client, import the certificate and key to the BIG-IP and select them in the Certificate and Key fields.
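For reference, a minimal tmsh sketch of the same configuration is shown below; the profile and virtual server names are the ones used in the lab exercises later in this chapter, and the certificate options are omitted because, as noted above, they default to None:

# tmsh create ltm profile server-ssl Custom_Server_SSL defaults-from serverssl
# tmsh modify ltm virtual vs_ssl profiles add { Custom_Server_SSL }

Adding the Server SSL profile to a virtual server that already carries a Client SSL profile is what turns plain SSL Offloading into SSL Bridging.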

SSL Passthrough

Another scenario that you will most likely run into is the need for SSL Passthrough. As the name implies, this approach simply lets the SSL session pass through the BIG-IP device as is. In order to configure this, you create an HTTPS virtual server but do not add a Client or Server SSL Profile. The SSL session is entirely managed by the end-server. As with SSL Bridging, in order for the servers to respond to the SSL session they will need to have the correct service configured in their corresponding pools. Assigning an HTTP pool to an SSL Passthrough virtual server will never work since the pool members will not understand the SSL protocol. This scenario is not that common, as you normally want the BIG-IP system to offload the performance hit that the servers experience when handling SSL. However, some applications require that the SSL session is terminated directly on the server itself. The downside of using this method is that you do not get any visibility into the application traffic, meaning the BIG-IP system is unable to interpret and manipulate it. For instance, you cannot use Cookie Persistence for this type of virtual server since you cannot read the cookie within the HTTP request as it is encrypted. The following diagram displays how traffic flows through the BIG-IP device when SSL Passthrough is used:



Certificate Authorities A Certificate Authority (CA) serves as a trusted 3rd party used by both servers and clients. As we mentioned earlier, a CA issues certificates but also revokes expired and invalid ones. The servers have the certificates they provide (to clients) verified and signed by the CA, and clients verify the validity of these certificates using the CA’s root and intermediate certificates. These are typically present in operating systems and/or installed in the client browsers although the server will often provide them too. For internal environments, it is possible to create and use your own CA, but this can result in a considerable management burden because the root and intermediate certificates have to be distributed to all of the clients in order for them to be trusted by the browser(s) in use. Regardless, this is normally the preferred option as it is cheaper than using a commercial CA and considered by some to be more secure. Despite the cost, commercial CA certificates must usually be used for public, Internet based clients because it is much harder (if not impossible) to distribute the intermediate and root certificates to these clients.



Intermediate CAs and the Certificate Chain

Most internal or external CA Public Key Infrastructures (PKIs) have a single root CA with an associated certificate, and one or more intermediate CAs (also known as subordinate or issuing CAs) with their associated certificate(s). The root CA is normally used only to create intermediate CAs and its private key is the most highly secured and valuable of any. In turn, the intermediate CA(s) exist to create certificates for systems, servers and users. Every certificate, whether root, intermediate or any other, contains an Issuer field: the name of the CA that signed the certificate (along with an associated Authority Key Identifier), which is used to identify the signing CA. Here's an example:

These details allow a client, such as a browser, to create a so-called hierarchy (or chain) of trust for any certificate it is required to verify. If a server or web site’s certificate isn’t directly trusted (which will typically be the case), the issuer field will be used to determine if the certificate of the issuer (and signer) of the presented certificate is trusted. If not, the issuer of that certificate is checked and so on. You’ll note that root certificates are self-signed. This is illustrated in the diagram below:

For example, when a client opens a HTTPS connection to www.domain.com and receives its certificate, it checks its trusted certificate store and does not find a match. The details within the certificate include the issuer, which in this case is: Secure IT Authority CA.



The client does not trust this CA certificate either, and must therefore confirm that it trusts its signer (issuer): Secure IT Root CA. The client’s web browser does find this certificate in its trusted certificate store and will thus trust any certificates signed by it and its intermediates. As soon as the client encounters a certificate or CA certificate that is trusted, the validation process stops. So, if we were to import the Secure IT Authority CA certificate into the client’s trusted store the validation process would not have to traverse the chain to the Secure IT Root CA. A client host will get most of its root and intermediate certificates when its operating system is installed, and new ones will be pushed out (and old or invalid/insecure ones removed) via operating system updates. Some browsers and other applications will also install their own certificates, or use their own certificate trust stores.
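This hierarchy of trust can be reproduced on the command line, which is a handy way to test a certificate chain before deploying it. A minimal OpenSSL sketch with hypothetical file names that mirror the example above:

# openssl verify -CAfile secure_it_root_ca.crt -untrusted secure_it_authority_ca.crt www_domain_com.crt

Here -CAfile points at the trusted root certificate, -untrusted supplies the intermediate certificate(s) used to build the chain, and the command prints OK if the server certificate chains up to the trusted root.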

Importing Certificates & Constructing the Certificate Chain in the BIG-IP System

When you receive a certificate signed by a commercial CA, you will most likely receive both the signing intermediate certificate(s) and the signing root certificate. You can then provide these to the client to help it build the hierarchy of trust. This is useful because a client may trust the root CA but not the issuing intermediate CA (or CAs if multiple are used), because it or they are not installed in the client's trusted certificate store. There is actually no point in providing the root certificate, because if it is not present in the client's trusted certificate store the hierarchy of trust cannot be completed and the server or web site's certificate will be considered untrusted and thus rejected. When configuring SSL profiles, you can choose to add a certificate Chain, which provides the intermediate CA certificate(s). This means that when the BIG-IP system sends the certificate to the client, it will also send the chain certificates to help complete the hierarchy of trust, if required.

The client does not have to use any intermediate certificates you provide to it to verify the server or web site’s certificate - you are simply providing an indication of how it might do so.

Importing the CA Certificates There are multiple ways of importing the intermediate and root certificates into the BIG-IP system.



You are not required to add the root CA certificate to the certificate bundle because if the client does not trust the root CA, the validation will fail anyway; adding the root CA certificate to the bundle is merely done as a courtesy to the client. You would usually combine all of the required chain certificates into a so-called bundle (a single file) when importing them into the BIG-IP system. This process is described next:

1. Log into the BIG-IP system using the WebGUI.
2. Go to System > File Management > SSL Certificate List.
3. Click Import.
4. In the Import Type field, choose Certificate.
5. Make sure Create New is selected.
6. Enter a name for the certificate or bundle.
7. There are now two options you can choose:
   a. Open up each .CER certificate file and copy its contents into a single file, save it and use the Upload File option. Then upload the file that contains all of the certificates in the chain.
   b. Choose Paste Text and then copy/paste all of the certificates into the small window.
8. When you are done, click Import.
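Option 7a above can also be done from the command line; concatenating the intermediate certificates into one file produces exactly the kind of bundle the WebGUI expects, and the result can even be installed directly with tmsh. A minimal sketch with hypothetical file and object names:

# cat secure_it_authority_ca.crt secure_it_issuing_ca.crt > /var/tmp/chain_bundle.crt
# tmsh install sys crypto cert chain_bundle from-local-file /var/tmp/chain_bundle.crt

The order of the files in the cat command should follow the chain, starting with the intermediate that signed the server certificate.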



When you are done, the result should look like this:

Creating the Client SSL Profile With a Certificate Chain

We previously described how to create a Client SSL Profile, but here we'll also assign the previously imported certificate chain bundle. In order to create a Custom Client SSL Profile with a Certificate Chain:

1. Log into the BIG-IP system using the WebGUI.
2. Go to Local Traffic > Profiles > SSL > Client.
3. Click Create.
4. In the Name field, enter the name of the profile.
5. In the Parent Profile list, choose clientssl.
6. Activate the Custom box on the far right of the Certificate Key Chain section.
7. In the Certificate field, select the certificate you would like to use.
8. In the Key field, select the key you would like to use.
9. In the Chain field, choose the certificate bundle you have created.
10. Once the Certificate, Key and Chain are all selected, click Add.

Lab Exercises: SSL Traffic

Exercise 7.1 – Configuring SSL Offload

Exercise Summary

In this exercise, we'll experiment with SSL Offloading. We'll create a Client SSL Profile, assign it to a virtual server and observe its behaviour. This will cause the BIG-IP system to establish encrypted SSL communication between itself and the client while the traffic between the BIG-IP system and the pool member remains clear-text. In this lab, we'll perform the following:

▪ Create/Generate a Self-Signed Certificate.
▪ Create a custom Client SSL Profile.
▪ Create a new HTTPS virtual server and assign the Client SSL Profile.
▪ Observe the behaviour.

Exercise Prerequisites

Before you start this lab exercise, make sure you have the following:

▪ Network access to the BIG-IP system's management port.
▪ Two or more servers configured on the internal network that can be load balanced to. These should already have been configured during the Building a Test Lab chapter.
▪ The pools http_pool and https_pool created, each containing at least two members.



Generating a Self-Signed Certificate

In a real-life environment, you would ideally never use a self-signed certificate in a Client SSL profile. Typically, you would import a certificate signed by a public certificate authority or an internal one which is trusted by your clients. However, for this lab we'll use a self-signed certificate for simplicity's sake.

1. Open up a browser session to https://192.168.1.245 and log in using the admin credentials.
2. Navigate to System > File Management > SSL Certificate List and in the upper right corner press Create.
3. On the System > File Management > SSL Certificate List > New SSL Certificate page, enter the following configuration:

System > File Management : SSL Certificate List > New SSL Certificate…
General Properties
  Name: TestCertificate
Certificate Properties
  Issuer: Self
  Common Name: www.testcertificate.com
  Division: Education
  Organization: Test Corp.
  Locality: London
  Country: United Kingdom
  Subject Alternative Name: DNS:www.testcertificate.com
Key Properties
  Size: 2048

When done, click Finished.

It is very important that you include DNS: in the Subject Alternative Name due to a bug in version 12.1.2 documented in the solution article K14753.

Create a Client SSL Profile

1. Navigate to Local Traffic > Profiles > SSL > Client and in the upper right corner click Create.
2. On the Local Traffic > Profiles > SSL > Client > New Client SSL Profile… page, add the following configuration:



Local Traffic > Profiles > SSL > Client > New Client SSL Profile…
General Properties
  Name: Custom_Client_SSL
  Parent Profile: clientssl
Configuration
  Certificate Key Chain:
    1. Check the custom box and click Add.
    2. In the Certificate list, select TestCertificate.
    3. In the Key list, select TestCertificate.
    4. When done, click Add.
  When done it should look like the following: /Common/TestCertificate.crt /Common/TestCertificate.key

When done, click Finished.

Create a New Virtual Server

1. Create a new virtual server containing the following configuration:

Local Traffic > Virtual Servers > New Virtual Server…
General Properties
  Name: vs_ssl
  Destination: 10.10.1.102
  Service Port: 443 or select HTTPS
Resources
  Default Pool: https_pool

When done, click Finished.

Verify Behaviour Before Applying the New Client SSL Profile

1. Open up a browser session to https://10.10.1.102 and accept the SSL Certificate. The page should be loaded and, depending on your browser, you will see that the connection is using SSL.
2. Open up the certificate. How to do this is dependent on your browser (but it's usually via the F12 key).
3. Check the details of the certificate. Notice the common name, organisation and the validity period of the certificate.



Expected Behaviour

You should be prompted with a security error when accessing https://10.10.1.102. This is because the certificate cannot be validated, since it is generated by the Apache server itself (self-signed). Don't worry about the security errors; this is not what we are testing during this lab. When you have accepted the certificate, you should be presented with a web page. After reviewing the details of the certificate, you should notice that it does not contain the same information you entered in the TestCertificate which we created earlier in the lab. Again, this is because the certificate is generated by the server itself.

Assigning the Client SSL Profile to the Virtual Server

1. Navigate to Local Traffic > Virtual Servers > vs_ssl > Resources, change the Default Pool from https_pool to http_pool and click Update.
2. Open up a new browser session to https://10.10.1.102. This connection attempt should fail, resulting in the error message "Secure Connection Failed". This is because the BIG-IP system automatically uses the port which the pool members are configured for. This means that the client and the BIG-IP are successfully establishing an SSL session, but when the BIG-IP tries to establish a connection towards the pool member, it does this using port 80, which is not the correct port for HTTPS. This will cause the pool member to immediately terminate the connection because the BIG-IP is trying to establish a connection using a protocol (SSL) that the servers in this pool do not support.
3. Navigate to Local Traffic > Virtual Servers > vs_ssl and change the following configuration:

Local Traffic > Virtual Servers > vs_ssl
Configuration
  SSL Profile (Client): Custom_Client_SSL

When done, click Update.



4. Now perform a hard refresh (Ctrl+F5) and confirm that you can access the site. You should again be prompted with a certificate error, but add an exception for the certificate. This will load the webpage. Check the certificate once again. Has anything changed? Notice that instead of the regular red background you receive when accessing an HTTPS site, you receive a blue background. This is because the communication between the BIG-IP and the end-servers is unencrypted and being sent to the HTTP web servers.

Expected Results

At first, the site will not load because the BIG-IP device is trying to establish an SSL session towards the pool members on port 80, which they are not configured to expect (they expect plain HTTP). We then applied the Custom_Client_SSL profile to the virtual server, which allowed the connection to work and the site to load. The reason this works is that we instruct the BIG-IP system to perform SSL Offloading and terminate the SSL connection instead of sending it straight to the pool member in encrypted form. This causes the connection between the client and the BIG-IP to be encrypted while the connection between the BIG-IP and the pool member is unencrypted.
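If you want to double-check which certificate the virtual server is now presenting without relying on the browser, the handshake can be inspected from any machine with OpenSSL installed. A minimal sketch, assuming the lab addressing used above:

# openssl s_client -connect 10.10.1.102:443 -servername www.testcertificate.com </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer -dates

After the Custom_Client_SSL profile is applied, the subject and issuer should both show the TestCertificate details rather than the Apache server's own self-signed certificate.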



Exercise 7.2 – Configuring SSL Bridging

Exercise Summary

In this exercise, we'll experiment with SSL Bridging. This is a continuation of the previous lab exercise, in which we re-encrypt the communication between the BIG-IP system and the pool member. Using this method, we can have a completely secure connection between the client and the end-server whilst still allowing the BIG-IP system to gain access to the content being transmitted between the client and the server. In this lab, we'll perform the following:

▪ Create a custom Server SSL Profile.
▪ Assign the custom Server SSL Profile to an existing virtual server.
▪ Observe the behaviour.

Exercise Prerequisites

Before you start this lab exercise, make sure you have the following:

▪ Network access to the BIG-IP system's management port.
▪ Two or more servers configured on the internal network that can be load balanced to. This should already have been configured during the Building a Test Lab chapter.
▪ The pool https_pool created with at least two members.
▪ The virtual server vs_ssl created with the Client SSL Profile Custom_Client_SSL assigned.

Creating the Server SSL Profile

1. Open up a browser session to https://192.168.1.245/ and log in using the admin credentials.
2. Navigate to Local Traffic > Profiles > SSL > Server and in the upper right corner press Create.
3. On the Local Traffic > Profiles > SSL > Server > New Server SSL Profile page, enter the following configuration:

Local Traffic > Profiles > SSL > Server > New Server SSL Profile…
General Properties
  Name: Custom_Server_SSL
  Parent Profile: serverssl

When done, click Finished.

4. Navigate to Local Traffic > Virtual Servers > vs_ssl and change the following configuration:

Local Traffic > Virtual Servers > vs_ssl
Configuration
  SSL Profile (Server): Custom_Server_SSL

When done, click Update.

5. Navigate to the Resources page of vs_ssl and change the Default Pool from http_pool to https_pool.
6. Open up a new browser session to https://10.10.1.102/. What are the results?



Expected Results

A client accessing an SSL Bridging site will not notice any difference, but behind the scenes the client and the BIG-IP establish an SSL session. Once that is complete, the BIG-IP establishes a new SSL session towards the pool member. This allows the BIG-IP system to gain access to the content while keeping the communication between the client and the end-server secure. This is beneficial if you need high security while still being able to modify data in transit using iRules or profiles. Since we have different colours depending on the communication (HTTP = blue, HTTPS = red), in this case the change will be that the background changes from blue to red, indicating that the communication between the BIG-IP and the pool members is encrypted.

Clean-Up

Please perform the following before moving on to the next chapter:

▪ Remove the Server SSL Profile Custom_Server_SSL from the virtual server vs_ssl.
▪ Change the default pool of the virtual server vs_ssl to http_pool.

Chapter Summary

▪ Physical BIG-IP appliances (i.e. not Virtual Editions) come with an SSL accelerator card that handles SSL encryption/decryption. This allows the BIG-IP system to perform both SSL key exchange and bulk crypto work using its hardware components rather than software. Every appliance (virtual or physical) comes with a licence that limits how many SSL transactions per second (TPS) it will handle.

▪ The BIG-IP system lets you control the SSL traffic that flows in both directions (client-side and server-side) using what are called SSL profiles.

▪ When SSL Offloading is used, the SSL session is terminated on the BIG-IP device and the SSL processing is moved from the end-server out to the BIG-IP system. This increases the performance of the end-servers.

▪ When SSL Bridging is used, the BIG-IP system will re-encrypt traffic before sending it to the end-servers. This enables the BIG-IP system to view and interpret the data that the client is sending in order to implement security & authentication features and manipulate the traffic if necessary, while still maintaining a high security standard.

▪ When SSL Passthrough is used, the BIG-IP system will simply let the SSL session pass through to the end-server without decrypting traffic. In order to configure this, you create an HTTPS virtual server but do not add a Client or Server SSL Profile. The downside of using this method is that you do not get any visibility into the application traffic, thus making the BIG-IP system unable to interpret and manipulate it.



Chapter Review

1. True or False: A PKCS12 file is a formatted archive file that stores both the certificate and the key in a single file.
   a. True
   b. False

2. You are the BIG-IP administrator and generate a self-signed certificate which you assign to a Client-SSL profile which is configured on an HTTPS virtual server. When trying to access the site, you are prompted with an error stating that the website is not trusted. What is causing the problem?
   a. The virtual server is missing an HTTP profile.
   b. The certificate that is being used is self-signed.
   c. The virtual server is missing a Server-SSL profile.
   d. The virtual server is configured on the wrong service port.

3. What statements are true regarding SSL Offloading?
   a. It's only available for BIG-IP appliances.
   b. It creates a secure tunnel between the client and the end-server while still improving performance.
   c. It allows for easier management of certificates for administrators.
   d. It increases the performance of the end-servers.

4. What SSL profiles do you need to assign to the virtual server in order to use SSL Bridging?
   a. None. Using this method, the client will establish an SSL session directly with the pool member.
   b. Only the Client SSL Profile.
   c. Both the Client SSL Profile and Server SSL Profile.
   d. Only the Server SSL Profile.



Chapter Review: Answers

1. True or False: A PKCS12 file is a formatted archive file that stores both the certificate and the key in a single file.
   a. True
   b. False

The correct answer is: a

When generating a PKCS12 file, be sure to use a strong password. Since the PKCS12 file includes the private key, you may face a security breach if a malicious user obtains it.

2. You are the BIG-IP administrator and you generate a self-signed certificate which you assign to a Client-SSL profile which is configured on an HTTPS virtual server. When trying to access the site you are prompted with an error stating that the website is not trusted. What is causing the problem?
   a. The virtual server is missing an HTTP profile.
   b. The certificate that is being used is self-signed.
   c. The virtual server is missing a Server-SSL profile.
   d. The virtual server is configured on the wrong service port.

The correct answer is: b

The problem with self-signed certificates is that they are not trusted by web browsers. This is because they are generated by the BIG-IP device itself and not by an authorised CA.

3. What statements are true regarding SSL Offloading?
   a. It's only available for BIG-IP appliances.
   b. It creates a secure tunnel between the client and the end-server while still improving performance.
   c. It allows for easier management of certificates for administrators.
   d. It increases the performance of the end-servers.

The correct answers are: c and d

With SSL Offloading, the processing of SSL traffic is moved from the end-servers to the BIG-IP system. If the servers do have an SSL accelerator card, the processing overhead is not relevant; however, the price of such cards is high. SSL Offloading also provides easier management of SSL certificates. Only a single certificate is needed for one pool of end-servers, and that means you will only have to monitor the certificates on the BIG-IP system.



4. What SSL profiles do you need to assign to the virtual server in order to use the SSL Bridging method?
   a. None. Using this method, the client will establish an SSL session directly with the pool member.
   b. Only the Client SSL Profile.
   c. Both the Client SSL Profile and Server SSL Profile.
   d. Only the Server SSL Profile.

The correct answer is: c

In order to activate SSL Bridging, a Server SSL Profile has to be assigned to the virtual server together with a Client SSL Profile. This causes the BIG-IP system to establish another SSL session with the end-server before sending the sensitive data.



11. NAT and SNAT

As we have previously discussed in the book, a virtual server is an object that acts as a listener and processes traffic between clients and pool members. It listens for specific traffic and performs address translation amongst other functions. Virtual servers are not the only objects that can be created to listen for traffic; NATs and SNATs can also do so. Virtual servers, NATs and SNATs can all perform address translation as part of their traffic processing, but they do so somewhat differently. Each is used in specific scenarios and can help you solve various challenges that you may encounter in different environments.

Network Address Translation – NAT

There are some scenarios where you want client requests to bypass the normal load balancing selection and instead be sent directly to a specific internal node. The internal node is most likely configured with a private IP address that is not routable on the Internet. This means that when you are trying to establish a connection to the internal node from the Internet, address translation has to be used. To enable this, you will have to configure a NAT.

NAT provides a one-to-one mapping between two IP addresses, for instance between a private internal IP address and an external public IP address. This means that if an external client sends a request to the public IP address on which the NAT is listening, it will automatically be translated to the internal IP address that is defined in the NAT. Traffic is then forwarded (or routed if necessary) to the node with that internal address. The same concept applies when the internal node communicates with external nodes: the internal IP address is automatically translated to the external IP address, thus making the internal IP address "hidden". This is something that some refer to as "Hide-NAT". This means that a NAT is bi-directional, as shown in the following diagram:



When configuring a NAT, you will only have the option to map one IP address to another. If there is a need to create a many-to-one mapping you will need to create a SNAT instead. NATs do not support port translation; all ports are open, and translation isn’t required (and would probably cause unexpected issues). NAT is also not appropriate for protocols that have embedded IP addresses in their packets. FTP, NT Domain or CORBA IIOP are some examples that should not be used with NAT. NATs present a potential security risk as they only provide a one-to-one mapping and cannot be restricted to specific ports. This means that all listening ports on the internal node will be exposed through the NAT. This can be mitigated by using a SNAT instead.



Traffic Flow When Using a Virtual Server on Inbound Connections

When using a virtual server as a listener, traffic is handled in the following manner:

1. An external client sends its requests to a virtual server configured on the BIG-IP system. The virtual server is configured with an IP address and usually a specific port.

2. When the virtual server receives the traffic, it will make a load balancing decision based on the load balancing algorithm. A pool member will be selected, and the BIG-IP system will initiate a separate connection towards that pool member's destination IP address and destination port. In some scenarios, the destination port for the virtual server and the pool member might be different. The source (client's) IP address will remain the same.

3. The pool member will send its responses through the BIG-IP system, which will match them with an existing session. Since the BIG-IP is using a Full Proxy Architecture, it will use the already established external TCP connection to respond back to the client, meaning the source IP address and source port of the virtual server are used instead of the pool member's. It does this because the client expects return traffic from the virtual server's IP address and port, since this is 'where' the client sent its requests.



Traffic Flow When Using NAT on Inbound Connections

When an external client needs to be able to establish a connection to an internal server, you usually need to translate the destination address. As we mentioned earlier, this is because the internal server is most likely configured with a private IP address that is not routable on the Internet. The NAT acts as a listener and creates a one-to-one mapping between the external IP address and the internal IP address, as described next:

1. An external client sends a request to the NAT IP address that is configured on the BIG-IP system. The NAT acts as a listener and will match any traffic received on that specific IP address regardless of the port. This is unlike most virtual servers, where you usually configure a specific port, and is one of the reasons why a NAT listener is considered to have lower security.

2. When the BIG-IP receives the traffic, it matches it against the NAT object. It then simply translates the destination IP address (the NAT Address) to the private node IP address (the Origin Address). No load balancing, port translation or intelligence is applied to the traffic.

3. When the server responds back through the BIG-IP system, the reverse translation is performed. The private node IP address (Origin Address) is translated into the external IP address (NAT Address). The BIG-IP system will use the NAT Address as its source IP address when responding back to the client.



Traffic Flow When Using NAT on Outbound Connections

There are times when a node on the internal network needs to access a resource on the Internet. In this scenario, it is the internal node that initiates an outbound connection, and it occurs when the internal node uses the BIG-IP system as its default gateway. In order for the BIG-IP system to intercept and process this traffic, you need to configure a listener to pick it up. Remember that the BIG-IP system is a default deny device and will drop all traffic that does not match a listener. As in the previous scenario, the internal node will most likely be configured with a private IP address, and this must be translated before traffic can be sent out to the Internet. To enable this, we can configure a NAT listener (object). A common scenario where you might want to configure a NAT for outbound connections is to download updates on internal web servers that use the BIG-IP system as their default gateway. In our example, the internal node with the IP address 172.16.20.1 needs to download its latest operating system updates from an external Internet-based server. This scenario is described in the following diagram:

1. The internal node with IP address 172.16.20.1 sends a request to an external resource on the Internet. The BIG-IP system will receive this traffic because the internal node uses the BIG-IP as its default gateway.

2. When the BIG-IP system receives the traffic, it will match it against the Origin Address in the NAT. It will then translate the internal node's IP address to the configured NAT Address and forward the request to the external node. Note that the NAT Address is not the destination address; the BIG-IP system translates the source IP address, in this case from 172.16.20.1 to 1.1.1.200.

3. The external update server sends back a response to the NAT Address and the BIG-IP system receives this traffic. The BIG-IP system will then reverse translate the NAT Address to the Origin Address and forward the traffic to the internal node.



Just like with NAT for inbound connections, a NAT listens to all ports. This means that the BIG-IP system will intercept and process any traffic as long as it matches the Origin IP address.

Disadvantages of Using NAT

As we have mentioned a few times already, a NAT listens on any port, which means that it will intercept and process any and all traffic that matches the NAT listener (Origin Address). This is a major security concern, but it can be mitigated by using SNAT instead, something we'll cover later in this chapter. Another disadvantage arises if you have more than one internal node that needs a NAT (a public IP address). Since NAT only supports a one-to-one mapping, you will need to create one for every internal node and each NAT would need an external IP address. This creates a major administrative overhead and consumes many external IP addresses, which are in very short supply. NAT is also bi-directional, which poses a further security risk. Using the outbound connection example above where we create a NAT for an internal node to be able to download updates, an external client can just as easily connect to the internal node using any port and the NAT address. This is because a NAT listens for traffic on both sides of the BIG-IP system. Again, this can be mitigated with the use of SNAT instead.



However, there is a configuration option for the NAT listener called VLAN / Tunnel Traffic. This option defines on which VLAN(s) the NAT listener should accept traffic. In order to mitigate the security flaws from the earlier example, we can define that the NAT listener should only be available on the internal VLAN, which would stop external clients from reaching the internal node through the NAT. However, the general guideline from F5 still applies: in terms of security, you should prefer SNAT over NAT.
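For reference, a NAT like the one in the outbound example above could be created with tmsh roughly as follows. This is a minimal sketch: the object name is hypothetical, the addresses are the ones from the example, and the optional second command restricts the listener to a VLAN named internal:

# tmsh create ltm nat update_nat originating-address 172.16.20.1 translation-address 1.1.1.200
# tmsh modify ltm nat update_nat vlans add { internal } vlans-enabled

Without the VLAN restriction the NAT listens on all VLANs, which is exactly the bi-directional exposure described above.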

Finally, NAT connections are not tracked by the BIG-IP system. This means that the public IP address which you assign to the NAT object cannot be reused as a virtual server or SNAT address. As there are so many disadvantages with NAT, they are rarely used.

NAT Traffic Statistics

To view statistics on traffic that is being processed by NAT objects, in the WebGUI go to Statistics > Module Statistics > Local Traffic. From the Statistics Type pull-down menu, choose NATs. You can also view the statistics using tmsh with the following command:

# tmsh show ltm nat all



Source Network Address Translation – SNAT

Source Network Address Translation, or Secure Network Address Translation, is a combination of Network Address Translation and Port Address Translation (NAPT). Unlike NAT, SNATs allow for a many-to-one mapping and have many other advantages. One of these advantages is that SNAT provides a more secure mechanism when translating internal IP addresses to publicly routable external IP addresses and vice versa. This is because a SNAT is unidirectional; it only listens for traffic coming from a specified origin address, not traffic destined to the SNAT address. SNAT also allows you to use one externally routable IP address for many different nodes on an internal network. This is achieved using port translation to create the uniqueness necessary to translate multiple internal IP addresses, enabling you to conserve precious public IP addresses. SNAT is also often used to solve routing complexities that arise when using a BIG-IP system in complex network environments, something we'll cover later in this chapter.

Why We Need SNAT

In a typical scenario, an external client connects to the IP address of a virtual server and a connection is established. A pool member is then selected, and another connection is established towards that pool member's IP address. When looking at the connection flow, the destination IP address is different depending on whether the connection is on the external or internal side. However, the source IP address will always remain the same, as this is the default behaviour of the BIG-IP system. When the pool member responds, the BIG-IP system will reply back to the client using the external connection. In other words, the traffic will originate from the virtual server IP address. This is necessary in order for the connection flow to work because the IP address used by the pool member is most likely private and cannot be routed on the Internet. Secondly, the client connected to the destination IP address of the virtual server and will expect a reply back from the same IP address. This is essential for the communication to work successfully. This is presented in the following diagram:



1. The client will establish a TCP connection to the virtual server and send its HTTP request, the destination IP address being the virtual server IP address.

2. When the BIG-IP system receives the HTTP request from the client, it selects a pool member based on the load balancing algorithm and initiates a separate connection towards that pool member's destination IP address. The source (client's) IP address will remain the same.

3. The pool member will respond back to the BIG-IP system, which will match the response against the existing session.

4. Once the BIG-IP system has matched the pool member's reply with an existing session, since the BIG-IP is using a Full Proxy Architecture it will use the external TCP connection to respond back to the client, meaning the source IP address of the virtual server is used instead of the pool member's. This ensures that we do not break the flow of the communication.



The communication between the client and the end-server relies entirely on the Full Proxy Architecture and on the fact that the IP addresses change depending on which TCP connection is being used, as presented in the previous diagram. Therefore, it is essential that traffic that passes through the BIG-IP system is also routed back through the BIG-IP system. Traffic will automatically return through the BIG-IP system when the following criteria are met:

▪ The end-servers are on the same subnet as the BIG-IP system.
▪ The clients sending the requests are on a different subnet than the server nodes.
▪ The end-servers use the BIG-IP system as their default gateway.

However, there might be scenarios and situations where the above criteria are not met, and other requirements need to be considered.

What happens if the clients and servers are on the same subnet? Servers will send their responses directly back to the clients (based on the unmodified source IP address of the requests) without the traffic passing through the BIG-IP system. The client will drop this traffic, as it expects the traffic to come from the virtual server IP address and is unaware of the real server(s).

What happens if the end-servers need to be configured to use a default gateway other than the BIG-IP system? Since the client source IP address is not translated when the request from the BIG-IP system is forwarded to the pool member, the pool member will send the response back through its default gateway instead of the BIG-IP, causing asymmetric routing and most likely packet loss.

We discuss asymmetric routing in the next section.

Typical Uses of SNAT

Pool Member's Default Gateway is Not the BIG-IP System

One of the most common needs for SNAT is where the end-servers (pool members) are not using the BIG-IP system as their default gateway. When the client connects to the virtual server, it establishes a connection between itself and the BIG-IP system. The BIG-IP system then selects a pool member based on the load balancing algorithm and forwards the request to the pool member. By default, the BIG-IP system only translates the destination IP address, which means it translates the virtual server IP address to the pool member IP address. The client source IP address remains the same.

Consider the scenario where the pool member uses a default gateway other than the BIG-IP system. It receives traffic from the client's IP address, and when it responds, it will check its routing table for an entry for the client's IP address or network. It will most likely not find one and will therefore use its configured default gateway instead. You could configure routes on every pool member for every known client network, but this is usually impractical (particularly for Internet-facing services) and prone to errors and omissions. The response will be sent to the default gateway, which in this case is a firewall. Since the firewall is using stateful packet inspection, it will drop the packets sent to it, as they do not relate to a connection of which it is aware (has seen established through it). This means that the response traffic from the pool member will never reach the end-client.



This scenario, where traffic is sent from a source to a destination on one path and takes a different path when it returns to the source is known as asymmetric routing. This scenario is described in the following diagram:

1. The client establishes a TCP session towards the virtual server on the BIG-IP system and sends its HTTP request, the destination IP address being the virtual server IP address.

2. When the BIG-IP system receives the HTTP request from the client, it selects a pool member based on the load balancing algorithm and initiates a separate connection towards that pool member's destination IP address. The source (client's) IP address will remain the same.

3. The pool member will respond back to the client, but since the pool member's default gateway is not the BIG-IP system, the response will be sent to a firewall instead.

4. The firewall is using stateful packet inspection and will therefore drop the packets sent to it, as they do not relate to a connection of which it is aware (has seen established through it).

5. Since the packets are dropped at the firewall, they will never reach the end-client and the webpage will fail to load.

We resolve this issue by enabling SNAT on the virtual server. With SNAT enabled, the BIG-IP system will translate both the destination IP address (by default) and the source IP address. This means that when the BIG-IP system establishes the internal TCP connection to the pool member, it will translate the virtual IP address to the pool member IP address and the client IP address to a BIG-IP address. When the pool member receives the request, it will send its response back to the BIG-IP system because the request originated from a source IP address which is owned by the BIG-IP system. Once the BIG-IP system has matched the pool member's reply with an existing session, since the BIG-IP is using a Full Proxy Architecture, it will use the external TCP connection to respond back to the client, meaning different source and destination IP addresses are used than the ones used in the internal TCP connection. The whole scenario is described in the following diagram:



1. The client establishes a TCP session towards the virtual server on the BIG-IP system and sends its HTTP request, the destination IP address being the virtual server IP address.

2. The BIG-IP system receives the request and, since SNAT is enabled on the virtual server, it will translate both the destination IP address (from the virtual server address to the pool member address) and the source IP address (from the client IP address to the floating self-IP address configured on the BIG-IP system). These addresses will be used for the internal TCP connection which the BIG-IP system establishes.

3. The pool member receives the request and responds to the source IP address, which is the floating self-IP address of the BIG-IP system.

4. The BIG-IP system receives the response from the pool member and matches it against the existing session. Since the BIG-IP is using a Full Proxy Architecture, it will use the external TCP connection to respond back to the client, meaning different source and destination IP addresses are used than the ones used in the internal TCP connection.

5. The client receives the response back from the BIG-IP system and, since the source IP address matches its initial request and an active session, the client will accept the response.

When SNAT is used, the IP address used for the translation differs depending on which SNAT mode you use. You can either create a list of usable IP addresses called a SNAT Pool or use SNAT Automap, where mainly the floating self-IP for the external (outbound) VLAN is used for the translation. In the previous example, we used SNAT Automap. We'll discuss both concepts later in this chapter.

Both Client and Pool Member Reside on the Same Network
In some scenarios, both the clients and pool members reside on the same subnet, but you still want traffic to flow through the BIG-IP system for load balancing and other functions. If the client sends its requests to the BIG-IP system, the default behaviour is to not translate the source IP address (the client IP address). The BIG-IP system just establishes a connection to the pool member’s IP address and sends the client’s requests to the pool member using that internal connection. The pool member then sends its responses directly to the client. When the client receives these responses, it will not recognise that they are related to the connection it established to the virtual server (or to any other connection for that matter) and will drop them. This scenario is displayed in the following diagram:

1. The client sends its request to the virtual server located on the BIG-IP system. The destination IP address is the IP address of the virtual server.

2. When the BIG-IP system receives the HTTP request from the client, it selects a pool member based on the load balancing algorithm and initiates a separate connection towards that pool member’s IP address. The source (client’s) IP address remains the same.

3. The pool member examines the source IP address, determines that it is located on the same network as itself, finds the MAC address entry for the client and then sends the response back directly to the client.

4. The client receives the response, but since the response is coming from a different source IP address and is not part of an active session it has established, it will drop the response. Therefore, the client will not receive the object it requested.

In order to solve this, we again activate SNAT on the virtual server which means that the BIG-IP system will translate the client IP address to one of its own addresses. This will result in the pool member sending the response through the BIG-IP system instead of sending it directly to the client. This is demonstrated in the following scenario:

1. The client sends its request to the virtual server located on the BIG-IP system. The destination IP address is the IP address of the virtual server.

2. The BIG-IP system receives the request and, since SNAT is enabled on the virtual server, it translates both the destination IP address (from the virtual server address to the pool member address) and the source IP address (from the client IP address to the floating self-IP address configured on the BIG-IP system). These addresses are used for the internal TCP connection which is established by the BIG-IP system.

3. The pool member receives the request and examines the source IP address. It determines that the source IP address belongs to the BIG-IP system and therefore sends the response back to the BIG-IP system.

4. The BIG-IP system receives the response from the pool member and matches it against the existing session. Since the BIG-IP is using a Full Proxy Architecture, it uses the external TCP connection to respond back to the client, meaning that different source and destination IP addresses are used than the ones used in the internal TCP connection.

5. The client successfully receives the response back from the BIG-IP system, and since the source IP address matches the initial request and the active session, the client accepts the response.



When SNAT is used, the IP address used for the translation is different depending on what SNAT mode you use. You can either create a list of usable IP addresses called an SNAT Pool or use SNAT Automap where mainly the floating self-IP for the external (outbound) VLAN is used for the translation. In the previous example, we used SNAT Automap. We’ll discuss both concepts later on in this chapter.

Internal Nodes in a Private Subnet Need to Share One External IP Address
Another example of an issue SNAT resolves is where there are multiple internal nodes located in a private subnet with addresses that are not routable on the Internet. These internal nodes need to be able to access the Internet, but you only have one external IP address. To solve this, you create a SNAT object on the BIG-IP system. The SNAT object acts as a listener and processes traffic it receives if it matches its configuration. The SNAT object translates the internal nodes’ IP (origin) addresses to the NAT IP address that is configured for the SNAT object. The external node responding to an internal node’s request can then use the externally routable translated NAT IP address when it sends back its response. The whole process is described in the following diagram:

1. The node sends a request to a resource located on the Internet. Since the node is configured with an internal IP address that is not routable on the Internet, the source address needs to be translated into a routable IP address so that the external resource can reply back to the node.

2. The BIG-IP system receives the request and matches the IP address configured on the node with one of the Origin Addresses in the SNAT object. The BIG-IP system then translates the Origin Address (the source address of the node) to the NAT Address configured for the SNAT object.

3. When the translation is complete, the BIG-IP system routes the request to the external resource located on the Internet. When the external resource responds back to the BIG-IP system, it matches the response with the current session and performs a reverse translation where the destination address is translated from the NAT Address to the Origin Address (the node address).

How to Configure SNATs
As you can see, SNAT can be used to solve many different network and routing issues and, depending on what the issue is, it needs to be configured through different methods. There are two ways you can configure SNAT:

▪ As a configuration setting on a virtual server.
▪ As its own object (listener) in the WebGUI, using the menu path Local Traffic > Address Translation.

SNAT Listener To configure a SNAT listener object, go to Local Traffic > Address Translation > SNAT List and click Create. Here, you can specify the Name of the SNAT object, Translation and Origin IP addresses. The Translation field contains the IP address to which client’s source addresses will be translated, and the Origin field contains the source IP address(es) that will be translated by this SNAT Object. In the Translation field, you can specify an IP address, use SNAT Automap or a SNAT Pool, which we’ll describe how to create in the following sections.
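For reference, the same kind of SNAT listener can also be created from the command line with tmsh. The following is only a minimal sketch; the object name, origin subnet and translation address are example values and should be adapted to your own environment:

tmsh create ltm snat snat_internal_servers origins add { 172.16.100.0/24 } translation 10.10.1.50

You can then verify the resulting object with tmsh list ltm snat snat_internal_servers.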

SNAT Automap will be described in greater detail later in this chapter.

SNAT Translation List Under Local Traffic > Address Translation > SNAT Translation List you can view all of the existing translation SNAT addresses that have been configured on the BIG-IP system. In this view, you can see if any particular SNAT is enabled and what NAT IP address translations it will use. If you click on the SNAT object, you can enable or disable it and also specify if it should synchronise the connection state with the standby device if high-availability is configured. It is also possible to specify whether the BIG-IP system should reply to ARP requests for the particular IP address, and also set a connection limit.



SNAT With a Virtual Server When you configure SNAT on a virtual server, the source IP address of the server-side connection will be translated. When the BIG-IP system establishes the connection to the pool member, it changes the client IP address to an IP address which the BIG-IP system owns. This will ensure that the response traffic will be sent back to the BIG-IP system. As we mentioned previously, SNAT is a unidirectional address translation which means it only translates the source address, based on the source IP address. In the case of a virtual server, this is the source IP address of the clients. When enabling SNAT on a virtual server, the translation address that is used depends on what SNAT method you configure and its characteristics. You can either use SNAT Automap, in which case the egress VLAN is a factor or a SNAT Pool, where you specify the translation address(es). We’ll cover these in the following sections.

SNAT Pool A SNAT Pool is a collection of IP addresses that you manually specify. These IP addresses will then be used for the address translation. In order to configure a SNAT Pool on a virtual server, you will first have to create a SNAT Pool that includes the IP addresses that you would like to use. To configure a SNAT Pool, go to Local Traffic > Address Translation > SNAT Pool List and click Create. Here, you will give the SNAT Pool a name and specify the NAT IP addresses that will be available in the pool. When you have added the IP addresses, create the pool by clicking Finished.

A SNAT Pool can also be referenced on a SNAT Listener.

If multiple addresses are assigned to a SNAT Pool, it will use the Least Connections load balancing algorithm to choose the next IP address in the pool.
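If you prefer the command line, a SNAT Pool can also be created with tmsh. This is a sketch with example names and addresses; the member addresses would normally be spare addresses in the egress VLAN’s subnet:

tmsh create ltm snatpool web_snat_pool members add { 10.10.1.51 10.10.1.52 }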

In the following diagram you can see how a translation can be performed using a SNAT Pool:



SNAT Auto Map
SNAT Auto Map is the most common address allocation method used when configuring SNAT on a virtual server. The main reason for this is that it is easy to understand and enable, because it uses self-IPs that are already assigned to and configured on the BIG-IP system. The BIG-IP system can use any of the self-IPs configured on the device to perform SNAT. When SNAT Automap is enabled, the preferred IP address is the floating self-IP address. The floating self-IP address is the same as a regular self-IP address, but it is shared between BIG-IP devices if they are configured in a High-Availability setup. In our previous examples, this means that the client IP address will be translated to the BIG-IP system’s floating self-IP address before the request is forwarded to the pool member. This is displayed in the following diagram:



To determine what self-IP is a floating IP, you can review the configuration of Network > Self IPs. A floating IP address is part of a traffic group that is shared among two BIG-IP systems in a High Availability setup. If the BIG-IP system is not configured in a High-Availability setup, then it will simply use the non-floating self-IP addresses.

We’ll discuss High Availability in the next chapter.

A non-floating IP address is identified by its traffic group which is by default called traffic-group-local-only. An example is displayed in the following diagram:

In summary, the BIG-IP device will perform the following selection of addresses when Automap is enabled:

1. The floating self IP address of the egress VLAN (the egress VLAN being the VLAN used to reach the pool members)
2. The floating self IP address of a different VLAN
3. The non-floating self IP address of the egress VLAN
4. The non-floating self IP address of a different VLAN

The reason the floating IP address is always preferred is that, in an HA setup, if a failover occurs the traffic will still reach the active node, because it is sent to the floating IP address which has moved during the failover. The only thing that has changed is the MAC address associated with the floating IP.

There is a feature available called MAC Masquerading where you create a virtual MAC address for each traffic-group. This MAC address will be unique and will float between the BIG-IP devices configured in a High-Availability setup. In that case, the MAC address will be the same at all times, but the ownership of this MAC address will change depending on which BIG-IP device owns the traffic-group. We’ll discuss MAC Masquerading and how you configure it in greater detail in the High-Availability chapter.

In some network scenarios, using the floating self-IP address of a different VLAN (instead of the non-floating IP of the egress VLAN) might cause more harm than good, because you may not have the correct routing configuration in your environment for the pool members to successfully return the traffic to the BIG-IP system. F5 recommends always configuring a floating self IP address on the VLANs where SNAT Automap will be used to translate egress traffic. Another valid option, if a floating self IP address is not configured, is to use a SNAT Pool instead, where you can specify exactly which IP addresses you want SNAT to use. The SNAT Pool should contain IP addresses that are part of the subnet of the egress VLAN.



How to Enable SNAT Auto Map on a Virtual Server In order to enable SNAT Auto Map on a virtual server you simply enable the option on the virtual server as shown:
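If you prefer the command line, the option can also be set with tmsh. This is a sketch; vs_http is an example virtual server name (the one used in the lab exercises later in this chapter), and web_snat_pool is a hypothetical SNAT Pool name:

tmsh modify ltm virtual vs_http source-address-translation { type automap }
tmsh modify ltm virtual vs_http source-address-translation { type snat pool web_snat_pool }

The first command enables SNAT Auto Map; the second shows the equivalent change if you want to use a SNAT Pool instead.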

Potential Issues for Server Applications When SNAT Translation is Used
One of the most common issues encountered when using SNAT to translate the client IP address into the BIG-IP system’s IP address is that you lose the original client IP address. Some end-servers need to log the client IP address of each request in order for the application to work. For those servers, when SNAT is used, the client IP address will always appear to be the BIG-IP system’s IP address. In order to overcome this issue, it is possible to add an HTTP header called X-Forwarded-For (aka XFF). The X-Forwarded-For header value contains the client’s IP address (before SNAT) and is added to the HTTP request being sent to the pool member (end-server). The end-server can then interpret the X-Forwarded-For header value and log this data instead. To enable the X-Forwarded-For header:

1. Log on to the WebGUI.
2. Go to Local Traffic > Profiles.
3. In the Services menu, click HTTP.
4. Click Create.
5. Type in the name you would like for your profile.
6. Select the Insert X-Forwarded-For check box.
7. From the Insert X-Forwarded-For menu, select Enabled.
8. Click Update.
9. Go to the virtual server on which you would like to enable the X-Forwarded-For header and select the HTTP profile you just created.
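The same profile can also be created and attached with tmsh. This is only a sketch; http_xff is an example profile name and vs_http an example virtual server:

tmsh create ltm profile http http_xff defaults-from http insert-xforwarded-for enabled
tmsh modify ltm virtual vs_http profiles add { http_xff }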

Port Exhaustion
As we mentioned earlier in the chapter, SNAT performs both address and port translation. For every connection, the client IP address is translated to one of a pool of addresses or, if Auto Map is used, to a self IP address of the BIG-IP system. During the translation, the client’s source port is also mapped to an available port for the translated address. By default, the BIG-IP system will try to use the same source port used by the client, but if that source port is already in use by another connection, a different, free port will be used. It is possible to control this behaviour on a per-virtual-server and per-SNAT basis. The following options are available:



▪ Preserve - This is the default behaviour. The BIG-IP system will preserve the source port during the translation unless that port is in use.

▪ Preserve Strict - This setting will preserve the source port even if it is in use by another connection. If the port is in use, the system does not process the connection; it will simply send back a TCP RST to the client. F5 recommends that you restrict the use of this setting in order to minimise the number of connection resets (collisions) that occur. Use this setting if at least one of these conditions is met:

  ▪ Clustered multi-processing (CMP) is disabled.
  ▪ The port is configured for UDP traffic (versions 9.6.1 to 11.2.1).
  ▪ The port is configured for UDP or TCP (11.3.0 and later).
  ▪ There is a one-to-one mapping between the virtual server IP address and the node address.
  ▪ The system is running in transparent mode (there is no translation of any other Layer 3 or Layer 4 field) or is configured for nPath routing.

Some applications require that the same source port is preserved, such as with nPath routing and some Session Initiation Protocol (SIP) implementations. nPath routing enables you to route outgoing server traffic around the BIG-IP system, sending responses directly back to a router or client. This traffic management method increases outbound throughput because the packets do not need to be transmitted to the BIG-IP system in order to be translated and forwarded to the next hop.

▪ Change - This setting specifies that the system should always use the next available port. It does not attempt to preserve the source port at all. By using this option, you make sure that a unique port is chosen for each new connection. This can be very helpful in some scenarios where the servers’ TIME-WAIT windows are longer than usual. In those scenarios, this option may help to avoid premature port reuse.

How to Change the Source Port Preservation for Virtual Servers
In order to change the source port preservation for a virtual server:

1. Log in to the BIG-IP WebGUI.
2. Go to Local Traffic > Virtual Servers.
3. Click on the virtual server which you want to change.
4. In the Configuration menu, select Advanced.
5. Change the Source Port setting to your desired value.
6. Click Update.
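The same setting is available from tmsh as the source-port property, which accepts preserve, preserve-strict or change. A sketch using the example virtual server vs_http:

tmsh modify ltm virtual vs_http source-port change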



Socket Pairs
In network terminology, a socket is a combination of an IP address and a port number assigned to one endpoint (client or server). This socket is part of a two-way communication link between two applications (the application running on the client and the application running on the server) on a network. A socket is bound to a specific port so that the connection can be matched to the application to which the data is supposed to be sent. The combination of two sockets (client and server) is referred to as a socket pair and consists of the following:

▪ Source IP address
▪ Source port
▪ Destination IP address
▪ Destination port

Each SNAT translation address is limited to 65,535 port numbers. This is a design limitation of the UDP and TCP protocols, since both use a 16-bit value in their header fields to specify the destination and source port (2^16 = 65,536 values, of which port 0 is not used). This means that each IP address is limited to only 65,535 SNAT translations (and thus unique connections). This may sound like a great deal, but in some networks and scenarios it may not be enough. SNAT can, however, theoretically process more than 65,535 connections as long as each socket pair is unique. Any given SNAT address can reuse an in-use source port as long as the remote socket (destination address and port) is unique. As long as it is, return packets can be correctly matched to the correct connection. This means that SNAT is no longer limited to a maximum of 65,535 concurrent connections. The example below demonstrates how this can work in a real-life scenario:

SNAT Address:Port      Destination Address:Port
10.10.1.33:6789        192.168.20.1:80
10.10.1.33:6789        192.168.20.1:443
10.10.1.33:6789        192.168.20.2:80
10.10.1.33:6789        192.168.20.2:443
10.10.1.33:6789        192.168.20.3:22

Port Exhaustion on a Virtual Server
By default, when the BIG-IP system forwards traffic to a pool member, the destination address and port are translated to the pool member’s configured settings. When enabling only one SNAT address on the virtual server, you are limited to 65,535 concurrent connections to each pool member, not overall. Since each pool member has its own IP address and port, this makes the socket pair unique and the SNAT will be able to establish 65,535 connections to each pool member. Exceeding the limit is referred to as port exhaustion, and no new server-side connections can be established when this occurs. As nodes may be configured in multiple pools and SNAT addresses used with multiple virtual servers, it’s safer to think of 65,535 connections as the maximum capacity for any SNAT address. If you need more than 65,535 connections, you can do the following:

▪ When using a SNAT Pool - Add more NAT addresses to the SNAT Pool.
▪ When using SNAT Automap - Add more floating/self-IP addresses to the BIG-IP system on the egress VLAN.



Monitoring Port Exhaustion
There is a way to monitor port exhaustion when it occurs but, obviously, it’s then too late to avoid its impact. Whenever port exhaustion occurs, an error message appears in /var/log/ltm. In the following output, you can see an example of how it may appear in the log:

01010201:2: Inet port exhaustion on 10.10.1.33 to 192.168.20.1:80 (proto 6)
01010201:2: Inet port exhaustion on 172.16.0.45 to 192.168.20.3:53 (proto 17)

This log message will appear whenever TMM detects port exhaustion for a traffic management object and is not specific to SNAT.
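If you want to check for this condition from the BIG-IP shell, you can search the log file mentioned above using standard Linux commands:

grep -i "port exhaustion" /var/log/ltm
tail -f /var/log/ltm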

Lab Exercises: NAT and SNAT

Exercise 8.1 – Configuring NAT to Directly Access an Internal Node

Exercise Summary
In this exercise, we’ll experiment with NAT. Instead of using a virtual server as a listener, we’ll use a NAT. The NAT will perform a one-to-one mapping between the external NAT IP address and the internal node address. In this lab, we’ll perform the following:

▪ Create a NAT.
▪ Observe its behaviour.

Exercise Prerequisites
Before you start this lab exercise, make sure you have the following:

▪ Network access to the BIG-IP system’s management port.
▪ One server configured on the internal network that can be load balanced to. This should already have been configured during the Building a Test Lab chapter. The server should be running multiple services.

Configuring the NAT
1. Open a browser session to https://192.168.1.245 and login using the admin credentials.
2. Navigate to Local Traffic > Address Translation > NAT List and in the upper right corner press Create.
3. On the Local Traffic > Address Translation > NAT List > New NAT page, specify the following configuration:

Local Traffic > Address Translation > NAT List > New NAT
General Properties
Name: NAT_to_1
NAT Address: 10.10.1.200
Origin Address: 172.16.100.1
When done, click Finished.

4. Navigate to the Local Traffic > Statistics page and select the Statistics Type NATs. The NAT you just created should appear in the list but without any data.
5. Open a new browser session to http://10.10.1.200 and perform a hard refresh using Ctrl+F5 5-10 times in order to make sure that you are not accessing the site using any cached content.
6. Go back to your BIG-IP system and click Refresh on the statistics page in order to get the latest data. Notice the change in the bits, packets and connections statistics.
7. Open up a new browser session but this time use HTTPS instead. Go to https://10.10.1.200 and perform a hard refresh 5-10 times.
8. Head back to the BIG-IP statistics page and click on Refresh. Did the statistics change? Why, or why not?
9. Open a terminal program (like PuTTY) and establish an SSH connection to 10.10.1.200. Log in using student/student.
10. Head back to the BIG-IP statistics page and click on Refresh. Did the statistics change? Why, or why not?
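For comparison, the same NAT can be created from the command line. This is a sketch based on the ltm nat object in tmsh; the property names may vary slightly between software versions, so verify against your release before relying on it:

tmsh create ltm nat NAT_to_1 originating-address 172.16.100.1 translation-address 10.10.1.200
tmsh list ltm nat NAT_to_1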

Expected Results In this lab, you should be able to access all services on 10.10.1.200. This is because the NAT listens on all ports instead of a specific port thus making NAT a very bad choice security wise. With a virtual server you can specify what port you should listen on which increases the security. You should also only connect to the node 172.16.100.1 as this is the one we configured as the Origin Address. Creating a NAT also allows traffic from the Origin Address to the NAT Address (in the opposite direction). In other words, the node 172.16.100.1 should now have access to the Internet as well but in our lab environment we have prohibited this. If you are unable to connect to the NAT, verify that the NAT and Origin Addresses are correctly specified.

Clean-Up
▪ Close all of your connections to the NAT.
▪ Delete the NAT_to_1 object and confirm that you are once again unable to access the node 172.16.100.1 directly.



Exercise 8.2 – Enabling SNAT Auto Map on a Virtual Server

Exercise Summary
In this exercise, we’ll experiment with the SNAT Auto Map feature. By default, the BIG-IP system does not translate the client IP address when establishing a connection towards a pool member. In order for communication to work, the pool members should return traffic back to the BIG-IP system, but if they use a different default gateway, asymmetric routing occurs and communications will fail. The SNAT Auto Map feature will translate the client IP address to the floating self-IP address of the egress interface in order to make sure that the traffic is returned to the BIG-IP system. In this lab we’ll perform the following:

▪ Apply the SNAT Auto Map feature on an existing virtual server.
▪ Observe the behaviour.

Exercise Prerequisites
Before you start this lab exercise, make sure you have the following:

▪ Network access to the BIG-IP system’s management port.
▪ One server configured on the internal network that can be load balanced to. This should already have been configured during the Building a Test Lab chapter. The server should be running multiple services.
▪ Created the virtual server vs_http.
▪ Created the virtual server vs_https.

Verifying the Current Behaviour (Without SNAT Auto Map)
1. Open up two browser sessions, one to http://10.10.1.100 and another to https://10.10.1.100.
2. On each page, you will be presented with the text “Client IP address”. This is the source IP address of the client connected to the server, i.e. the one you are presently using to access the web page (through the virtual server).

Configuring SNAT Auto Map
1. Open up a browser session to https://192.168.1.245 and login using the admin credentials.
2. Navigate to Local Traffic > Virtual Servers > vs_http and add the following configuration:

Local Traffic > Virtual Servers > vs_http
Configuration
Source Address Translation: In the drop-down list, select Auto Map
When done, click Update.


Verifying Configuration Changes
1. On the browser session connected to http://10.10.1.100, perform a hard refresh using Ctrl+F5. The source IP address should now be updated to 172.16.1.33, which is the floating self-IP address of the BIG-IP system assigned to the internal VLAN.
2. Compare the results to the browser session you have towards https://10.10.1.100. Is there any difference?

Expected Results
On the vs_http virtual server, you should see a change of the source IP address from your client IP address to the BIG-IP’s floating self-IP address. This will cause the pool members to always respond back to the BIG-IP system, no matter where the client requests originated. Since we left the virtual server vs_https unchanged, the source IP address remained the IP address of the client computer.

Clean-Up
▪ Configure the Source Address Translation setting on vs_http from Auto Map to None.

Chapter Summary
▪ Virtual servers are not the only objects that can be configured to listen for traffic. The additional objects are NAT and SNAT. Virtual servers, NAT and SNAT can all do address translation as part of their traffic processing, but they do it somewhat differently; each can be used in specific scenarios that will help you solve various challenges that may occur in your environment.

▪ When you configure SNAT on a virtual server, the source IP address of the server-side connection will be translated. This means that before the BIG-IP system forwards the traffic to the pool member, it changes the client IP address to an IP address that the BIG-IP system owns. This will ensure that response traffic will be sent back to the BIG-IP system.

▪ When enabling only one SNAT address on a virtual server, you are limited to 65,535 concurrent connections using that source address, to each pool member. This is possible as each pool member has its own IP address and port, allowing for unique socket pairs to be used to track each connection. The uniqueness may be different depending on your configuration, so consider 65,535 connections as an easy reminder of the maximum capacity for any SNAT address.



Chapter Review
1. What are the risks of using NAT as a translation method?
a. It might cause an uneven load for the pool members.
b. It might cause asymmetric routing.
c. It might overload the connection table.
d. The BIG-IP system will accept any traffic as long as it matches the NAT address.

2. Which of the following address mappings does SNAT use?
a. Many-to-one
b. One-to-one
c. Many-to-many
d. Multicast-to-unicast

3. Which of the following is true regarding SNAT?
a. SNAT is bi-directional which means you translate traffic passing in both directions.
b. SNAT allows you to use one externally routable IP address for many different nodes on an internal network.
c. SNAT provides a secure mechanism when translating internal IP addresses to publicly routable external IP addresses and vice versa.
d. SNAT is less secure than NAT.

4. Which of the following solutions can solve asymmetric routing?
a. Changing the default gateway of the BIG-IP system.
b. Enabling the SNAT Automap feature.
c. Changing the default gateway of the end-servers to point towards the BIG-IP system.
d. Enabling the NAT Automap feature.

5. Which of the following addresses will SNAT Automap use as its first choice?
a. The non-floating self IP addresses on different VLANs.
b. The non-floating self IP addresses on the egress VLAN. The egress VLAN is the VLAN on which traffic leaves the device.
c. The floating self IP addresses on different VLANs.
d. The floating self IP addresses on the egress VLAN. The egress VLAN is the VLAN on which traffic leaves the device.



Chapter Review: Answers
1. What are the risks of using NAT as a translation method?
a. It might cause an uneven load for the pool members.
b. It might cause asymmetric routing.
c. It might overload the connection table.
d. The BIG-IP system will accept any traffic as long as it matches the NAT address.

The correct answer is: d
It is important to remember that NAT will intercept and process any traffic as long as it matches the NAT Address. And since NAT is bi-directional, it listens for traffic on both sides of the BIG-IP system. Therefore, creating a NAT translation to allow servers access to the Internet also means you create a translation from the Internet to the servers, creating a significant security risk.

2. Which of the following address mappings does SNAT use?
a. Many-to-one
b. One-to-one
c. Many-to-many
d. Multicast-to-unicast

The correct answer is: a
Source Network Address Translation or Secure Network Address Translation is a combination of Network Address Translation and Port Address Translation (NAPT). Unlike NAT, SNAT creates a many-to-one mapping and has many advantages over NAT.

3. Which of the following is true regarding SNAT?
a. SNAT is bi-directional which means you translate traffic passing in both directions.
b. SNAT allows you to use one externally routable IP address for many different nodes in an internal network.
c. SNAT provides a secure mechanism when translating internal IP addresses to publicly routable external IP addresses and vice versa.
d. SNAT is less secure than NAT.

The correct answers are: b and c
SNAT provides a secure mechanism for translating internal IP addresses to publicly routable external IP addresses and vice versa. This is because SNAT is unidirectional and only listens for traffic coming from a specified origin address and not traffic destined to the SNAT address. SNAT also allows you to use one externally routable IP address for many different nodes in an internal network. It solves this issue by using port translation to create the uniqueness necessary to translate multiple internal IP addresses. This enables you to save and conserve external IP addresses, which are usually limited in most organisations.



4. Which of the following solutions can solve asymmetric routing?
a. Changing the default gateway of the BIG-IP system.
b. Enabling the SNAT Automap feature.
c. Changing the default gateway of the end-servers to point towards the BIG-IP system.
d. Enabling the NAT Automap feature.

The correct answers are: b and c
When SNAT Automap is enabled, the BIG-IP system will translate both destination and source IP addresses. This means that when the BIG-IP system forwards requests to the pool member, it will translate the virtual IP address to the pool member IP address and the client IP address to the BIG-IP address. When the pool member receives the request, it will send its response back to the BIG-IP system because the request has the source IP address of the BIG-IP system. When the BIG-IP system receives the response, it will reverse translate both the source and destination IP addresses.

Sometimes, the end-servers need to be configured to use a default gateway other than the BIG-IP system. Since the client address (source address) is not translated when the request from the BIG-IP system is forwarded to the pool member, the pool member will then send the response back through its default gateway, causing asymmetric routing. You can correct this by configuring the BIG-IP as the end-servers’ default gateway. However, this might not be a solution suitable for your environment.

5. Which of the following addresses will SNAT Automap use as its first choice?
a. The non-floating self IP addresses on different VLANs.
b. The non-floating self IP addresses on the egress VLAN. The egress VLAN is the VLAN on which traffic leaves the device.
c. The floating self IP addresses on different VLANs.
d. The floating self IP addresses on the egress VLAN. The egress VLAN is the VLAN on which traffic leaves the device.

The correct answer is: d
When the BIG-IP selects a self IP address using the Automap function, it uses the following order:

1. The floating self IP addresses on the egress VLAN. The egress VLAN is the VLAN on which traffic leaves the device.
2. The floating self IP addresses on different VLANs.
3. The non-floating self IP addresses on the egress VLAN. The egress VLAN is the VLAN on which traffic leaves the device.
4. The non-floating self IP addresses on different VLANs.



12. High Availability
As we grow more and more dependent on the internet and the concept of always being online, the need for high availability grows with it. In most common deployments, BIG-IP systems are deployed in pairs, which provides resilience against common, individual failures. This allows you to maintain the availability of the services supported by your application delivery infrastructure. When a BIG-IP system or network failure occurs and is detected, a failover can be initiated automatically (manual failover is also possible). The partner system will then take over and continue to process traffic. Sometimes this works without the user experiencing any application-level interruption to service or other indication that a failover has occurred.

These pairings are commonly configured in an Active-Standby setup, which means that one of the members is actively handling connections while the other is just standing by in case a failover occurs. This may seem like an expensive waste of resources, but in many organisations the cost of an outage can be much higher.


This type of setup is called a Sync-Failover Device Group and was introduced in BIG-IP v11. There are also Sync-Only Device Groups which we’ll discuss in greater detail later in this chapter. The Active-Standby solution is also beneficial from a management perspective. When you are performing an upgrade, you can install the new software on the standby member and then failover traffic to it in order to verify if the upgrade was a success. If not, you can just failover back to the other member and restore service and traffic processing. The same approach can also be taken when making major configuration changes. You can also configure the BIG-IP systems to operate in an Active-Active setup which means that both devices will process traffic during normal operation. In the event of a failover the traffic being processed by the failed member will be transferred to the other member. With this setup it is very important to ensure that each member in the pair can handle the full traffic load. If it cannot handle the load, the member taking over the traffic will soon also fail and you will have a complete outage. We’ll discuss active-active setup later in this chapter.

Configuring a Sync-Failover Pair In order to design your BIG-IP systems in an HA-pair, there are a few settings that have to be configured to get everything up and running. We’ll go through all of them in the following sections.

Device Trust In order to create any device group (Sync-Failover or Sync-Only) we must first establish a trust between the two members. This is called a Device Trust. The device trust is established with certificate based authentication through the signing and exchanging of x509 certificates. Devices on the network that trust each other are considered to be part of a trust domain and these devices can synchronise configuration, exchange failover messages and failover to each other when necessary. The local device (the BIG-IP system which you are currently logged on to) is part of what is called the local trust domain.

The Different Types of Trust Authorities
When you are creating a device trust, you will have two types of Device Trust Authorities to choose from: either you configure your BIG-IP system as a Certificate Signing Authority or as a Subordinate Non-Authority. You can also establish trust between Certificate Signing Authorities, which are then known as Peer Authorities. The purposes of and differences between all of these are as follows:

▪ Certificate Signing Authority (CSA) - When a BIG-IP system is configured as a Certificate Signing Authority it will be able to sign x509 certificates for another BIG-IP system, which means that it will be able to add BIG-IP devices to the local trust domain.

▪ Peer Authorities - Whenever two CSAs have established a trust with each other they will serve as “Peers” to each other. If one BIG-IP were to fail, the other one will serve as backup and can still add new BIG-IP devices to a Device Trust.

▪ Subordinate Non-Authority (SNA) - Since adding new BIG-IP devices to a device trust could be a potential security risk (especially if the BIG-IPs are located in a network zone with a lower security standard) you can join BIG-IP devices as Subordinate Non-Authorities. These devices will not be able to add new devices to a trust; they are only members of it.

In the WebGUI under Device Management > Device Trust you will have two tabs, either Peer List or Subordinate List. Whenever you choose to create the trust under the Peer List, both devices will be configured as a Certificate Signing Authority and they will each become a Peer to the other. If you create the trust under Subordinate List, the device that initiates the trust will become the Certificate Signing Authority while the other becomes the Subordinate Non-Authority.

Remember to keep the number of Certificate Signing Authorities high enough that you will still be able to add new devices to the trust even if some of them have gone offline.

The Importance of the BIG-IP Device Certificates
The BIG-IP system relies on its own SSL Certificates for administrative tasks and inter-device communications. Therefore, if these certificates were to expire, it would cause major issues to critical components, some of which are explained in the following list:

▪ Device Service Clustering (DSC) - BIG-IP systems use SSL Certificates to establish a trust with each other. This trust is used as a secure framework to synchronise configuration and perform failovers. If these certificates were to expire, the device trust would fail. If this happens during production hours, you will most likely experience a business impact.

▪ The WebGUI - The WebGUI is also dependent on the SSL Certificates. If the certificate expires you will still be able to access the WebGUI, so in terms of impact this is not a big issue.

▪ BIG-IP DNS (formerly GTM) Communication - BIG-IP DNS uses the device certificates to establish a secure trust between itself and other BIG-IP systems. If this certificate expires, it will no longer be able to communicate with the device that contains the expired certificate. If this happens during production hours, you will most likely experience a business impact.

In order to prevent the certificate expiring, when initially configuring any BIG-IP system make sure you generate a new self-signed certificate with a 10-year expiration date (instead of the default 1 year) and change the common name (CN) from localhost.localhost to the hostname of the device.
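A quick way to see how long a device certificate is valid is to inspect it with openssl from the BIG-IP shell. The path below is the device identity certificate listed later in this chapter; the same command works for the other certificate files:

openssl x509 -noout -subject -enddate -in /config/ssl/ssl.crt/dtdi.crt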

If you have a device trust established with a certificate that is about to expire, it is not the end of the world. However, you will have to plan a service window in order to renew it and re-establish the trust, as this operation will cause an outage. When upgrading the BIG-IP system, the same device certificates will be used after the upgrade. Therefore, if you use a 10-year certificate you will most likely outgrow the box before you have to renew the certificates.

Device Identity
Each BIG-IP system has an x509 certificate installed which is used to identify itself to the other BIG-IP systems within a trust domain. The Device Identity is a combination of the x509 certificate and additional information such as:

▪ Device name
▪ Host name
▪ Platform serial number
▪ Platform MAC address
▪ Certificate name
▪ Subjects
▪ Expiration
▪ Certificate serial number
▪ Signature status

You can view the BIG-IP system’s device certificate under Device Management > Device Trust > Identity.

The Device Discovery Process in a Local Trust Domain When a BIG-IP system joins a local trust domain it goes through a process called device discovery. During the device discovery the BIG-IP system and the peers exchange their device properties and device connectivity information. Since all of the peers exchange their information, if a BIG-IP system joins a local trust domain that already contains two BIG-IP systems, the system joining the device trust will exchange its device properties with the two other BIG-IP systems that are already a part of the local trust domain. After the exchange, the BIG-IP system that joined the local trust domain will have three sets of device properties, its own device properties together with each peer’s properties. During the exchange the device connectivity information is also learned for each of the other devices.

Important When Configuring a Device Trust
When configuring your device trust, there are a few things that you need to consider:

▪ Only BIG-IP systems currently running v11.x and above can join a local trust domain.

▪ You cannot manage device trusts through a device that is configured as a subordinate non-authority device. It can only be done through a certificate signing authority.

▪ Before you add a new device to a local trust domain, make sure that you have already configured the config sync, failover and mirroring addresses.

Adding a Device to a Local Trust Domain
Before you configure anything regarding the local trust domain, make sure that the BIG-IP system has a valid and working device certificate installed. In order to add a device to a local trust domain, do the following:

1. Log on to the WebGUI using a web browser.
2. In the main tab go to Device Management > Device Trust > Local Domain.
3. Go to the Peer Authority Devices page or the Subordinate Non-Authority Devices page (depending on what setup you need) and click Add.
4. Enter the management or self-IP address, admin user name and the password for the remote BIG-IP device.
5. Click Next.
6. Verify that the certificate, hostname and management IP address of the remote device are correct.
7. Click Finished.

You can now add the new member of the local trust domain to a device group.
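Device trust can also be managed from tmsh. The command below is a sketch of one commonly used form for adding a peer (certificate signing authority) device to the default Root trust domain; the IP address, device name and credentials are example values, and the exact syntax can differ between software versions, so verify it against the tmsh reference for your release:

tmsh modify cm trust-domain Root ca-devices add { 192.168.1.246 } name bigip02.example.com username admin password admin_password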

Resetting the Device Trust
Sometimes you will need to reset the device trust in order to manage the certificate authority of a BIG-IP system that is part of a local trust domain. Some of the tasks include:

▪ Regenerate the self-signed certificate on a device
▪ Import a user-defined certificate authority
▪ Retain the current authority (this applies only to certificate signing authorities)

In order to reset the device trust, do the following:

1. Log on to the WebGUI using your web browser.
2. In the main tab go to Device Management > Device Trust > Local Domain.
3. In the Trust Information area, click Reset Device Trust.
4. On the next page you get the option to choose the Certificate Signing Authority.
5. Once you have chosen the Authority Type, click Update.
6. Confirm your choice.

The device trust will stay intact as long as there are at least two devices available. In an HA setup containing three devices you can remove one of the devices, renew the certificate and add it back to the trust, as long as the remaining two devices are Certificate Signing Authorities.

Device Groups
Once you have configured your device trust you can assign the devices to a Device Group. There are currently two types of device groups.

Sync-Only Device Group
As the name implies, devices assigned to a Sync-Only Device Group only synchronise configuration data and will not fail over objects. A Sync-Only Device Group can contain up to 32 devices. Devices in a trust domain can be a member of more than one Sync-Only Device Group, and devices that are part of a Sync-Failover Device Group can simultaneously be a member of a Sync-Only Device Group.

Sync-Failover Device Group
When you are configuring an HA pair you will most likely use a Sync-Failover Device Group. This device group will synchronise configuration data and also synchronise failover objects. Should the primary device become unavailable, the secondary node can take its place and process traffic. A Sync-Failover Device Group can contain up to 8 devices.
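Once the trust is established, a Sync-Failover device group can also be created from tmsh. This is a sketch; the group and device names are examples and should match the device object names shown under Device Management > Devices:

tmsh create cm device-group failover_group devices add { bigip01.example.com bigip02.example.com } type sync-failover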

Administrative Folders The BIG-IP system uses folders as containers for various BIG-IP configuration objects and all folders can contain subfolders. All of the configuration objects located on the BIG-IP system are contained in folders. Virtual servers, pools and self-IP addresses are all examples of these configuration objects. The good thing with this concept is that you can choose what kind of folders (configuration) you would like to synchronise to the other device. You can either perform a full synchronisation or a granular one where you specify exactly what folders are synchronised. For every administrative partition on a device the BIG-IP system will create an equivalent administrative folder where the configuration objects of that particular partition are stored. The folders are identical to simple UNIX directories where you have a root ( / ) that is the parent of all folders. You can very easily create new sub-folders on the system using tmsh. If you have a partition called Dev you can use tmsh to create a subfolder in that partition called App_1 which will result in the hierarchy /Dev/App_1/. If you create a virtual server called vs_http within this sub-folder, it will result in the following hierarchy: /Dev/App_1/vs_http.
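The folder hierarchy described above can be reproduced with tmsh. This is a sketch that assumes the Dev partition already exists; the virtual server destination address is an example value:

tmsh create sys folder /Dev/App_1
tmsh create ltm virtual /Dev/App_1/vs_http destination 10.10.1.100:80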

Floating Self-IP Addresses
There are two types of Self-IP addresses assigned to a BIG-IP system in an HA pair (aside from the management IP address): the non-floating Self-IP address and the floating Self-IP address. The floating Self-IP address is linked to a traffic-group, which means that it only resides on the device to which the traffic group is currently assigned. As this address should always be available on the active system in an HA pair, it is a very good idea to configure the routers and servers within your network to use it as their next-hop/default gateway address, instead of a non-floating Self-IP address.

A traffic group is a collection of related configuration objects running on the BIG-IP system, and whenever a failover occurs, the objects within the traffic group will be transferred to the standby device in the HA pair in order to ensure that the traffic continues to be processed without a significant interruption. We cover traffic groups in greater detail later in this chapter.

If you instead point your routers and servers at a non-floating Self-IP address and that particular BIG-IP system is not currently assigned the traffic group, it will most likely not handle the traffic being sent to it and you will experience an outage. When a failover occurs, the MAC address for the floating Self-IP address will change and gratuitous ARP is used to inform other systems that a change has occurred and their ARP tables should be updated. Since the systems are using the floating IP address, they will just continue to send their requests to the same IP address but using a different MAC address. All of this will happen automatically.

MAC Masquerading
This feature is designed to optimise traffic flow during failover events. It minimises Address Resolution Protocol (ARP) communication, improves reliability and failover speed in lossy networks and improves the interoperability with switches that are slow to respond to gratuitous ARP requests. MAC Masquerading works by assigning a virtual MAC address to any defined traffic-group on the BIG-IP system. This is a unique address that will float (along with a floating Self-IP address) between BIG-IP devices and be active on the device where the traffic-group is assigned. You configure MAC Masquerading as follows:

1. Log on to the WebGUI using a web browser.
2. In the main tab go to Device Management > Traffic Groups and click on the traffic-group you wish to turn on MAC Masquerading for.
3. In the MAC Masquerade Address box, enter the virtual MAC Address.
4. When done, click Update.
5. Synchronise the configuration over to the peer BIG-IP device so that the MAC Masquerading address is synchronised to the other device.

Choosing a MAC Masquerade Address can be tough. However, F5 has created an AskF5 article that explains exactly how you can create a unique MAC Address. Please refer to the following article: K3523: Choosing a unique MAC address for MAC masquerade.
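From tmsh, the masquerade address is set directly on the traffic group. The MAC address below is only an illustration of a locally administered address; generate your own unique value as described in K3523:

tmsh modify cm traffic-group traffic-group-1 mac 02:01:23:45:67:89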

Synchronising the Configuration
In order to ensure that the secondary device in the HA pair can take over the traffic that has been failed over, we need to make sure that its configuration is the same as the primary device’s. We do this by synchronising the configuration. There are two types of configuration files: /config/bigip.conf and /config/bigip_base.conf.

/config/bigip.conf contains all of the settings that should be identical between BIG-IP systems in an HA pair, such as iRules, virtual servers, pools, NATs, SNATs, nodes etc. All of the settings in the bigip.conf file are synchronised between the systems.

/config/bigip_base.conf contains the settings that are unique to each device, such as network settings, VLANs and non-floating IP addresses. These are not synchronised between the systems. The general guideline is that the bigip_base.conf file is device specific, but there are configuration settings within the bigip_base.conf file that are actually synchronised between devices in an HA setup. For instance, the SNMP Access Allowed Addresses (the addresses that are allowed to retrieve SNMP data from the BIG-IP device) are synchronised, while the Contact Information and Machine Location, which are also part of the SNMP configuration, are not.

If the devices are part of a sync-failover device group and connection/persistence mirroring is enabled, the active device will also share its connection and persistence tables with the standby in order to create a swift and uninterrupted failover. The configuration synchronisation can not only be used to ensure systems have the same configuration, it can also be used to restore a system’s configuration.

The CMI Communication Channel in Detail
As we previously mentioned, the BIG-IP systems in a device trust establish a trust relationship using SSL certificates. When all of the components (device trust, device group, IP addresses) are defined and configured, the device group members will establish a communication channel to enable device group communication and synchronisation. The centralised management infrastructure (CMI) communication channel allows the mcpd process that runs on each BIG-IP system to exchange Master Control Program (MCP) messages and commit ID updates to verify which device has the latest configuration and should be the one synchronising its configuration.

The Master Control Program (mcpd) is one of the most important processes running on the BIG-IP system. Some of its primary functions are to receive and process configuration change requests from MCP clients and to validate configuration changes. When this process is unavailable, the BIG-IP system will experience the following impact:

▪ No traffic management functionality
▪ No system status can be retrieved or updated
▪ It cannot perform configuration changes

Other services also rely on the mcpd process and will be greatly affected if this process is unavailable. When the BIG-IP system establishes a connection with a device group member, it uses the following certificates:

▪ /config/ssl/ssl.crt/dtdi.crt (GUI location: Device Management > Device Trust > Identity) - The dtdi.crt is the identity certificate that is used by a device to validate its identity with another device.

▪ /config/ssl/ssl.key/dtdi.key (GUI location: N/A) - The dtdi.key is the corresponding key file used by a device to validate its identity with another device.

▪ /config/ssl/ssl.crt/dtca.crt (GUI location: Device Management > Device Trust > Local Domain) - The dtca.crt is the CA root certificate for the trust network.

▪ /config/ssl/ssl.key/dtca.key (GUI location: N/A) - The dtca.key is the CA root key for the trust network.

The device group members establish a communication channel as follows:

1. The local mcpd process connects to the local Traffic Management Microkernel (TMM) process using port 6699.
2. The local TMM on the BIG-IP system then establishes a secure connection to the peer TMM process using the SSL certificate /config/ssl/ssl.crt/dtca.crt. It connects to the ConfigSync IP address using TCP port 4353.
3. The peer TMM translates TCP port 4353 to port 6699 and passes the connection to the peer mcpd process.

When all of the connections are established, a full mesh has been created between the systems in the device group. When a device fails to establish a connection, the mcpd process will try to re-establish the connection every 5 seconds. The whole process is explained in the following diagram:

ConfigSync Operation in Detail ConfigSync operation is dependent on the CMI communication channel. The BIG-IP system uses commit IDs to determine which of the device group members has the most recent configuration. The configuration is transferred to peer devices as an MCP transaction. The whole process works as follows:

1. The BIG-IP administrator updates the configuration of a BIG-IP system.
2. The configuration changes are communicated to the local mcpd process.
3. This updates the commit ID on the local device, which is then transferred to the peer device over the CMI communication channel, informing the peer device that there is a configuration difference.
4. The BIG-IP administrator initiates a configuration sync using either the WebGUI or tmsh.
5. The mcpd process sends the new configuration and commit ID to the local TMM process.
6. The local TMM process on the device sends the configuration and commit ID to the remote TMM process over the CMI communication channel.
7. The remote TMM process translates port 4353 to port 6699 and connects to its mcpd process.
8. The remote mcpd process loads the new configuration into memory and writes it to the configuration files located on the device.
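Step 4, initiating the sync, can be performed from tmsh as well as from the WebGUI. This is a sketch, assuming a device group named failover_group as in the earlier example:

tmsh run cm config-sync to-group failover_group

The to-group direction pushes the local configuration to the group, while from-group pulls the group’s configuration onto the local device instead.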

Determine the State of a System When you are logged on to one of the devices in a HA pair you must sometime determine if you are logged on to the active or standby device. For example, when upgrading the software on systems in a HA pair you should always upgrade the standby system first. A quick way to determine the current state of a device is to view the upper left corner in the WebGUI. In an HA pair configured using the active/standby design one device will have the status Active and the other will have the status Standby:

You can also determine the state of the device at the CLI by observing the prompt. For example:

[admin@bigip01:Active:In Sync] ~ #

The statistics on each system will also be different as they are based upon that device's metrics alone and not the aggregate values of the HA pair.
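The same information is also available from tmsh. The exact output varies between TMOS versions, so treat the commands below as a sketch:

# Show whether this unit is Active or Standby and its synchronisation status
tmsh show /cm failover-status

# Show the failover state and how long it has been held
tmsh show /sys failover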

Force to Standby Mode There are times you may need to force the active device into standby mode, when performing upgrades for instance. When performing an upgrade you should always upgrade the standby device first. Once the upgrade is complete you will have to force the active device into standby mode in order to direct traffic to the newly upgraded device and verify the upgrade. When the upgrade has been verified then the upgrade can be performed on the remaining device. In order to force an active device into standby mode, use one of the following methods:



WebGUI – Method 1
1. Log on to the WebGUI using an administrative account
2. Go to Device Management > Traffic Groups > Select the traffic-group you would like to fail over
3. Click Force to Standby
4. In the next prompt, click Force to Standby again

WebGUI – Method 2
1. Log on to the WebGUI using an administrative account
2. Go to Device Management > Devices > Click on the active device
3. Scroll down to the bottom and click on Force to Standby

WebGUI – Method 3
1. Log on to the WebGUI using an administrative account
2. In the upper right corner, click on the current redundancy state text of ONLINE or ACTIVE
3. Depending on which text you click on, a different page will be presented. In either case, at the bottom both of the pages have a Force to Standby button, click it

CLI - tmsh
1. Log on to the CLI using an SSH client of your choice
2. When logged into the CLI, issue the following command:

tmsh run /sys failover standby
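If the system is running more than one traffic group you can also fail over a single traffic group rather than the whole unit. A sketch, assuming a traffic group named traffic-group-1:

tmsh run /sys failover standby traffic-group traffic-group-1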

Traffic Groups Previously we mentioned that during a failover, the floating self-IP address is transferred to the standby device in an HA pair. In fact, it is not the floating IP address itself that is being failed over. The floating self-IP address is assigned to a traffic group and during a failover it is the traffic group that is being failed over. A traffic group is a collection of related configuration objects running on the BIG-IP system. These objects are connected to a particular application and whenever a failover occurs the objects within the traffic group will be transferred to the standby device in the HA pair in order for the traffic to continue to be processed without causing a significant interruption. Therefore, you can say that traffic groups are floating objects because they are only present on an active device. When the traffic group has been transferred to the standby device, that device will change state to Active. A traffic group can only contain certain types of configuration objects, for instance floating self-IP addresses and virtual addresses. Virtual Addresses have their own configuration section and are automatically created when you create virtual servers. These can be found under Local Traffic > Virtual Servers > Virtual Address List.
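You can inspect the traffic groups and their current activity from tmsh; a minimal sketch:

# List the configured traffic groups and their settings
tmsh list /cm traffic-group

# Show which device each traffic group is currently active on
tmsh show /cm traffic-group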



When the active device experiences an outage and has to fail over, the traffic group will be transferred to the next available device in the device group. By default, the BIG-IP system will select the device with the lowest workload whenever a failover occurs. However, you can select which device you would prefer to assume control of the traffic group.

The Default Traffic Groups on a BIG-IP System When you are configuring the BIG-IP system for the first time using the Setup Utility or when you are upgrading your system from a previous version, the BIG-IP system will create two default traffic groups. These are described below:

▪ traffic-group-1 – This traffic group contains all of the floating objects that should be transferred to the standby device during a failover. These objects include floating self-IP addresses, virtual servers, iApp applications, NAT and SNAT listeners and VLANs.

▪ traffic-group-local-only – This traffic group contains all of the non-floating objects such as self-IP addresses for VLANs. Since this traffic group is not part of any device group, these objects will never failover to the standby device.



Traffic Group Failover Methods For each traffic group you have the ability to specify which device should be chosen in the event of a failover. This setting is dependent on the current load of each device in the device group. The settings that are available are described in the next section.

Load Aware Failover The default failover method is called Load Aware and its main goal is to make sure that the load on all devices in a device group is as even as possible. This method takes configurable values and calculates what is known as the Device Utilisation Score. The configurable values are:

▪ HA Load Factor - This is configured under each traffic group and it specifies a value of the load the traffic group presents the system relative to other traffic groups. It can be set from the range 1 to 1000 where the highest value is the traffic group that is expected to receive the most traffic. For instance, if we have three traffic-groups named TG_1, TG_2 and TG_3 and for TG_2 we expect traffic to be twice as much as TG_1 and for TG_3 we expect traffic to be thrice as much as TG_1. In this case we specify the following values:
   o TG_1 - HA Load Factor: 100
   o TG_2 - HA Load Factor: 200
   o TG_3 - HA Load Factor: 300
   By default this setting is set to 1.

▪ HA Capacity – This is configured under each device that is present in the device group and it specifies a value of the load that the device can handle. It can be set from the range 1 to 100,000 and the higher the value, the more load the device is expected to handle. Since you can build Sync/Failover groups containing BIG-IP devices with different performance capabilities, using the HA Capacity setting, you assign the highest value to the highest performing BIG-IP device. By default this value is set to 0, meaning it is turned off.

Using the HA Load Factor and the HA Capacity, the BIG-IP system makes an overall assessment based upon the following values:

▪ The HA Capacity of the Local Device

▪ HA Load Factor of Active Traffic Groups - This is the combined HA Load Factor for all traffic groups currently running on the BIG-IP device. For instance, if the system is currently running two traffic-groups, TG_1 and TG_2, where both are specified with an HA load factor of one (1), the combined HA load factor for the local active traffic groups is two (1+1).

▪ HA Load Factor of Potential Active Traffic Groups – This is the combined HA Load Factor for traffic groups that could potentially be failed over to the BIG-IP device. For instance, if the local device is expected to take over the traffic group named TG_3, the HA Load Factor for TG_3 will be taken into consideration.

With these values, the BIG-IP calculates the Device Utilisation Score that forms the basis for deciding which device should take over traffic in case of a failover. It is calculated in the following way:

Device Utilisation Score = ((HA Load Factor of Active Traffic Groups) + (HA Load Factor of Potential Active Traffic Groups)) / (HA Capacity of the Local Device)



The device with the lowest Device Utilisation Score will be the next potential device to take over traffic in case of a failover. In the following sections we present some calculation examples to further explain this concept.

How to Specify the HA Capacity Before you change the HA Capacity value, make sure that the device is part of a device group and that the device group contains three or more members. You might change this value because you have multiple types of hardware platform in a device group and you'd like to reflect the capacity and performance of each in the decision to select the best next-active device when a failover occurs. If all of the devices in the device group have the same capacity you can ignore this feature entirely. In order to specify the HA capacity do the following:

1. Log on to the WebGUI using an administrative account
2. Go to Device Management > Devices
3. Click on the device you would like to change the capacity for. This will open a page displaying the properties of the device
4. On this page, in the HA Capacity field, specify a relative numeric value that represents the capacity in comparison to the other devices in the device group
5. Click Update

If device 1 has half of the capacity of device 2 and a third of the capacity of device 3 in a device group you might configure the following values.



Device 1 (least capacity): 50
Device 2 (second highest capacity): 100
Device 3 (highest capacity): 150

Should device 3 fail, the active role should pass to device 2 and not to device 1. When configuring HA Capacity you must specify a value for each device in the device group.
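The same values could be set from tmsh. A sketch, using the hypothetical device names device1/2/3.f5lab.com:

tmsh modify /cm device device1.f5lab.com ha-capacity 50
tmsh modify /cm device device2.f5lab.com ha-capacity 100
tmsh modify /cm device device3.f5lab.com ha-capacity 150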

How to Specify the HA Load Factor The HA Load Factor setting is also used to configure load-aware failover and its purpose is to define the application traffic load that a certain traffic group has in comparison to other traffic groups. This is used to establish and determine the resources an active traffic group requires to function. To configure the HA load factor please use the following instructions:

1. Log on to the WebGUI using an administrative account
2. Go to Device Management > Traffic Groups
3. Click on the traffic group you would like to change the value for. This will open a page displaying the properties of the traffic group
4. On this page, in the HA Load Factor field, specify a relative numeric value that represents the application load in comparison to the other traffic groups
5. Click Update



The values you enter are highly dependent on the load each traffic group handles. Remember that they are relative to each other and if you have four traffic groups you can set values from the highest load down to the least load in a similar manner to HA Capacity. When configuring HA Load Factor you must specify a value for each traffic group in the device group.
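The HA Load Factor can also be set per traffic group from tmsh. A sketch, using the TG_1/TG_2/TG_3 values from the earlier example:

tmsh modify /cm traffic-group TG_1 ha-load-factor 100
tmsh modify /cm traffic-group TG_2 ha-load-factor 200
tmsh modify /cm traffic-group TG_3 ha-load-factor 300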

Calculation Example In the following example we have decided to leave the HA Load Factor at its default value to simplify the calculation. We have three BIG-IP systems with five different traffic groups. The first two devices, BIGIP-1 and BIGIP-2 run on older hardware, have the same capacity and will be configured with an HA Capacity of 10. BIGIP-3 runs on newer hardware that has three times the capacity of BIGIP-1 and BIGIP-2 and will be given an HA Capacity of 30. As we mentioned earlier, when calculating the Device Utilisation Score, you combine the HA Load Factor from each Active Traffic Group and Potential Active Traffic Group and divide it by the HA Capacity specified on the device. The device with the lowest Device Utilisation Score will be the next potential device to take over traffic in case of a failover. The whole calculation is summarised below:



BIGIP-1 BIGIP-1 has an HA Capacity of 10 and is currently assigned Traffic-group-1 which has an HA Load Factor of 1 and can potentially be assigned Traffic-group-2 which also has an HA Load Factor of 1. This results in a total HA Load Factor of 2. We then divide the total HA Load Factor by the HA Capacity resulting in a Device Utilisation Score of 0.2. (2/10 = 0.2)

HA Capacity: 10
Active Traffic Group: Traffic-group-1 (HA Load Factor 1)
Potential Active Traffic Group: Traffic-group-2 (HA Load Factor 1)
Device Utilisation Score: 2/10 = 0.2

BIGIP-2 BIGIP-2 has an HA Capacity of 10 and is currently assigned Traffic-group-2 which has an HA Load Factor of 1 and can potentially be assigned Traffic-group-3 which also has an HA Load Factor of 1. This results in a total HA Load Factor of 2. We then divide the total HA Load Factor by the HA Capacity resulting in a Device Utilisation Score of 0.2. (2/10 = 0.2)

HA Capacity: 10
Active Traffic Group: Traffic-group-2 (HA Load Factor 1)
Potential Active Traffic Group: Traffic-group-3 (HA Load Factor 1)
Device Utilisation Score: 2/10 = 0.2

BIGIP-3 BIGIP-3 has an HA Capacity of 30 and is currently assigned Traffic-group-3, Traffic-group-4 and Traffic-group-5, which all have an HA Load Factor of 1, and can potentially be assigned Traffic-group-1 which also has an HA Load Factor of 1. This results in a total HA Load Factor of 4. We then divide the total HA Load Factor by the HA Capacity resulting in a Device Utilisation Score of approximately 0.13. (4/30 ≈ 0.13)

HA Capacity: 30
Active Traffic Groups: Traffic-group-3, Traffic-group-4, Traffic-group-5 (HA Load Factor 1, 1 and 1)
Potential Active Traffic Group: Traffic-group-1 (HA Load Factor 1)
Device Utilisation Score: 4/30 ≈ 0.13


Even though BIGIP-3 already has three active traffic groups, it has a higher HA Capacity which gives it a lower Device Utilisation Score. Since it has the lowest utilisation score it will be the next available device in case BIGIP-1 or BIGIP-2 experiences a failure. When you are configuring Load Aware failover, be very careful when modifying each value as it can have a significant impact if incorrectly configured.

Load Aware is the default Failover Method.

HA Order When all of the devices in a device group have identical capacity and performance you can specify an HA Order list where the BIG-IP system will failover to whatever device is next in the list. This is much easier to configure and understand than Load Aware Failover.



To configure HA Order:
1. Log on to the WebGUI using an administrative account
2. Go to Device Management > Traffic Groups
3. In the Failover Method field select HA Order
4. Add the devices to the list. You can switch the order by clicking the Up or Down button
5. Click Update
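The equivalent tmsh configuration assigns an ordered device list to the traffic group. A sketch, assuming the lab device names:

tmsh modify /cm traffic-group traffic-group-1 ha-order { bigip1.f5lab.com bigip2.f5lab.com }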

HA Groups HA Groups are a failover method that calculates an overall health score for a device in a device group based on the number of members that are currently available for any trunks, pools and clusters in the HA group. This availability is combined with a weight that you assign to each trunk, pool or cluster. The device that has the best overall score at any time will become or stay active. A cluster is an entity used with VIPRION systems. VIPRION systems have multiple slots in their chassis which work together to form a single and powerful unit. Therefore, only VIPRION systems have the ability to use clusters in their HA Groups.

The most common usage of the HA group feature is to ensure that a failover occurs whenever a specific number of trunk members become unavailable. Trunks are never synchronised between devices which means that the number of trunk members can be quite different on each device. Remember that the meaning of trunk is different in F5 terminology. A trunk is formed when you combine multiple physical interfaces into a single virtual one in order to increase its speed and reliability. This feature uses Link Aggregation Control Protocol and was covered in the 101 Application Delivery Fundamentals Study Guide.

The HA group feature is disabled by default and when enabled only one HA group can be created per BIG-IP system. The major benefit of using HA Group is a feature called fast failover. Since the HA group determines the active device based on an overall health score it is much faster than traditional failure detection methods such as hardware or daemon failure. When activating the HA groups feature make sure that your redundant pair is configured to use network failover instead of hard-wired failover. Network failover must be configured in order for HA groups to work. The BIG-IP system uses the following criteria when calculating the overall HA score:


▪ The number of available members for each object (trunk, pools, clusters)
▪ The weight that has been assigned to each object in the HA group
▪ The active bonus value that you specify for the HA group
▪ Optional: The threshold that is specified for each object
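HA groups can also be defined from tmsh as /sys ha-group objects. The sketch below is illustrative only; the group, trunk and pool names are made up and the exact attribute names may differ slightly between TMOS versions:

tmsh create /sys ha-group lab_ha_group \
    trunks add { external_trunk { weight 30 } } \
    pools add { web_pool { weight 20 } } \
    active-bonus 10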


Auto-Failback By default, when a BIG-IP system (configured in a Sync-Failover device group) experiences a failure it will fail over its traffic to another device. That device will be the active device until it is manually forced to standby or experiences a failure itself and has to fail over to another device in the device group. The device it fails over to depends entirely on the failover method specified in the configuration. When configuring failover, you have the option to configure what is known as auto-failback. In case of a failover to another device this feature will automatically fail back to the original device if and when it is available to process traffic again. This happens even if there are other devices in the device group which are more eligible to process the traffic.

Auto-Failback is turned off by default
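If you do want this behaviour, auto-failback is a per-traffic-group setting. A sketch from tmsh, where the 60-second timer is just an example value:

tmsh modify /cm traffic-group traffic-group-1 auto-failback-enabled true auto-failback-time 60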



Auto-Failback Feature is Not Compatible With HA Group It is very important to remember that Auto-Failback should not be used together with the HA Group feature. When a BIG-IP system is configured to use HA group as its failover method, the sod daemon is the process that determines which device should be active or standby and this is based on the HA score. The switch over daemon (sod) determines which device should become active or standby and provides a failover and restart capability through a high-availability table. Whenever this daemon is not running you will not be able to perform failover. Any log messages regarding the sod daemon are logged to /var/log/ltm.

If a traffic group is configured with auto-failback and HA group is currently used, whenever a failover occurs the system will automatically fail back to the original device whenever it becomes available again. In the meantime, the sod daemon will use its calculations to determine if the default device should be the active unit based on the current HA score. If the HA score is lower than a peer device, it will cause another failover to that peer device. Since the original device is considered to be available it will again cause a failover back to itself because of the auto-failback feature. This will result in numerous (potentially endless) failovers which will cause major problems. You should choose between the two features depending on which one best meets your needs.

Force to Standby Feature is Not Compatible with HA Group Another feature that is not compatible with HA Groups is the Force to Standby feature. As with the previous scenario, the sod daemon is the one determining the health status of a BIG-IP system. When you force a device to standby the active device will fail over to the standby device in the device group. The sod daemon is monitoring the health status of each BIG-IP system and the device that is taking over the traffic may not necessarily be the one with the highest health score. If this is the case, then sod will cause another failover to the device with the highest health score. In some scenarios you will be unable to fail over to the device you chose. Therefore, in order to use the force to standby function you should turn off HA Groups or set the device to a Forced Offline state.

Active-Active Redundancy In an active-active redundancy pair, both members process traffic simultaneously. This is different from an active-standby pair where only the active member processes traffic. It can offer the same failover functionality as an active-standby pair, but it is very important that the load on each device is below 50%. If the load on one device is at 60% and the load on the other is 50%, a failover will result in a single device trying to cope with a higher load than it has the capacity to handle. This is likely to cause a total failure. It can be hard to measure the load level on a device and it may differ depending on the time of day and other factors. All you can really do is calculate an estimate and try to keep the load below that margin. This is why most companies and organisations decide to run an active-standby solution. When configuring an active-active redundancy pair you must first create another traffic group. As we mentioned previously, a traffic group is a collection of related configuration objects running on the BIG-IP system.



These objects are connected to a particular application and whenever a failover occurs the objects within the traffic group will be transferred to one of the remaining devices in the device group. Let’s use an example. In this scenario we have two BIG-IP systems currently running in an active-standby redundancy pair. To process traffic on both BIG-IP systems at the same time, you will have to split traffic processing between the devices by creating another traffic group called traffic-group-2. As soon as the traffic group is created you will immediately see that both devices will be marked active in both the WebGUI and the CLI. You can see in the WebGUI that bigip01 will have traffic-group-1 as active and bigip02 will have traffic-group-2 as active.

When the traffic group is created you will have to create an additional set of floating self-IPs that bigip02 will take ownership of. For each existing floating IP address and associated VLAN, create a second floating IP address. This should result in two floating IP addresses per VLAN. When we create the new floating IP addresses, under the field Traffic Group we select traffic-group-2.



Since traffic-group-2 is currently present on bigip02 it will now own the new floating IP address and process traffic destined for this IP address. In order to split the virtual server traffic between the devices we’ll also have to change the traffic group assignments for the virtual addresses under Local Traffic > Virtual Servers > Virtual Address List. Click on the address you would like to transfer to traffic-group-2. On this page, change the Traffic Group setting to traffic-group-2 as displayed in the following diagram:



When configuring an Active/Active pair, something that can be confusing is the routing and next hop devices. For incoming traffic, when we assign the virtual addresses to a specific traffic group, the device that takes ownership of that traffic group will, by default, automatically ARP for those addresses. Then when the traffic group is failed over, the ownership of the virtual addresses is transferred to the new BIG-IP device and it will start ARPing for those as well.

For return traffic, as we have mentioned, traffic should always be routed to the floating self-IP address since we only want to send traffic to the device that is currently active. With two floating self-IP addresses, it is not clear which device currently owns a given floating self-IP address since they can be split between the two devices or, when a failure has occurred, be present on only one device. As with the virtual addresses, the self-IP addresses (both floating and non-floating) are ARPed by the device owning that particular self-IP. The only way to make sure that traffic ends up at the right device (right traffic group) is to use SNAT, where the easiest solution is most likely to use SNAT Automap.

The reason why you need SNAT is that, even though the end-servers are using the BIG-IP system as their default gateway, having two floating self-IP addresses means that you would need two default gateways and it would be impossible for the end-server to know where the traffic should be returned. By using SNAT Automap, the client source IP address will be translated into the floating self-IP address present in the traffic group and when the traffic group is failed over, so is the floating self-IP address. When the floating self-IP address is transferred to the other BIG-IP device, it will start to ARP for that address and the return traffic will naturally find its way back to the active device. In the following diagrams, you can see that traffic is distributed among the devices in the pair and when a failover occurs the traffic is failed over to the remaining device:
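Assigning a virtual address to the second traffic group and enabling SNAT Automap can also be done from tmsh. A sketch using the example addresses and the virtual server name from this scenario:

# Move the virtual address that vs_http listens on into traffic-group-2
tmsh modify /ltm virtual-address 10.10.20.200 traffic-group traffic-group-2

# Enable SNAT Automap on the virtual server so return traffic follows the floating self-IP
tmsh modify /ltm virtual vs_http source-address-translation { type automap }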





As you can see in the previous diagrams, traffic is being processed on both devices where the clients communicate with the virtual addresses existing in each traffic group. The clients’ requests are load balanced to the pool members and before the BIG-IP sends out the packet, it translates the source IP address to the floating self-IP address which also belongs to the traffic group. The pool member returns the traffic to the floating self-IP address, thus returning it back to the correct device (traffic group). Then, when a failover occurs, traffic-group-2 is transferred over to the other BIG-IP system which immediately starts to ARP for the addresses 10.10.20.200 and 172.16.1.34. For the client accessing vs_http, the IP address is still the same and the pool member will still return the traffic to 172.16.1.34. The failover should be instant and cause very little interruption for the users.



Failover Options In order to maintain functionality, the BIG-IP system has failover options that can be configured to automatically restart daemons, fail over to the standby device or reboot the system. These failover options include NIC (Network Interface Card) failures, network failures or failures within the BIG-IP system. All these failover options are the same no matter what redundancy design you are currently using. As soon as the criteria for a failure are met on an active device, it will perform the specified action.

HA Table All of the events that should cause a failover are stored in an HA table. The list contains the different features that can cause a failover and whether detection of them is enabled or not. It also contains the current state of the feature and what happens if the event is triggered. The daemons running on the device will constantly monitor the HA table in order to detect failures and if one is detected the specified action will be taken. For example, if you have configured a feature to cause a reboot, when the daemons notice that the feature is in a failed state the device will be rebooted. Currently these are the objects that can be monitored as high-availability features:

▪ Specific processes
▪ VLAN functionality
▪ The switchboard

The specific processes are the daemons running on the BIG-IP system such as mcpd, sod, tmm etc. and these are displayed in the following diagram:



The actions when a heartbeat failure occurs can be changed by clicking on the specific daemon and adjusting its values.
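The HA table itself can also be viewed from tmsh. The output format differs between versions, so this is only a sketch:

# Display the high-availability features, whether they are enabled and their failure actions
tmsh show /sys ha-status all-properties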

VLAN Failsafe VLAN Failsafe is a feature designed to perform automatic actions based on the loss of network activity on a specific VLAN. The available actions include failing over to a different BIG-IP device. When configuring VLAN Failsafe you first specify the VLAN on which you want to monitor network traffic along with a timeout value and lastly what action the BIG-IP should take if it does not receive any network traffic within the timeout value.



As the timeout period is consumed the device will start confirming that the VLAN is indeed not receiving or transmitting any network traffic by performing the following:

Half the Timeout has Expired - The BIG-IP device will initiate an Address Resolution Protocol (ARP) request for the oldest entry in the ARP table and initiate an ICMPv6 neighbor discovery probe if there are any entries in the BIG-IP IPv6 neighbor cache.

Three Quarters of the Timeout has Expired - The BIG-IP device will initiate ARP requests for all IP addresses in the BIG-IP ARP table, initiate an ICMPv6 neighbor discovery probe if there are any entries in the BIG-IP IPv6 neighbor cache and also initiate a multicast ping to 224.0.0.1.

If the BIG-IP system receives any successful reply (including ping responses) it will consider the VLAN to be functioning correctly. If the BIG-IP system does not receive any successful responses before the timeout period has been reached, it will trigger the specified action. These actions include:

▪ Reboot
▪ Restarting all services
▪ Failover and restarting the traffic manager (tmm) daemon
▪ Simple failover without restarting any services

VLAN Failsafe is disabled by default and should not be enabled on any VLAN until the BIG-IP system has a device available to test on that VLAN. An example of this would be a pool of members or a default gateway pool. If no devices exist on the VLAN the BIG-IP system will not receive responses to its various confirmation ARP and ping requests during periods of inactivity which will trigger the failure action unnecessarily. Like all VLAN settings, VLAN Failsafe is not synchronised within the device group

There are two ways that you can configure VLAN Failsafe, both detailed next.



Using the High-Availability Screen
1. On the navigation pane, go to System, hover over High-Availability and click on Fail-Safe
2. Under the Fail-Safe menu choose VLANs to open the VLAN Fail-Safe screen
3. In the upper right corner click Add
4. In the VLAN List select a VLAN you would like to enable VLAN Fail-safe on
5. In the Timeout box specify the amount of time the BIG-IP should not receive traffic for before marking the VLAN as down and performing the configured action (the default value is 90 seconds)
6. In the Action list select the action that you would like the BIG-IP to perform when the timeout value has been reached
7. Click Finished

Using the VLANs Screen
1. On the navigation pane go to Network and click VLANs
2. Click on the name of an existing VLAN or create a new one by clicking Create
3. From the Configuration list select Advanced
4. Turn on Fail-Safe by checking the box, this will display additional settings
5. In the Fail-Safe Timeout box specify the amount of time the BIG-IP should not receive traffic for before marking the VLAN as down and performing the configured action (the default value is 90 seconds)
6. In the Action list select the action that you would like the BIG-IP to perform when the timeout value has been reached
7. Click Update or Finished
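Both procedures map to the same per-VLAN properties in tmsh. A sketch for the internal VLAN using the 90-second default; treat the property names as indicative rather than authoritative:

tmsh modify /net vlan internal failsafe enabled failsafe-timeout 90 failsafe-action failover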

Gateway Failsafe The Gateway Failsafe feature is used for BIG-IP systems that have either multiple links to the Internet or more than one WAN link. When this feature is enabled the BIG-IP system will use the members of a Gateway Pool as a default route next hop and use health monitors to verify pool member status. If the number of available members drops below a specified value the pool is considered failed and this will result in either a failover, restarting all services or a reboot depending on what settings you have specified.

Failover Detection As we know when a failure has occurred on the active device the standby device will automatically become the active member and continue to process the traffic. There are two ways BIG-IP systems can communicate with each other in order to exchange this information.

Device Group Communication Hardware Failover The Hardware Failover method uses a serial cable in order to establish a connection with the other device in the HA pair. It is limited to only 2 devices in a Sync-Failover device group. The serial cable sends a continuous voltage signal to the paired device and if this is lost the standby unit will assume the active role within a few seconds. The cable length is limited to around 50 feet (15m). If you have a longer cable you may experience unexpected failovers as the signal in the cable will be weaker.



If the signal is lost because of a cable failure this will result in both systems going active at the same time resulting in a split-brain situation. Therefore, when implementing hardware failover make sure the distance between the two systems isn't too far.

Network Failover BIG-IP devices that are part of either a Sync-Only or Sync-Failover device group communicate with each other over the network. The information they exchange is different between Sync-Only and Sync-Failover device groups. In a Sync-Failover device group they not only exchange configuration but also connection and other state information. Network Failover is slower than Hardware Failover, but it removes the 50 feet (15m) limitation. The reason Network Failover is slower is because it should not react to temporary packet loss, so it takes longer to determine if the failure is real. Like the hardware failover method, if there is a loss of signal caused by a cable failure, both systems will assume the active role causing a split-brain situation. Network failover can be used in conjunction with hardware failover. In this scenario, a failover will only occur if both the hardware and network connection is lost. It is also possible to have multiple pairs of IP addresses between the paired devices. When this is configured, all network connections have to fail in order for a failover to occur. In order for network failover to work both systems have to have the same VLAN configuration and since this is not synced between the HA pair devices, it has to be manually configured on both.

Network Communication In order for all features within an HA pair to work you will have to make sure that the devices can communicate with each other over specific ports. These features can be divided into three types:

▪ Synchronisation – In v10.x the synchronisation process is performed over TCP 443. In v11.x/v12.x however it has been changed to TCP 4353 which is also referred to as iQuery.

▪ Mirroring – In order for connection tables and persistence tables to be mirrored to the other device you will have to make sure that the following ports are accessible on both devices:
   o BIG-IP 11.0.0 - 11.3.0: TCP 1028
   o Beginning in BIG-IP 11.4.0: TCP ports 1029 – 1043 - The BIG-IP system maintains a separate mirroring channel for each traffic group

▪ Device Status – In order to exchange health status and determine the active and standby role between two BIG-IP systems, you will need to make sure that the devices can communicate with each other over UDP 1026. In v10 this setting was optional but in v11 it has become the default setting.

Stateful Failover When a failover occurs and the standby device goes active and processes traffic, you want the transition to be as smooth as possible for the clients. In order to achieve this, information is exchanged between the devices in the HA pair. The information includes:



▪ Connection Information – The current status of network sessions
▪ Persistence Information – The current persistence records that determine which pool member the client last spoke to

Synchronising connection and persistence information is also known as mirroring. Mirroring is an on-going communication between the devices in the device group as they constantly exchange their real-time connections and/or persistence records. Whenever a failover occurs, the standby device is prepared and ready to support active and persistent connections because this information has been copied over to it. This will make the transition from the active device to the standby device seamless and real-time communications such as FTP will just continue from where they were prior to the failover.

Connection Mirroring Connection Mirroring is disabled by default and is an option for each individual virtual server. This feature should only be activated for real-time traffic such as FTP and SSH because it adds quite a lot of overhead on the device, since it needs to mirror the connection table between the devices in the device group. For instance, protocols like HTTP and DNS recover on their own since they are considered to be stateless. If there is a failover while you’re browsing a webpage, you will most likely only experience an outage if it happens during the same second that you are requesting the objects building up the page. In this case, you can simply refresh the page and you will end up on the new BIG-IP device, and the connection state you had earlier on the previous device is not needed in order to have the same experience as before the failover. Do note, Connection Mirroring only refers to syncing the state of the connection itself. For instance, if the webpage contains a shopping cart and it’s necessary that you end up at the same server to maintain state information present on that particular server, then you need a completely different feature called Persistence Mirroring, which we cover in the next section.

Persistence Mirroring Persistence Mirroring does not add the same overhead as Connection Mirroring and should be activated in most cases. One scenario where it is unnecessary is when cookie persistence is used. Since the cookie is only stored on the client, whenever a failover occurs the client will still provide its cookie when it is re-establishing its connection. Persistence mirroring will make sure that when a failover has occurred, the client will still connect to the same pool member if the persistence record has not yet reached its timeout value. Persistence mirroring is disabled by default.

SNAT Mirroring As we discussed in the NAT/SNAT chapter, NAT only performs address (not port) translation. When a failover occurs, traffic will simply continue to be translated as before. In other words, mirroring is not needed for NAT whenever a standby device assumes the active role and starts to process the traffic. SNAT on the other hand translates both IP addresses and ports in order for multiple nodes to use the same SNAT. It is therefore necessary to mirror the connection table containing the IP address and port mappings used in order for the standby device to continue processing the same traffic and keep track of the sessions. This feature is called SNAT Mirroring and is a property of each SNAT or Virtual Server that is currently using SNAT.



Considerations Regarding Stateful Failover Even though F5 offers many solutions and features regarding stateful failover, failing over traffic from one BIG-IP device to another does not mean the user experience won’t be affected. Complex proxy functionality such as SSL offloading and/or iRules use may rely upon state information which is not mirrored to the other devices in the failover group. For instance, before TMOS v12, stateful failover with SSL was not possible and even though this feature has been introduced in later versions it still has its limitations depending on what SSL synchronisation method you choose. In regard to iRules, these are only applied and run once, meaning that subsequent requests for the same session will not be run through the iRule. The first packet of a session and the state information is only kept in memory on the current BIG-IP device and is not synced to the other devices in the failover group. This means that the iRules may not perform their intended function on existing connections after the failover. With that said, when configuring stateful failover you will most likely minimise the impact, but you cannot be entirely sure that the user experience is unaffected. Therefore, you should always think twice before shifting traffic between BIG-IP devices.

How to Configure Stateful Failover To configure your BIG-IP devices to perform a stateful failover multiple steps are required, as follows:

▪ Specifying a local self IP address for connection mirroring (required)
▪ Enabling connection mirroring on a virtual server
▪ Enabling connection mirroring on a SNAT
▪ Enabling persistence mirroring on a persistence profile

All of these steps are covered in detail in the following sections.

Specifying an IP Address for Connection Mirroring When you initially configure your HA pair, you specify a local self-IP address which other devices in your device-group use when mirroring their connections. If you have not configured an IP address for connection mirroring this will have to be done locally for each device in the device group. In order to do so:

1. Log on to the device on which you would like to configure connection mirroring
2. On the main tab go to Device Management > Devices
3. In the list of device objects click on the device that you are currently logged on to
4. From the Device Connectivity menu choose Mirroring
5. For the Primary Local Mirror Address you can either retain the displayed IP address or you can select another one from the list. The recommended IP address is either the self IP of the HA VLAN or the Internal VLAN
6. For the Secondary Local Mirror Address retain the setting None or select an address from the list. If the primary address is unavailable the secondary local mirror address will be used
7. Click Update



Enabling Connection Mirroring on a Virtual Server In order to synchronise the connections of virtual servers you need to modify the configuration of each as follows:

1. On the main tab, click Local Traffic > Virtual Servers
2. In the list of all virtual servers click on the virtual server that you would like to enable connection mirroring for
3. From the Configuration list select Advanced
4. Go to the Connection Mirroring setting and select the check box
5. Click Update

Enabling Connection Mirroring for SNAT Connections In order to synchronise the connections of SNAT objects you need to modify the configuration of each as follows:

1. On the main tab click Local Traffic > Address Translation > SNAT List
2. Click on the SNAT object that you would like to enable connection mirroring for
3. Go to the Stateful Failover Mirroring setting and select the check box
4. Click Update

Enabling Mirroring of Persistence Records In order to enable synchronisation of persistence records between BIG-IP devices you need to modify the configuration of each individual persistence profile as follows:

1. On the main tab click Local Traffic > Profiles > Persistence
2. In the list of persistence profiles select the persistence profile you would like to enable mirroring for
3. Go to the Mirror Persistence setting and select the check box
4. Click Update
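The four settings above can also be made from tmsh. A sketch using the lab device and self-IP from the earlier exercises; the virtual server, SNAT and persistence profile names (vs_http, my_snat, my_persist) are examples only:

# Mirror address for this device (the self IP of the HA or internal VLAN)
tmsh modify /cm device bigip2.f5lab.com mirror-ip 172.16.1.32

# Connection mirroring on a virtual server
tmsh modify /ltm virtual vs_http mirror enabled

# Connection mirroring for a SNAT object
tmsh modify /ltm snat my_snat mirror enabled

# Persistence mirroring on a persistence profile
tmsh modify /ltm persistence source-addr my_persist mirror enabled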

Lab Exercises: High Availability Exercise 9.1 – Adding a Secondary BIG-IP Device Exercise Summary In order to proceed with the High-Availability lab exercises we need to add a secondary BIG-IP device to build the device service clustering with. Therefore, in this exercise we'll add a secondary BIG-IP device to the VMware Workstation Player as we did previously when initially building the lab environment.

Exercise Prerequisites Before you start this lab exercise make sure you have:

▪ Successfully installed VMware Workstation Player

Importing the F5 BIG-IP Virtual Machine Into VMware Workstation Player
1. Start VMware Workstation Player
2. You should be presented with a licensing screen. Simply select Non-Commercial use only and you will arrive at the Welcome to VMware Workstation 12 Player screen



3. Once you are at the welcome screen, click on the Player tab and select File > Open



4. Navigate to the location where you saved the OVA files and select BIGIP-12.1.2.0.0.249.ALL-scsi
5. This will launch the Import Virtual Machine wizard. Here you can rename the Virtual Machine and select where you want to store it. Make sure that the location you choose to store it has enough disk space. Once you are done, click Import

6. Next you will be shown the License Agreement for the Virtual Machine. Click Accept to continue



7. Now the Virtual Machine is being imported into VMware Workstation Player. This might take a while depending on the hardware you are using
8. Once the import is complete, the virtual machine should end up in the library list



Editing the Virtual Machine Settings for the F5 BIG-IP Virtual Machine
1. Start VMware Workstation Player
2. You should be presented with the library screen
3. Click on the virtual machine named BIGIP-12.1.2_HA_Member
4. Click on Edit virtual machine settings
5. Click on the network adapter at the top of the list

6. For the first Network Adapter assign it the LAN Segment called MGMT



7. For the second Network Adapter assign it the LAN Segment called External



8. For the third Network Adapter assign it the LAN Segment called Internal

9. Click OK to save the configuration for the virtual machine

Starting the F5 BIG-IP Virtual Machine
1. Start VMware Workstation Player
2. You should be presented with the library screen
3. Click on the virtual machine named BIGIP-12.1.2_HA_Member
4. Click on Play virtual machine. This will start the virtual machine
5. The screen will turn black and prompt the message: GRUB Loading Stage 2..
6. The startup of the BIG-IP might take up to 10 minutes



Exercise 9.2 – Configuring the Management IP Address Exercise Summary Since the existing BIG-IP device is configured with the default management IP address of 192.168.1.245, we need to configure a non-default management IP address on the new BIG-IP device in the pair. This is what we'll do in the following lab exercise:

▪ Launch the WebGUI and configure a new management IP address
▪ Confirm that we can access the device

Exercise Prerequisites Before you start this lab exercise make sure you have:

▪ Successfully installed VMware Workstation Player
▪ Successfully imported the new BIG-IP device into VMware Workstation Player

Changing the Management IP Address Using the F5 Management Port Setup Utility
1. Once the new BIG-IP device successfully boots up you will be presented with the login prompt. Login to the BIG-IP system using the username root and the password default
2. After you have successfully logged on, enter the command config. This will launch the F5 Management Port Setup Utility
3. On the first page, you will be informed that you have launched the F5 Management Port Setup Utility and that you have the ability to add an IP address, netmask and default route to the management port of the BIG-IP system



4. On the next page, you will be asked if you would like to configure the management port using DHCP, select No

5. On the next page, you will be asked to enter the IP address of the management port, enter 192.168.1.246



6. On the next page, you will be asked to enter the netmask, enter 255.255.255.0

7. On the next page you will be asked if you would like to configure a default route for the management port, select No



8. On the next page, you will be presented with a summary of your configuration. Please make sure it is configured as follows and click Yes

Verify the Connectivity to the New BIG-IP System
1. Open up a browser session to https://192.168.1.246. You will be prompted with a certificate error but this is normal. The BIG-IP system is shipped with a self-signed certificate which cannot be validated by the web browser. Accept the certificate, this will load up the logon screen.



Exercise 9.3 – License, Provisioning and Initial Setup of the New BIG-IP System Exercise Summary In this exercise we'll go through the Licensing, Provisioning and Initial Setup of the new BIG-IP system. These are the actions necessary to get your BIG-IP system up and running and you will learn the following:

▪ How to access your BIG-IP system using the WebGUI
▪ How to license your BIG-IP system
▪ How to provision your BIG-IP system
▪ How to create a baseline configuration using the Setup Utility

Exercise Prerequisites Before you start this lab exercise make sure you have:

▪ Network access to the BIG-IP system’s management port
▪ Obtained the BIG-IP system’s base registration key. How to do so is described in lab exercise Exercise 1.2.
▪ Access to the Internet

Access the WebGUI via the Management Port
1. Open up a browser session to https://192.168.1.246
2. Log in to the BIG-IP system using the default username admin and password admin. When logging on to the BIG-IP system for the first time you should be presented with the Setup Utility
3. Click Next to start
4. When the Setup Utility starts it will immediately go to the License page. Click the Activate button to start the licensing process

License Your BIG-IP System
1. Use the Base Registration Key in order to generate a dossier. If the base registration key is prepopulated, then follow the instruction present in 1a. If the base registration key is not prepopulated, then follow the instruction present in 1b
   a. If your Base Registration Key is prepopulated select activation method Manual and click Next

      Setup Utility > License
      General Properties
         Activation Method: Manual
      When done, click Next

   b. If your Base Registration Key is not prepopulated, enter the following values:

      Setup Utility > License
      General Properties
         Base Registration Key: Enter the base registration key you obtained in Exercise 1.2.
         Add-On Registration Key List: Leave blank
         Activation Method: Manual
      When done, click Next

2. Make sure that Manual Method is set to Download/Upload File
3. In the Step 1: Dossier area click Click Here to Download Dossier File. Save the dossier.do file on your computer
4. In the Step 2: Licensing Server area click Click here to access F5 Licensing Server. This will launch a new web browser session to the F5 Licensing Server
5. When you are at the Activate F5 Product web page, under Select Your Dossier File click on Browse
6. Browse to the dossier.do file you just downloaded and click Open
7. When done, click Next
8. On the Accept User Legal Agreement page check the I have read and agree to the terms of this license box
9. When done, click Next
10. On the next page click Download license. Save the license.txt file on your computer
11. Go back to your web browser session that is connected to the BIG-IP system’s WebGUI
12. In the Step 3: License area click Browse and browse to the license.txt file. Select the license.txt file and click Open
13. When done, click Next
14. You will be prompted with a white box stating, “BIG-IP system configuration has changed”. Once it is done click Continue and you will be presented with the Resource Provisioning page

Provisioning Your BIG-IP System
1. On the Resource Provisioning page provision your BIG-IP system using the following settings:

   Setup Utility > Resource Provisioning
   Module
      Management (MGMT): Small
      Local Traffic (LTM): Nominal
   When done, click Next



Your BIG-IP system may produce a warning message stating that certain system daemons may restart, or the system may reboot causing your web browser session to wait up to several minutes. This is normal when modifying the resource provisioning of the BIG-IP system.

Configuring the Device Certificates
1. Next you will be presented with the Device Certificates page. Keep the default values and move on to the next page by clicking Next.

Configuring the Platform Settings
1. On the Platform page, configure your BIG-IP system using the following settings:

   Setup Utility > Platform
   General Properties
      Management Port Configuration: Manual
      Management Port: IP Address [/prefix]: 192.168.1.246
                       Network Mask: 255.255.255.0
                       Management Route: Leave Blank
      Host Name: bigip2.f5lab.com
      Time Zone: Select the time zone appropriate for your location
   User Administration
      Root Account: Password: f5training / Confirm: f5training
      Admin Account: Password: f5training / Confirm: f5training
      SSH Access: Enabled
      SSH IP Allow: * All Addresses
   When done, click Next

You will be presented with a notice that you have changed the password and therefore have to log in to the device again.

2. Log back into the BIG-IP system using the admin account with password f5training. Once logged in you will be redirected to the Setup Utility > Network page


Performing the Standard Network Configuration
1. On the Setup Utility > Network page, under Standard Network Configuration, click Next
2. On the Setup Utility > Redundancy page ensure that it contains the following settings:

   Setup Utility > Redundancy
   Redundant Device Wizard Options
      Config Sync: Check the box Display configuration synchronization options
      High Availability: Check the box Display failover and mirroring options
                         Select Network for the Failover Method
   When done, click Next

3. Next, we’ll configure the VLANs and we’ll start with the Internal Network configuration. We’ll assign the VLAN’s self-IP address, netmask and network interface. On the Setup Utility > VLANs page enter the following settings:

   Setup Utility > VLANs
   Internal Network Configuration
      Self IP: IP Address [/prefix]: 172.16.1.32
               Network Mask: 255.255.0.0
               Port Lockdown: Allow Default
      Floating IP: Address: 172.16.1.33
                   Port Lockdown: Allow Default
   Internal VLAN Configuration
      VLAN Name: internal
      VLAN Tag ID: auto
      Select the following VLAN Interface and Tagging: VLAN Interfaces: 1.2, Tagging: Untagged
   When done, click Add. This should result in the following configuration: Interfaces 1.2 (untagged)
   When done, click Next

4. Next, we’ll configure the VLAN for the External Network configuration. On this page, enter the following settings:


   Setup Utility > VLANs
   External Network Configuration
      Self IP: IP Address [/prefix]: 10.10.1.32
               Network Mask: 255.255.0.0
               Port Lockdown: Allow None
      Default Gateway: Leave Blank
      Floating IP: Address: 10.10.1.33
                   Port Lockdown: Allow None
   External VLAN Configuration
      VLAN Name: external
      VLAN Tag ID: auto
      Select the following VLAN Interface and Tagging: VLAN Interfaces: 1.1, Tagging: Untagged
   When done, click Add. This should result in the following configuration: Interfaces 1.1 (untagged)
   When done, click Next

5. Next, we’ll configure the High Availability Network Configuration. For the High Availability communication, we’ll use the Internal VLAN. On the Setup Utility > VLANs page enter the following settings:

   Setup Utility > VLANs
   High Availability Network Configuration
      High Availability VLAN: Click the Select existing VLAN button
                              Select VLAN internal
   When done, click Next

6. On the next page we’ll be asked to configure NTP. This is not necessary for the lab exercises. Skip to the next page by clicking Next
7. On the next page we’ll be asked to configure DNS. This is not necessary for the lab exercises. Skip to the next page by clicking Next
8. Next, we’ll configure the local address used by ConfigSync. On the Setup Utility > ConfigSync page enter the following settings:


   Setup Utility > ConfigSync
   ConfigSync Configuration
      Local Address: 172.16.1.32 (internal)
   When done, click Next

9. Next we’ll configure the failover configuration. On the Setup Utility > Failover page, use the default settings specified in the following table:

   Setup Utility > Failover
   Failover Unicast Configuration
      Local Address | Port | VLAN
      172.16.1.32   | 1026 | internal
      192.168.1.246 | 1026 | Management Address
   Failover Multicast Configuration
      Use Failover Multicast Address: Unchecked (Disabled)
   When done, click Next

10. Next, we’ll configure the mirroring configuration. On the Setup Utility > Mirroring page use the default settings specified in the following table:

   Setup Utility > Mirroring
   Mirroring Configuration
      Primary Local Mirror Address: 172.16.1.32
      Secondary Local Mirror Address: None
   When done, click Next

11. Next, we’ll finish the Setup Utility as we’ll configure the BIG-IP system in a redundant high availability pair in the next lab exercise. Therefore, on the Setup Utility > Active/Standby Pair page, under Advanced Device Management Configuration click Finished

Once you are done with the Setup Utility you will be redirected to the Statistics page and at the top of the page you will be presented with the message Setup Utility Complete. This is shown in the following diagram:

12. Log out from the BIG-IP WebGUI by clicking the Log out button and close down your web browser


Exercise 9.4 – Creating an Active/Standby High-Availability Setup Exercise Summary Since we now have two BIG-IP devices we can establish a device trust and create an Active/Standby High-Availability Setup. In this lab exercise we’ll perform the following:

▪ Establish a device trust between the two BIG-IP devices
▪ Create a Sync-Failover Device Group which both devices will be a member of
▪ Perform a configuration sync
▪ Fail over traffic from one BIG-IP device to the other

Exercise Prerequisites Before you start this lab exercise make sure you have:

▪ Successfully installed VMware Workstation Player
▪ Successfully imported the new BIG-IP device into VMware Workstation Player
▪ Successfully performed the Initial Setup on two BIG-IP devices

Establish the Device Trust 1. 2. 3. 4.

Open up a browser session to https://192.168.1.245 Log in to the BIG-IP system using the username admin and password f5training Navigate to Device Management > Device Trust > Peer List and click Add On the Device Trust : Peer List page, enter the following:

Device Management Device Management > Device Trust : Peer List Remote Device Credentials Device IP Address 192.168.1.246 Administrator Username admin Administrator Password f5training When done, click Retrieve Device Information 5.


5. If you successfully retrieve the Device Information you should be presented with the Device Certificate. Click Finished in order to create the Device Trust.
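For reference, the device trust can also be established from the command line. The following is only a rough sketch of the tmsh equivalent, using the addresses and credentials from this lab:

(tmos)# modify cm trust-domain Root ca-devices add { 192.168.1.246 } name bigip2.f5lab.com username admin password f5training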


Creating the Sync/Failover Device Group
Now we have established a trust between the two devices we need to add them to a Sync/Failover Device Group in order for them to failover traffic-group-1 between themselves.

1. Navigate to Device Management > Device Groups and click Create
2. On the New Device Group page, enter the following:

Device Management > Device Groups > New Device Group
General Properties
Name: device-group-1
Group Type: Sync-Failover
Configuration
Members – Includes: bigip1.f5lab.com, bigip2.f5lab.com
Network Failover: Enabled (Checked)
When done, click Finished
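If you prefer working from the CLI, a Sync-Failover device group with the same settings can be created with a command along these lines (a sketch only; it assumes both device objects already exist on the system):

(tmos)# create cm device-group device-group-1 type sync-failover devices add { bigip1.f5lab.com bigip2.f5lab.com } network-failover enabled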




Once the Sync/Failover Device group is created you will receive the following notification under the Current ConfigSync State:

In order for either device to start operating in the device group they need to perform an initial sync to make sure the same configuration is running on both devices.

Performing a ConfigSync
ConfigSync is something that you will perform very often and in some configurations it will happen automatically (if you have enabled Automatic Sync under the device group). As we've said, there must be an Initial Sync in order for the device group to start functioning correctly. To perform this sync:

1. Navigate to Device Management > Overview or click on the Config Sync State text Awaiting Initial Sync. In the Devices list you have all of the devices that you can perform a ConfigSync with. It displays the current HA Status, the name of the device, the current Sync Status and the Configuration Time which displays when the last configuration change was performed. In the Devices list you select a BIG-IP system you would like to sync with and then select whether it should sync its configuration to the group or sync the group configuration to the device.
2. Click on bigip1.f5lab.com in the Devices list. When working with ConfigSync in production environments, be very careful which of these options you choose. You can potentially sync an old configuration that will overwrite the new one that you recently created.
3. Select the Sync Device to Group option and click Sync
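The initial sync can also be pushed from tmsh. As a minimal sketch, run the following on the device whose configuration should be copied to the group (bigip1 in this lab), then check the result:

(tmos)# run cm config-sync to-group device-group-1
(tmos)# show cm sync-status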



When done, the results should look like the following:



Failing Over the Traffic From the Active to the Standby Device
If the previous lab exercises have been successful you should now have two BIG-IP devices configured in an Active/Standby High Availability pair. In this exercise, we'll fail over traffic from the Active to the Standby Device. There are multiple ways of failing over the traffic but we'll do this by forcing a device into Standby Mode.

1. Open up a browser session to the currently active unit and log in to the BIG-IP system using the username admin and password f5training
2. Navigate to Device Management > Devices > The currently active unit, or click on the Current Redundancy State text ONLINE
3. Go down to the bottom of the page and click on Force to Standby
4. A message stating Force this Device to standby? will be shown, click OK
5. After around a second the Current Redundancy State will change from ONLINE (ACTIVE) to ONLINE (STANDBY)
6. Log on to the other BIG-IP device. What is the Current Redundancy State? It should be ONLINE (ACTIVE)
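The same failover can be triggered from tmsh on the active unit, which may be quicker when you are already working in the CLI:

(tmos)# run sys failover standby
(tmos)# show sys failover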

Chapter Summary

As we grow more and more dependent on the internet and the concept of always being online, the need for high availability grows with it. In most common deployments, BIG-IP systems are deployed in pairs which provide resilience against common, individual failures. This allows you to maintain the availability of the services supported by your application delivery infrastructure.

These pairings are commonly configured in an Active-Standby setup which means that one of the members is actively handling connections while the other is just standing by in case a failover occurs. This may seem like an expensive waste of resources but in many organisations the cost of an outage can be much higher.

In order to create any device group (Sync-Failover or Sync-Only) we must first establish a trust between the two members. This is called a Device Trust. The device trust is established with certificate based authentication through the signing and exchanging of x509 certificates.

Once you have configured your device trust you can now assign the nodes to a Device Group. There are currently two types of device group.

There are two types of Self-IP addresses assigned to a BIG-IP system in an HA pair (aside from the management IP address): non-floating and floating.

A non-floating Self-IP address is assigned to each device in the HA pair and these will always reside on the same device.

A floating Self-IP address is linked to a traffic-group which means that it only resides on the device where the traffic group is assigned.



A traffic group is a collection of related configuration objects running on the BIG-IP system and whenever a failover occurs, the objects within the traffic group will be transferred to the standby device in a HA pair in order to ensure that the traffic continues to be processed without causing a significant interruption.

The BIG-IP system also has the ability to monitor network traffic going through a VLAN. This feature is called VLAN Failsafe. It works by listening on a specific VLAN and if no traffic is detected within half of the timeout period the BIG-IP system will attempt to generate traffic on its own. It does this by pinging known devices on the VLAN.

The Hardware Failover method uses a serial cable in order to establish a connection with the other device in the HA pair. It is limited to only 2 devices in a Sync-Failover device group. The serial cable sends a continuous voltage signal to the paired device and if this is lost the standby unit will assume the active role within a few seconds. The cable length is limited to around 50 feet (15m).

When a failover occurs and the standby device goes active and processes traffic, you want the transition to be as smooth as possible for the clients. In order to achieve this, information is exchanged between the devices in the HA pair. This includes Connection Information and Persistence Information. This is known as Stateful Failover and the technique used is known as Mirroring.

Chapter Review

1. How many devices can a Sync-Only Device Group contain?
a. 32 devices
b. 8 devices
c. 16 devices
d. 4 devices

2. Which of the following configuration files are synchronised to the other devices in a Sync-Only/Sync-Failover group?
a. bigip_base.conf
b. bigip_running.conf
c. bigip.conf
d. bigpipe.conf

3. Which of the following Traffic Group Failover Methods calculates an overall health score for a device in a device group using trunks, pools or clusters?
a. Load Aware Failover
b. HA Order
c. HA Load Factor
d. HA Groups



4. Which of the following is considered a downside when using HA Groups as a Traffic Group Failover Method?
a. It may be slow with discovering a failover.
b. You are unable to use the Force to Standby feature.
c. It has a high risk of discovering false positives causing unnecessary failovers.
d. You are unable to use the Auto-Failback feature.

5. When configuring Active-Active Redundancy, which of the following is important to take into consideration?
a. The load on each device has to be lower than 50% at a minimum.
b. That both devices use the exact same hardware.
c. That it has the exact same hostname.
d. That the Device Group Communication is using the Hardware Failover method.

6. On version 11.x/12.x, which TCP port is used to synchronise the configuration between BIG-IP devices?
a. TCP 22
b. TCP 6699
c. TCP 443
d. TCP 4353

7. True or False: Connection Mirroring should be turned on for all connections?
a. True
b. False



Chapter Review: Answers

1. How many devices can a Sync-Only Device Group contain?
a. 32 devices
b. 8 devices
c. 16 devices
d. 4 devices

The correct answer is: a
A Sync-Only device group can contain up to 32 devices.

2. Which of the following configuration files are synchronised to the other devices in a Sync-Only/Sync-Failover group?
a. bigip_base.conf
b. bigip_running.conf
c. bigip.conf
d. bigpipe.conf

The correct answer is: c
The /config/bigip.conf file contains all of the settings that should be identical on both BIG-IP systems such as iRules, virtual servers, pools, NATs, SNATs, nodes etc. All of the settings in the bigip.conf file are synchronised between the systems.

3. Which of the following Traffic Group Failover Methods calculates an overall health score for a device in a device group using trunks, pools or clusters?
a. Load Aware Failover
b. HA Order
c. HA Load Factor
d. HA Groups

The correct answer is: d
HA Groups is a failover method that calculates an overall health score for a device in a device group based on the number of members that are currently available for any trunks, pools and clusters in the HA group. This availability is combined with a weight that you assign to each trunk, pool or cluster. The device that has the best overall score at any time will become or stay active.



4. Which of the following is considered a downside when using HA Groups as a Traffic Group Failover Method?
a. It may be slow with discovering a failover.
b. You are unable to use the Force to Standby feature.
c. It has a high risk of discovering false positives causing unnecessary failovers.
d. You are unable to use the Auto-Failback feature.

The correct answer is: b and d
It is very important to remember that Auto-Failback should not be used together with the HA Group feature. When a BIG-IP system is configured to use HA Group as its failover method, the sod daemon is the process that determines which device should be active or standby and this is based on the HA score. If a traffic group is configured with auto-failback and HA Group is currently used, whenever a failover occurs the system will automatically fail back to the original device whenever it becomes available again. In the meantime, the sod daemon will use its calculations to determine if the default device should be the active unit based on the current HA score. If the HA score is lower than a peer device, it will cause another failover to that peer device. Since the original device is considered to be available it will again cause a failover back to itself because of the auto-failback feature.

Another feature that is not compatible with HA Groups is the Force to Standby feature. As with the previous scenario, the sod daemon is the one determining the health status of a BIG-IP system. When you force a device to standby, the active device will fail over to the standby device in the device group. The sod daemon is monitoring the health status of each BIG-IP system and the device that is taking over the traffic may not necessarily be the one with the highest health score. If this is the case, then sod will cause another failover to the device with the highest health score.

5. When configuring Active-Active Redundancy, which of the following is important to take into consideration?
a. The load on each device has to be lower than 50% at a minimum.
b. That both devices use the exact same hardware.
c. That it has the exact same hostname.
d. That the Device Group Communication is using the Hardware Failover method.

The correct answer is: a
It is very important that the load on each device is below 50%. If the load on one device is at 60% and the load on the other is 50%, a failover will result in a single device trying to cope with a higher load than it has the capacity to handle. This is likely to cause a total failure.



6. On version 11.x/12.x, which TCP port is used to synchronise the configuration between BIG-IP devices?
a. TCP 22
b. TCP 6699
c. TCP 443
d. TCP 4353

The correct answer is: d
In v10.x the synchronisation process is performed over TCP 443. In v11.x/v12.x however, it has been changed to TCP 4353, which is also referred to as iQuery.

7. True or False: Connection Mirroring should be turned on for all connections?
a. True
b. False

The correct answer is: b
Connection Mirroring is disabled by default and is an option for each individual virtual server. This feature should only be activated for real-time traffic such as FTP and SSH because it adds quite a lot of overhead on the device, which needs to mirror the connection table between the devices in the device group.
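For reference, connection mirroring is enabled per virtual server. With tmsh this looks roughly as follows (the virtual server name here is just an example):

(tmos)# modify ltm virtual ssh_vs mirror enabled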



13. The Traffic Management Shell (tmsh)

As mentioned earlier, the BIG-IP system can also be configured using the command line interface (CLI) using what is known as the Traffic Management Shell, or tmsh. tmsh is used for administering the device and performing specific BIG-IP operations. You can also view statistics and performance data about the device. You may prefer the CLI over the WebGUI as many other network devices are managed over CLI, therefore making it more familiar to use.

There are many differences between the WebGUI and tmsh. Some tasks can only be performed using tmsh and some are far easier to perform using the WebGUI. tmsh also offers the possibility of scripting, where you write several tmsh commands that are utilised in a script. There is also a difference in regard to the speed of execution of configuration changes. When performing changes in the WebGUI and clicking Update, you can wait some time before you get back to the interface and can continue configuring the BIG-IP device, whereas with tmsh the changes are instant.

Overall, both tools will provide you with all of the functionality you will need in order to perform the tasks necessary to administer the BIG-IP system, and it all really comes down to personal preference.
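As a simple illustration of batching several tmsh commands without writing a full script, they can be passed to tmsh from bash in one call; the object names and addresses below are made up purely for the example:

config # tmsh -c "create ltm pool web_pool members add { 172.16.100.1:80 }; create ltm virtual web_vs destination 10.10.20.100:80 pool web_pool"

Each command is executed in turn and you are then returned to the bash prompt.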

Accessing the Traffic Management Shell (tmsh)
In order to access tmsh, you will first need to verify your access. To do so, review your user account under System > Users > User List and click on the name of the user.



The CLI access is referred to as Terminal Access and there are three available options from which to choose:

▪ Disabled – No shell access.
▪ Advanced Shell – Will provide unrestricted access to the terminal. This will provide access to both the Linux bash shell and the Traffic Management Shell (tmsh).
▪ tmsh – This will only provide access to the Traffic Management Shell (tmsh).

These settings are configured for each user and the available options will vary depending on the different user roles. For example, the Administrator and Resource Administrator roles will have all three options, while the roles such as Operator or Manager will only be given access to tmsh or no access.

We’ll discuss User Roles later in this chapter.

If Advanced Shell is selected, you will immediately land in the bash shell under the /config directory, which is indicated by the prompt: config #

The Traffic Management Shell can then be accessed using the command tmsh. When you have entered tmsh from bash you can go back by typing the command run util bash or quit. Just writing q is also possible as this is short for quit.

It is also possible to enter tmsh commands directly from bash and still stay within the bash shell. However, you will lose the ability to tab-complete the tmsh commands.

If a user is only configured with tmsh they will go directly to tmsh upon logon without the ability to enter the bash shell. Here is how the logon looks when logging on with Advanced Shell terminal access:

login as: user1
Using keyboard-interactive authentication.
Password:
Last login: Wed Dec 23 06:02:53 2015 from 10.200.15.10
[user1@bigip01:Active:In Sync] ~ #

Here is how the logon looks when logging on with tmsh terminal access:

login as: user2
Using keyboard-interactive authentication.
Password:
Last login: Wed Dec 23 05:59:54 2015 from 10.200.15.10
user2@(bigip01) (cfg-sync In Sync) (Active) (/Common)(tmos)#



Understanding the Hierarchical Structure of tmsh
In tmsh, there is a hierarchical structure very much like the file system hierarchy in Linux. It is built upon the following structure:

▪ tmos – This is the highest level of the hierarchy, which is often referred to as the root.
▪ modules – Underneath tmos we have the modules, which differ depending on the BIG-IP version, and access to them depends on the provisioning and licensing of the system. Examples of modules are gtm, ltm, asm, net, cm, sys.
▪ submodules – Some modules will also contain sub-modules. Examples of these would be monitor and profiles which are sub-modules of ltm.
▪ components – Components represent the actual configurable objects and are at the bottom of the hierarchy. Examples of these include node, pool, virtual (server) and self (IP).

In order to utilise the commands of a module, you will first have to provision it. For example, in order to issue the tmsh commands referenced by the /ltm module you will need to first provision LTM.



Entering tmsh Commands
If your user has Advanced Shell access you have two ways of entering tmsh commands.

▪ Enter the commands from the Linux bash prompt (config #) – For example, in order to display all of the virtual servers present on the BIG-IP system enter the following command:

config # tmsh list ltm virtual all

▪ Open the Traffic Management Shell first by typing tmsh and then enter the commands – This starts tmsh in interactive shell mode and displays the shell’s root prompt:

user2@(bigip01) (cfg-sync In Sync) (Active) (/Common)(tmos)#

If we want to issue the same command as previously, we would type the following:

(/Common)(tmos)# list ltm virtual all

When you have launched tmsh in interactive mode, you will be able to navigate in the hierarchy and enter the commands. We’ll cover this in greater detail in the next sections of this chapter.

The tmsh Prompt
In the tmsh prompt, you will be able to get a good understanding of the system’s overall health as the prompt contains valuable information. Using the following example, we can determine the following:

user2@(bigip01) (cfg-sync In Sync) (Active) (/Common)(tmos)#

▪ user2 – The name of the logged on user.
▪ bigip01 – Hostname of the device.
▪ (cfg-sync In Sync) – The device is configured in an HA pair and the configuration on the devices is the same. No sync is required.
▪ (Active) – The device is configured in an HA pair and this device has the active role.
▪ (/Common) – The partition that tmsh is currently pointing to.

Navigating the tmsh Hierarchy
As we have mentioned previously, tmsh is based on a modular structure in which you are able to navigate. It is important to remember how you move up and down in the hierarchy and between modules. In order to present some examples, the following diagram displays how to navigate between different modules, submodules and components:



Current prompt              tmsh command      Resulting prompt
(tmos)#                     ltm               (tmos.ltm)#
(tmos.ltm)#                 virtual           (tmos.ltm.virtual)#
(tmos.ltm.virtual)#         /sys software     (tmos.sys.software)#
(tmos.sys.software)#        /net              (tmos.net)#

The following diagram displays how you can navigate out of a lower level or even navigate out of tmsh entirely. You will go one step further up in the hierarchy until you finally reach the root, where you exit tmsh.

Current prompt              tmsh command      Resulting prompt
(tmos.ltm.virtual)#         exit              (tmos.ltm)#
(tmos.ltm)#                 exit              (tmos)#
(tmos)#                     quit              config #

In order to perform the above navigation more quickly, you can use the following commands:

Current prompt              tmsh command      Resulting prompt
(tmos.ltm.virtual)#         /                 (tmos)#
(tmos)#                     quit              config #

If the user is only configured with the Terminal Access of tmsh, when you enter the quit command, the session will be immediately terminated. It is also worth noting that you will return to the directory from which you originally entered the command tmsh. For instance, if you entered tmsh while you were in the /var directory you will be returned there when issuing the quit command.

Command Completion Feature
One of the great advantages of entering tmsh instead of issuing the commands directly from bash is the command completion feature. At any point when writing or editing a command in tmsh you can use the Tab key to auto-complete the current word or display the different possible options for completion. The list will only be displayed if the word you have written does not have a unique match. If you get multiple matches it will display the longest possible match of the word along with all of the other matches. For example, if you type li and then press [Tab], the final result would be list. It would also add a trailing space so that additional parameters can be entered. If tmsh displays nothing, it means that it did not find any matches.

It is also possible to complete a word when moving the cursor in tmsh. For instance, if you have the command li ltm virtual, move the cursor back to li_ ltm virtual and press [Tab], the command will look like this: list ltm virtual.



The command completion will also work at the object level where you can auto-complete names of the different objects on the BIG-IP system. For instance, we can write the following command:

(tmos)# list ltm virtual-address 10.10.15.1 [Tab]

And the end result would be:

(tmos)# list ltm virtual-address 10.10.15.100

Perform Wildcard Searches in tmsh
tmsh supports glob-based wildcard and regular expression (regex) searches. In our previous example, we used the command completion feature to auto-complete the virtual address 10.10.15.100, but if you would like to see all of the virtual addresses that begin with 10.10.15.1 you could just add a star (*). The entire command is written as follows:

(tmos)# list ltm virtual-address 10.10.15.1*

There are many different ways to use regex and glob to search for information via the CLI and you can read more about them using the following commands:

(tmos)# help regex
(tmos)# help glob

Context-Sensitive Help
tmsh also has a feature known as context-sensitive help which will guide a user on how to complete a command. At any place in a command or a word, the administrator can type a question mark (?) and tmsh will provide a list of available commands that can complete the command. If the question mark is added in the middle of a word, it will work similarly to auto-complete except it will only display the possible combinations without actually completing the command. This is displayed in the following output:

(tmos)# list ltm profile f?
Components:
fasthttp fastl4 fix ftp

When the question mark is used between two commands or parameters, tmsh will display the commands that can be used for that specific component. This is displayed in the following example:



(tmos)# list ltm pool ?
Options:
  all - Apply the command to all configuration items
  all-properties - Display all properties for the specified items
  non-default-properties - Display properties that have non-default values
  one-line - Display each configuration item on a single line
  recursive - Include sub-folders recursively
  | - Route command output to a filter
Identifier:
  [object identifier] - Name of the pool
Properties:
  "{" - Optional delimiter

Just like the auto-completion feature, the context-sensitive help can also display the objects configured on the BIG-IP system. To give you an example:

(tmos)# list ltm pool h?
Configuration Items:
http_pool https_pool

Manual Pages
Every command, module, sub-module and component in tmsh has a manual page (or man page). You can display the man pages by issuing the following commands:

(tmos)# help [command]
(tmos)# help [full path to component]

For example, to display the man page for configuring a virtual server, issue the following command:

(tmos)# help /ltm virtual

And if you would like to display the man page of the command modify then issue the following command:

(tmos)# help modify

It is also possible to search the man pages for specific terms and topics. To do this, issue the following command:

(tmos)# help search [term or topic]



Command History Feature
Every time you issue a successful command, it will be stored in the command history. The commands are displayed in the order in which they were entered, and each command is identified using an entry ID which increases with each entered command. The higher the entry ID, the more recently the command was entered. The command history can be accessed by issuing the command show /cli history and it can be entered from anywhere within tmsh. This will display the list of entered commands; to exit from this page type "q". There are multiple commands you can use with the command history list, and these are displayed in the table below:

Command              Function
cli show history     Displays a list of tmsh commands in the order in which they have been entered.
cli ! [entry_id]     After you have issued the show history command, re-runs the command specified by the entry ID.
cli !!               Re-runs the previously entered command.
cli ! [string]       Re-runs the command that began with the specified [string] value.

The tmsh Keyboard Map Feature
In order to work more efficiently with tmsh, a special keyboard map feature has been added. Using specific key sequences such as Ctrl + W will cause the tmsh shell to perform certain actions. In the following table you will be able to find all of the different keyboard maps with their associated actions:

Key Sequence      Action
Ctrl + A          Moves the cursor to the beginning of the line.
Ctrl + B          Moves the cursor to the left one character.
Ctrl + C          Cancels the current command.
Ctrl + D          Deletes the character under the cursor, or when the command line is empty, exits tmsh.
Ctrl + E          Moves the cursor to the end of the line.
Ctrl + F          Moves the cursor to the right one character.
Ctrl + G          Clears all characters from the command line.
Ctrl + H          Deletes the previous character.
Ctrl + J          Enters a new line and runs the current command.
Ctrl + K          Deletes all characters from the cursor to the end of the line.
Ctrl + L          Clears the screen, repositions the prompt at the upper left, and leaves the current command intact.
Ctrl + M          Enters a new line and runs the current command.
Ctrl + N          Displays the next item in the command glob.
Ctrl + P          Displays the previous item in the command glob.
Ctrl + Q          Resumes input.
Ctrl + R          Clears the screen, repositions the prompt at upper left, and leaves the current command intact.
Ctrl + S          Suspends input.
Ctrl + T          Transposes the character under the cursor with the character to the left of the cursor.
Ctrl + U          Deletes all characters before the cursor.
Ctrl + W          Deletes the word before the cursor.
Esc + B           Moves the cursor one word to the left.
Esc + D           Deletes all characters from the cursor to the end of the current or next word.
Esc + F           Moves the cursor one word to the right.
Esc + L           Changes the word to the right and the word under the cursor to lowercase.
Esc + N           Searches the command glob for the next item.
Esc + P           Searches the command glob for the previous item.
Esc + U           Changes the word to the right and the word under the cursor to uppercase.
Esc + Backspace   Deletes the word to the left of the cursor.
Backspace         Deletes the character to the left of the cursor.
Delete            Deletes the character to the right of the cursor.
Up Arrow          Scrolls back through the command glob.
Down Arrow        Scrolls forward through the command glob.

Managing BIG-IP Configuration State and Files
When you are configuring your BIG-IP device, it is very important to take into account how the underlying configuration is stored and what you need to do in order to save the configuration. As previously mentioned in this book, there are two different ways to configure the BIG-IP device, the WebGUI or tmsh. Depending on which method you are using, there are certain steps that need to be taken in order to save the configuration.



Introduction to BIG-IP Configuration Files and Structure
On the BIG-IP device, the configuration is actually stored in three different states:

▪ The text configuration – The text configuration contains all of the changes that have been saved by the BIG-IP system. These files are the bigip.conf and the bigip_base.conf.

▪ The running configuration – The running configuration contains all of the configuration stored in the text configuration plus the changes that have been made since the last save. This configuration is running in the memory of the BIG-IP system, which means that if the system is shut down while there is unsaved configuration, you will lose all of the unsaved settings.

▪ The binary configuration – Starting with BIG-IP 9.4.0, when the BIG-IP system first boots up the mcpd process builds a binary configuration which creates the following two files:
  o /var/db/mcpd.bin
  o /var/db/mcpd.info

When using the WebGUI, your changes will be automatically saved to all three configuration entities each time you save your configuration. This also includes the configuration you have saved to the running configuration, because each time you save your configuration in the WebGUI, the BIG-IP system issues the following tmsh command: save /sys config partitions all. This is displayed in the following output:



Feb 25 01:36:04 bigip02 notice tmsh[18474]: 01420002:5: AUDIT - pid=18474 user=root folder=/Common module=(tmos)# status=[Command OK] cmd_data=save / sys config partitions all

When using tmsh, the changes you have made to the system will only be applied to the running configuration. In order to save the changes to the text configuration and binary configuration, you will have to issue the command tmsh save /sys config. As previously mentioned, the changes will also be saved when an administrator performs a change through the WebGUI as it will also launch the command tmsh save /sys config partitions all.
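To illustrate the difference, consider this small sketch (the pool name is made up): the create command only changes the running configuration, and the save command then writes it to the text and binary configuration:

(tmos)# create ltm pool example_pool members add { 172.16.100.1:80 }
(tmos)# save /sys config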



Text Configuration Files
The text configuration is stored in multiple files under the /config directory. These files include:

▪ bigip.conf – This file contains objects for managing local traffic including virtual servers, load balancing pools, profiles, policies, SNATs and traffic group object associations.

▪ bigip_base.conf – This file contains the BIG-IP system specific configuration such as network components, which includes self-IP addresses, VLANs, interfaces, device trust certificates and traffic group definitions. When you synchronise the configuration between multiple BIG-IP devices, this file will not be transferred as it is device specific.

▪ bigip_gtm.conf – This file contains unique GTM configuration properties such as servers, datacentres with their respective virtual servers and wide IP addresses.

▪ bigip_user.conf – This file contains all user roles on the BIG-IP system.
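As a quick illustration of where objects end up, you can search the text configuration files directly from bash; the object types used here are just examples:

config # grep "ltm pool" /config/bigip.conf
config # grep "net vlan" /config/bigip_base.conf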

Binary Configuration Files
Prior to BIG-IP 9.4.0, changes made to the BIG-IP configuration were only saved to the running and text configuration (bigip.conf and bigip_base.conf). Whenever the BIG-IP system booted up, the text configuration would have to be parsed, validated and loaded into the MCP database and running configuration. This process was very time consuming and could slow down system performance on startup. To solve this issue, the developers enabled the functions Binary Load and Binary Save, which allow the BIG-IP system to save and load the configuration image directly from storage instead of the text configuration files, thus eliminating the need to validate, parse and load the text configuration files. This has made the system startup much more efficient. The BIG-IP system uses the following files to store its binary configuration:

▪ /var/db/mcpd.bin
▪ /var/db/mcpd.info

The binary configuration files are updated by the mcpd process, and exactly how this process works is not documented publicly. However, the binary configuration is updated whenever the mcpd process is working with the configuration in some way. After comparing the timestamps of the files and trying both the command tmsh save /sys config and tmsh load sys config, the binary configuration files were updated.

Sometimes, you might experience issues with the binary configuration and be forced to reload it from the text configuration. This can occur when the text configuration (/config/bigip.conf) and the running configuration (in memory), loaded into the mcpd process from the binary configuration (such as /var/db/mcpd.bin), are out of synchronisation. It can also occur when you are unable to load the configuration into the running configuration using the tmsh load sys config command. In order to forcefully reload the configuration, use the following instructions. Note that these instructions will require you to reboot your system, which might affect your environment.



1. Log on to the BIG-IP system using the CLI
2. Create an empty file named forceload under the /service/mcpd/ directory by typing the following command:

touch /service/mcpd/forceload

a. If you are performing the action on a VIPRION system, please issue the following command instead:

clsh touch /service/mcpd/forceload

3. Then reboot the BIG-IP system by typing the following command:

reboot

a. If you are performing the action on a VIPRION system, please issue the following command instead:

clsh reboot

Loading and Saving the System Configuration
All of the configuration changes applied on the BIG-IP system from within tmsh are loaded into the running-configuration. You can both save and load the entire configuration. When using the command tmsh save /sys config, the entire configuration currently existing in the running-configuration will be saved to the configuration files. This includes any and all changes made to the system since the last tmsh save /sys config command.

The same process can be performed in the other direction. Using the tmsh load sys config command, the current running-configuration is removed and replaced with a new version based on the configuration files located on the BIG-IP device. Do note that you might have to reboot the BIG-IP system depending on the configuration changes loaded into the running-configuration.



It is also possible to factory reset the device using the default configuration files located on the system. The default configuration is stored in the following file: /defaults/defaults.scf. Depending on which command you run on your BIG-IP device, the impact and consequences of the command are summarised below:

tmsh save /sys config
▪ This command has no impact on the BIG-IP system

tmsh load sys config
▪ Rebuilds all local traffic objects stored in bigip.conf
▪ Rebuilds all network objects stored in bigip_base.conf
▪ Rebuilds all system user accounts stored in bigip_user.conf
▪ Updates system maintenance account settings stored in bigip_user.conf
▪ Maintains the management IP address
▪ Maintains the BIG-IP license file
▪ Maintains the files stored under the /shared/ folder
▪ Maintains modified bigdb variables

tmsh load sys config default
▪ Deletes all local traffic objects
▪ Deletes all network objects
▪ Deletes all system user accounts
▪ Resets the password for the default system accounts (admin/admin and root/default)
▪ Maintains the management IP address
▪ Maintains the BIG-IP license file
▪ Maintains the files stored under the /shared/ folder
▪ Maintains modified bigdb variables

tmsh save /sys config saves the entire BIG-IP configuration no matter who performed the change. If the change is present in the running-configuration, it will be saved to the configuration files. The command tmsh load /sys config can only be performed by users with the role Administrator or Resource Administrator. All other roles will receive an error message.

Administrative Partitions
In version 9.4.0, Administrative Partitions were introduced. A BIG-IP device gives you the ability to create additional user accounts and assign specific user roles to each of these accounts. This is beneficial because it gives you the ability to divide your administrative tasks among different employees whilst limiting the access each user has to only that required. This role-based access control (RBAC) mechanism contributes to a strong security policy.



F5 has taken this a bit further and enables you to divide the configuration of the BIG-IP device into different Administrative Partitions. This enables you to segment your configuration into different application groups where each administrative group responsible for that application will get their respective rights. Each administrative group will only work in their own administrative partitions, thus prohibiting them from affecting other applications. To put this into perspective: if you have an Exchange team and a Citrix team, you can create one administrative partition for Exchange and one for Citrix. Each team would only get access to their respective partitions. This is very flexible and secure at the same time.

How Do Administrative Partitions Work?
An administrative partition is only a logical container for BIG-IP objects. These include virtual servers, pools, profiles, monitors etc. When dividing the objects into different containers, you change the administrative rights from having access to all resources to only the ones linked to your user account. Do note that User Roles will still have the same function as when you are not using administrative partitions. For instance, an Operator will still only have operator rights in their particular partition. You define which administrative partition a user has access to under the user account in the field Partition Access. You can either give access to only one partition or all (this is also known as universal access).



The following objects can be divided into separate partitions:

▪ Virtual Servers
▪ Pools
▪ Pool Members
▪ Nodes
▪ Custom Profiles
▪ Custom Monitors
▪ SSL Keys
▪ Certificates
▪ Certificate Revocation Lists
▪ iApp Templates and Application Instances

You can even create an administrative partition where all of the user accounts are configured. You can assign the IT Security department to this partition and they can be the only users able to create and manage other user accounts. This is just one of many beneficial ways you can divide the configuration of the device.

Referencing Objects in Different Partitions
The default partition that is created on the BIG-IP system on first startup is called Common. One advantage of the Common partition is that you can, from another partition, reference objects contained in Common. This is great because you will not have to recreate objects that already exist on the device. To give an example: in Common, you have created a client SSL profile with a wildcard certificate (same FQDN except for the last subdomain, such as *.test.com) called wildcard_test_com_ssl. You have two different HTTPS virtual servers, each in a different partition, one for Citrix and one for Exchange. They would like to use the same client SSL profile for their applications and they can easily do so by simply referencing the certificate in their configuration. The Common partition is considered to be the “home” for all other administrative partitions.



When you are referencing a node that was created in the Common partition and it is used as a pool member in Partition1, the pool member object itself will also be created in the Common partition. The pool object will belong to Partition1, but the pool member will belong to Common. If you do not create any additional administrative partitions, all configuration objects will be placed in the Common partition. When an object has been placed in its partition “folder”, it cannot be moved to another partition. It will first have to be removed and then recreated in the new partition.

Limitations With Administrative Partitions
Objects created in administrative partitions other than Common cannot be referenced by objects in Common or any other partition apart from the one they have been created in. This is limited for security reasons. Referencing can only go from each additional partition to Common. Another limitation is that, if an object has been created in one partition, it cannot be recreated in another one. For instance, if you have created a virtual server with the IP address 10.10.1.100 in Partition1 then you cannot create a virtual server with the same IP address in Partition2. Even though Partition1 may not be visible to you when working in Partition2, the BIG-IP system will know that the object already exists in Partition1, thus prohibiting you from creating it in Partition2.

Navigating Between Partitions
When a user logs on to the WebGUI, they will be automatically placed in the partition they are assigned to and will be unable to navigate to another. Users with universal access (access to all partitions) will automatically be placed in Common and have the ability to navigate to any other partition configured on the system. When logged on to the WebGUI, you can change the partition by going to the drop-down list called Partition that is located in the top-right corner next to the Log Out button. Click on the drop-down list and select the partition you would like to enter.

When you are logged on to the CLI and are currently in tmsh, you can easily change partition by typing the command:

(/Common)(tmos)# cd /[partition name]

If your user account has Universal Access, be sure to be in the correct partition when both troubleshooting and modifying the configuration. It can be very confusing if you are not aware that other partitions exist on the BIG-IP system.



How to Create Administrative Partitions
To create a new partition, you will need to be assigned the Administrator or Resource Administrator role. Go to System > Users > Partition List and click Create. Then just enter a name for the partition and decide if you would like to inherit the traffic group and device group configuration from Common. Click Finished when you are done.
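Partitions can also be created from tmsh. A minimal sketch, reusing the partition name from the earlier example:

(tmos)# create auth partition Exchange
(tmos)# list auth partition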

Effect of Load/Save on Administrative Partitions
In BIG-IP versions older than 11.3.0, the tmsh save /sys config command will only save the running-configuration of the administrative partition you are currently administering. The same applies when utilising the command tmsh load /sys config, which only loads the configuration stored in the administrative partition you are currently administering. In order to load or save the configuration of all partitions, this has to be specified in the command:

tmsh save /sys config partitions all
tmsh load /sys config partitions all

This was changed in BIG-IP version 11.3.0 and forward. Now, when utilising the command tmsh save /sys config or tmsh load /sys config, it will save/load the configuration in all administrative partitions, and this is still the default behaviour. If you need to save/load the configuration of a specific administrative partition, you can add the current-partition or partitions options to the tmsh load/save commands.

User Roles
One of the major aspects of securing access to your BIG-IP system is to make sure that the right people have access to the device, but also to make sure that they do not have more rights than they actually need. User Roles on the BIG-IP system are the means of controlling access to the BIG-IP system resources. Each administrative user will be assigned a user role and, depending on the role, will receive a certain set of permissions to the BIG-IP system resources. A user role defines:

The BIG-IP system resources that the administrative user can manage – For example, a user with the Certificate Manager role can only manage device certificates and keys and perform Federal Information Processing Standard (FIPS) operations, whereas a user with the Resource Administrator role will receive complete access to all partitioned and non-partitioned objects on the system.

The administrative tasks that a user can perform on those resources – For example, a user with the Auditor role will only be able to view configuration data but not create, modify or delete any data, whereas a user with the Administrator role will receive full access to the BIG-IP system.

The complete collection of all user roles and their capabilities is summarised below:



Administrator – This role will grant the user complete access to all of the objects on the BIG-IP system. This user role will also have access to all partitions on the BIG-IP system and this cannot be changed. Users with the Administrator role also have the permission to change their own passwords.

Resource Administrator – This role will grant the user complete access to all objects on the BIG-IP system except the user account objects. They will also have access to all partitions on the BIG-IP system and this cannot be changed. Users with this role also have the permission to change their own passwords.

User Manager – This role will enable the user to manage the user accounts on all partitions. They will be able to create, modify and view user accounts. They can also modify the password and enable/disable terminal access for any user account. Users with this role will also have the permission to change their own passwords.

Manager – This role will grant the user the permission to create, modify and delete virtual servers, custom profiles, pools, pool members, nodes, custom monitors and iRules. Users with this role will be able to view all objects on the system and they can also change their own passwords.

Certificate Manager – A user with this role will only be able to manage device certificates and keys. They can also perform Federal Information Processing Standard (FIPS) tasks.

iRule Manager – The iRule Manager role will only provide the user with the permissions to create, modify and delete iRules. They will not be able to add them to virtual servers nor move them from one virtual server to another. This user role can also be given universal access to administrative partitions.

Application Editor – This role will grant the user the permission to modify monitors, pools, pool members and nodes. Users with this role will be able to view all objects on the BIG-IP system and also change their own passwords.

Acceleration Policy Editor – A user with this role will be able to view, create, modify and delete all BIG-IP Application Acceleration Manager policy objects and Application Acceleration Manager profiles in all administrative partitions on the BIG-IP system.

Firewall Manager – This role will grant the user complete access to all firewall rules and supporting objects such as rules in all contexts, address lists, port lists and schedules. The Firewall Manager role can be given access to all partitions or limited to a single partition.

Web Application Security Administrator – This role will grant the user access to the BIG-IP Application Security Manager (ASM) security policy objects. Users with this role are limited to ASM only, thus restricting them from accessing other profiles such as HTTP or FTP. This user role has access to all partitions and this cannot be changed. It can only be assigned to a user if the ASM module is provisioned.

Application Security Editor – The Application Security Editor role will be granted access to view and configure most parts of the Application Security Manager module. Users with this role will not be able to access any other BIG-IP objects. However, they will be able to change their own passwords. This role also requires the ASM module to be provisioned.

Fraud Protection Manager – This role will grant the user permission to configure the BIG-IP Fraud Protection Service (FPS).

Operator – The Operator role will be granted permission to enable or disable nodes and pool members. These users can view all objects on the BIG-IP system and change their own password.

Auditor – This user role will only be granted read-only access but will be able to view all of the configuration data on the BIG-IP system. It will also be able to view logs and archives. What it cannot do is view SSL keys or user passwords. This role will have access to all partitions and this cannot be changed.

Guest – The Guest role will be able to view all objects on the system except for sensitive information such as archives and logs. Users with this role can change their own passwords.

No Access – As the name implies, users with this role will not be able to access the BIG-IP system.

Creating Local User Accounts
In order to perform this action, you will have to have at least the Administrator or User Manager role assigned to your user account. Keep in mind, if you only have the User Manager role assigned, you will only be able to create an account with access to the same partition as yourself. When creating a user account, you will also have to remember that the user accounts on the BIG-IP system are case sensitive, meaning that, for instance, PHILIP and philip are two separate accounts. You should also remember that some accounts are reserved and cannot be used. The admin account is an example of this and is therefore exempt from the case-sensitivity rule. For example, you cannot create the account Admin, ADMIN or adMin. To create a local user account, follow these instructions:

1. Log on to the WebGUI
2. Navigate to System > Users > User List. This will display all of the users that are contained in the current partition and the Common partition. Note that all users except those with a user role of No Access have at least read access to the partition Common.
3. In the upper-left corner, select the partition you would like the user to reside in. Keep in mind that the partition you select in this step is not the partition to which you want the user account to have access.
4. Click on Create. If it is unavailable, then you do not have sufficient rights to create a user.
5. Enter a Username for the account.
6. Enter a Password for the account. You will have to enter it twice, once in the New and once in the Confirm box.
7. In the Partition Access section, select the Role you wish to assign to the user along with the Partition the user should have access to. When finished, click Add.
8. Repeat step 7 until you have added all of the user roles and partitions the user account should have.
9. In the Terminal Access section, select whether the user account should have access to tmsh or Advanced Shell. Note, Advanced Shell is only available for accounts with the Administrator or Resource Administrator user role.
10. When done, click Finished.
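The same kind of account can be created from tmsh. The following is a hedged sketch only; the username, partition and role are examples:

(tmos)# create auth user exchange_admin password f5training partition-access add { Exchange { role manager } } shell tmsh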



Modifying the Properties of a Local User Account
Before performing this action, you will have to have at least the Administrator or User Manager role assigned to your user account. To modify a local user account, use the following instructions:

1. Log on to the WebGUI
2. Navigate to System > Users > User List. This will display all of the users that are contained in the current partition and the Common partition.
3. In the upper-left corner, select the partition the user resides in.
4. In the User List, find the user you would like to modify and click on it.
5. To modify the password, simply enter a new password in the New and Confirm boxes.
6. To add a new role or partition to the account, select a Role in the drop-down list and select a Partition and click on Add.
7. To modify a role or partition, select the Role/Partition in the list and click on Edit. Then from the Role or Partition list, select a new role or partition. When done, click Add.
8. To delete a role or partition, select the Role/Partition in the list and click on Delete.
9. If you want to modify the Terminal Access for the user account, click on the drop-down list and select a different option.
10. When done modifying the user account, click on Update.

You will only be able to modify those role-partition entries that you are authorised to manage based on your own user role and partition access.
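Individual properties can be changed from tmsh in the same way. A couple of small examples (the username is hypothetical):

(tmos)# modify auth user exchange_admin password f5training2
(tmos)# modify auth user exchange_admin shell none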



Shutting Down and Restarting the BIG-IP System
There are several ways to restart and shut down a BIG-IP system, all of which will cause either a failover (in an HA pair) or a disruption of traffic if the BIG-IP system is a standalone device. All of the methods are summarised in the following list:

▪ Using the WebGUI – When using the WebGUI, you will not be able to shut down the system, only restart it. In order to restart the system from the WebGUI, go to the navigation pane System > Configuration > Device and in the Properties and Operation section, click the Reboot button.

▪ LCD panel – If you have access to the LCD panel of the BIG-IP system, you will have the option to both restart and shut down the device. First, halt the system and wait 30 seconds. After that, you can either turn off or restart the device.

▪ AOM – If the BIG-IP system is equipped with AOM, you will be able to shut down or restart the BIG-IP system from the AOM menu.

▪ bigstart restart – This command is often referred to as a “soft reboot” as the device itself is not rebooted. The command will only restart all of the BIG-IP processes and you will need to utilise it from the Linux bash shell, which also means that the user issuing the command will need to have Advanced Shell terminal rights.

Using Advanced Shell (bash)
You can also halt and reboot the BIG-IP system from the Advanced Shell (bash) using the shutdown command. We have collected some useful examples in the following table:

Action                                                                                               Command
Immediately halt the BIG-IP system.                                                                  shutdown -h -t time now
Immediately reboot the BIG-IP system.                                                                shutdown -r -t time now
Halt the BIG-IP system after 10 minutes.                                                             shutdown -h -t time +m 10
Reboot the BIG-IP system after 10 minutes.                                                           shutdown -r -t time +m 10
Halt the BIG-IP system at a certain time (24hr format) where hh is the hour and mm the minute.      shutdown -h -t time hh:mm 10
Reboot the BIG-IP system at a certain time (24hr format) where hh is the hour and mm the minute.    shutdown -r -t time hh:mm 10
Immediately halt and power off the BIG-IP system.                                                    shutdown -P -t time now


Viewing the BIG-IP Connection Table in tmsh

About the Connection Table
The BIG-IP system manages each connection explicitly with the use of the connection table. The connection table contains state information about active client-side and server-side connections along with the relationships between them. It is important for the BIG-IP system to keep track of each connection in the connection table because each connection consumes system resources such as memory and CPU. Therefore, the BIG-IP system uses several different metrics to determine when a connection is no longer active and should be reaped from the connection table in order to prevent the system resources from being exhausted, causing a failure.

Connection Reaping
Connections that are ended in a normal way, using for instance a reset or close, will be removed from the connection table automatically. However, there are many connections that remain idle and are never ended using the normal means. The reasons for this can be many and one of them would be that the client has experienced an issue and is not responding to the SYN packets being sent to it. In order to prevent the connection table being flooded with inactive connections, consuming all of the system resources, the BIG-IP system reaps these connections. Reaping means that the BIG-IP system will retire or recycle connections that would otherwise remain idle and inactive.

Viewing the Connection Table
An important tool that can be used during a troubleshooting session is viewing the connection table for a specific virtual server, pool, pool member or even the client itself. Using this, you can gain a quick look at the activity for that particular object and use the information in your troubleshooting. In order to view the connection table using tmsh, issue the following command:

tmsh show sys connection

It is also possible to add different parameters to filter the results of this command. For instance, you can match on specific client-side or server-side IP addresses or ports. In the following diagram, you can find all of the different filters available for you to use.



For example, if you wanted to display the active connections for the virtual server with the IP address 10.10.20.100 you would issue the following command:

tmsh show /sys connection cs-server-addr 10.10.20.100

cs-client-addr:port    cs-server-addr:port    ss-client-addr:port    ss-server-addr:port
10.10.20.30:54123      10.10.20.100:80        172.16.20.33:54123     172.16.20.100:80

For instance, by simply using this data, I can determine that SNAT Automap is enabled on the virtual server because the ss-client-addr:port contains the floating self-IP address of the BIG-IP system instead of the client’s IP address.

When you issue the command tmsh show sys connection globally on the device, it will put a great load on the BIG-IP system, which can cause a reboot if the command is interrupted before it is allowed to finish. Therefore, it is important to use the filters available in order to limit the results of the command. If you do need to display the entire table, wait for the command to finish collecting all of the data.



Filtering Using awk and grep
If you are familiar with Linux and the tools awk and grep, it is also possible to use these to filter the results of the tmsh show sys connection command. It is very important to know that awk and grep only filter the displayed results; they do not limit what is actually retrieved by the command. Therefore, it is still important to use the filters within the tmsh show sys connection command itself.

▪ Display the top 50 client IP addresses in the connection table:

config # tmsh show /sys connection | awk -F: '{print $1}' | sort | uniq -c | sort -rn | head -50

▪ Display the top 50 client IP addresses in the connection table to a specific virtual server:

config # tmsh show /sys connection cs-server-addr 10.10.15.200 | grep ^[0-9] | awk -F: '{print $1}' | sort | uniq -c | sort -rn | head -50

When using grep and/or awk you will need to issue the commands from the Linux bash shell, because once you enter tmsh you no longer have access to grep or awk; they are part of the Linux bash shell.
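Along the same lines, a quick way to count how many entries a particular virtual server currently holds in the connection table (the address below is a placeholder) is to pipe the filtered output through wc:

config # tmsh show /sys connection cs-server-addr 10.10.15.200 | grep ^[0-9] | wc -l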

Additional Help
tmsh on DevCentral
In the introduction of this chapter, we discussed F5 DevCentral. DevCentral is a great source of information regarding all the things that are a part of the F5 universe. Regarding tmsh, the site has an entire Hot Topics section that is devoted to tmsh. It includes the following topics:

▪ tmsh Wiki – This page can teach you new ways to create commands and automate tasks using the CLI and tmsh.

▪ tmsh CodeShare – Here you can find sample tmsh scripts that can gather data or perform operational tasks.

▪ Import CodeShare scripts directly to the tmsh script editor – You can directly import code from CodeShare or any other source into the tmsh script editor.



Lab Exercises: tmsh
Exercise 10.1 – Configuring the BIG-IP Using tmsh
Exercise Summary
In this exercise, we'll create pools and virtual servers using tmsh and observe how the BIG-IP system behaves. In this lab, we'll perform the following:

▪ Create pools and virtual servers.
▪ View the configuration files.
▪ Save the configuration.
▪ Create a UCS archive.
▪ Restore from a previous UCS archive.
▪ Observe the behaviour.

Exercise Prerequisites
Before you start this lab exercise, make sure you have the following:

▪ Network access to the BIG-IP system's management port.
▪ One or more servers configured on the internal network to which we can load balance traffic. This should already be configured during the Building a Test Lab chapter. The server should be running multiple services including SSH.
▪ Command-line access to the BIG-IP system.

Whenever you see the notation [Key] this refers to the actual keyboard key and should not be typed as a command. For instance, the [Tab] notation refers to the Tab key on the keyboard and the [Enter] notation refers to the Enter key on the keyboard.

Configuring a Pool Using tmsh
1. Launch a terminal client such as PuTTY and SSH to 192.168.1.245 on port 22.
2. Log on using the account root and the password f5training.
3. When logged on you should be in the bash shell, indicated by the config# prompt.
4. Enter the Traffic Management Shell by entering the command: tmsh
5. Navigate to the ltm module using auto-completion by typing: lt[Tab][Enter]
   The command prompt should now look something like this:

root@(bigip01)(cfg-sync Standalone)(Active)(/Common)(tmos.ltm)#

6. Next, you will create a pool consisting of the following configuration:



Object Name: ssh_pool
Load Balancing Mode: Round Robin
Node IPs: 172.16.100.1, 172.16.100.2, 172.16.100.3
Port: 22

7. Type cr[Tab] in order to write the word create using auto-completion.
8. Then type p[Tab] which should display numerous possibilities for modules and components that start with the letter "p". Some examples are persistence, pools or profiles.
9. Type oo[Tab] and the word pool should auto-complete.
10. Type the ? character in order to display a list of different settings for completing the command.
11. Enter the name of the pool by typing ssh_pool followed by a space.
12. Type {lo[Tab] and the word {load-balancing-mode should auto-complete.
13. Type ro[Tab] and the word round-robin should auto-complete.
14. Use auto-complete until you have entered the command:

create pool ssh_pool { load-balancing-mode round-robin members add { 172.16.100.1:22 172.16.100.2:22 172.16.100.3:22 } }

15. Verify that you have successfully created the pool using the command:

list pool ssh_pool

Saving the Running Configuration to the Stored Configuration
Right now, the changes you have made have only been saved to the running configuration, meaning that if you shut down the BIG-IP system in any way, you will lose this configuration. In order to save it permanently, you will have to save it to the stored configuration files.

1. Save the changes to the stored configuration by using the following command:

save /sys config

Review the Changes in the Running Configuration
1. View all the pools configured on the BIG-IP system by using the following command: list pool
2. Are all pools being presented? Can you locate the ssh_pool?

Review the Changes in the Stored Configuration
1. Exit tmsh by using the command: quit
2. View the contents of the bigip.conf file by using the following command: more /config/bigip.conf
3. When you have entered the command, you can use the space bar to page down or the Enter key to scroll down through the text file.
4. Can you locate the ssh_pool? Why or why not?
5. Exit out from the more display by pressing the q key.



Create a Virtual Server Using tmsh
1. Enter tmsh once again by entering the command: tmsh
2. Create a virtual server consisting of the following configuration:

Object Name: vs_ssh
IP Address: 10.10.1.100
Port: 22
Profile: tcp
Resources: ssh_pool

3. Navigate to the ltm module by using the following command:

ltm

4. Now use the following command. Remember that you can auto-complete most of the configuration options:

create virtual vs_ssh destination 10.10.1.100:22 profiles add { tcp } pool ssh_pool

5. List the virtual server's properties on the BIG-IP system using the following command:

list /ltm virtual vs_ssh all-properties

6. Navigate back to the bash shell by using the following command:

quit

7. Review the contents of the bigip.conf once again. Is vs_ssh listed? Why or why not?
   Hint: Use the more /config/bigip.conf command

8. Save the changes directly from bash by using the following command:

tmsh save /sys config

9. Review the contents of the bigip.conf again. Have the changes been saved now?

Verify Configuration Changes
1. Launch a terminal client such as PuTTY and SSH to 10.10.1.100 on port 22. Are you able to connect?
2. Log on using the account student and the password student.
3. Head back to the SSH session on the BIG-IP system and enter tmsh by using the following command: tmsh
4. Which pool member were you load balanced to? You can view this from tmsh by using the command:

show /ltm node 172.16.100.*

5. The information is presented in the Current Connections field. To make it easier you can also reset the statistics by issuing the command:

reset-stats /ltm node 172.16.100.*

Viewing the Connection Table
1. You can also view the connection table on the BIG-IP system. Go back to the SSH connection to the BIG-IP system and use the following command:

show /sys connection cs-server-addr 10.10.1.100 cs-server-port 22

2. Open up a new terminal client such as PuTTY and SSH to 10.10.1.100 on port 22.
3. View the connection table once again. Has the number of connections increased?

Viewing the bigip_base.conf File
1. In the previous exercises, you have reviewed the contents of the bigip.conf file. Now we'll take a look at the bigip_base.conf file.
2. Enter the bash shell by entering the command: bash
3. View the content of the bigip_base.conf file by entering the command:

more /config/bigip_base.conf

4. What configuration objects does the bigip_base.conf contain?

Viewing the tmsh Command History
1. Enter the Traffic Management Shell by entering the command: tmsh
2. View the tmsh command history by using the command:

show /cli history

or by just typing !

3. Locate the command list /ltm virtual vs_ssh all-properties and note its command number. Re-run the command by typing ! followed by the command number. For example: !25

Creating a UCS and SCF Backup of Your Current Configuration Using tmsh
1. If you are not presently in tmsh, please enter it once again.
2. Create a UCS archive by using the command:

save /sys ucs tmsh_labs.ucs

3. Where was the UCS archive saved by default?

4. Make an SCF archive by using the command:

save /sys config file tmsh_labs.scf

5. Where was the SCF archive saved by default?

Restore Configuration From a Previous UCS Archive
1. During the first lab exercise, we finished the lab by creating a base UCS archive in order to have a clean configuration of the BIG-IP system. Restore the BIG-IP system to this UCS archive by using the command:

load /sys ucs baseline.ucs

2. Go back to the bash shell and review the content of the bigip.conf file. The pools, virtual servers and other settings should now be gone. You can also run the command:

tmsh list /ltm virtual

3. Enter tmsh once again.
4. Restore the configuration back to its previous state by loading the UCS archive we created in the previous exercise. Restore the UCS archive by using the following command:

load /sys ucs tmsh_labs.ucs

Viewing the Contents of the SCF File
1. Enter the bash shell.
2. Previously, we created an SCF backup. Review its content by using the following command:

more /var/local/scf/tmsh_labs.scf

Chapter Summary

▪ The BIG-IP system can also be configured using the command line interface (CLI) via what is known as the Traffic Management Shell, or tmsh. tmsh is a shell used for administrating the device and performing specific BIG-IP operations. You may also view statistics and performance data of the device.

▪ There are two types of CLI interfaces, Linux bash and tmsh. The Terminal Access method that provides access to both is called Advanced Shell.

▪ On the BIG-IP device, the configuration is actually stored in three different states: the text configuration, the running configuration and the binary configuration.

▪ The text configuration contains all of the changes that have been saved by the BIG-IP system. These files are the bigip.conf and the bigip_base.conf.

▪ The running configuration contains all of the configuration that is stored in the text configuration plus the changes that have been made since the last save. This configuration lives in the memory of the BIG-IP system, which means that if the system is shut down while there are unsaved configuration changes, you will lose all of the unsaved settings.

▪ Starting with BIG-IP 9.4.0, when the BIG-IP system first boots up, the mcpd process builds a binary configuration which creates the files /var/db/mcpd.bin and /var/db/mcpd.info.

▪ In version 11.x, Administrative Partitions were introduced. These enable you to segment your configuration into different application groups, where each administrative group responsible for an application is given its respective rights. Each administrative group will only work in its own administrative partitions, thus preventing it from affecting other applications.

Chapter Review

1. You are the BIG-IP administrator and are currently creating new management accounts. You need to provide both tmsh access and Linux bash access to the new accounts. Which of the following Terminal Access settings do you need to select?

a. Advanced tmsh
b. tmsh
c. CLI
d. Advanced Shell

2. After logging into the BIG-IP CLI using a terminal client you immediately receive the following prompt: user2@ (bigip01) (cfg-sync In Sync) (Active) (/Common)(tmos)# Which terminal are you presently logged on to?

a. tmos
b. tmsh
c. bash
d. Advanced Shell

3. You are logged into the BIG-IP tmsh prompt. You would like to view the configuration of the virtual server vs_http. Which of the following commands should you use?

a. list ltm virtual vs_http
b. tmsh list ltm virtual vs_http
c. show ltm virtual vs_http
d. tmsh show ltm virtual vs_http

4. You are the BIG-IP administrator and a couple of weeks ago you made changes to the BIG-IP device using tmsh. Another BIG-IP administrator had a service window scheduled for last night which resulted in a reboot of the BIG-IP system. This morning when you get into the office, users are complaining that some of the websites are not working anymore. It turns out that the affected sites were part of the change you made using tmsh and the changes you performed are no longer present. What could have caused this problem?

a. When the configuration was saved, the text configuration file was corrupted and during the reboot the BIG-IP cleaned up the corrupted data, deleting the changes that you made.
b. The BIG-IP device loaded an old UCS archive when it booted up.
c. You forgot to run the command tmsh save /sys config.
d. You added the configuration changes using bash and not tmsh.



Chapter Review: Answers

1. You are the BIG-IP administrator and are currently creating new management accounts. You need to provide both tmsh access and Linux bash access to the new accounts. Which of the following Terminal Access settings do you need to select?

a. Advanced tmsh
b. tmsh
c. CLI
d. Advanced Shell

The correct answer is: d
Advanced Shell will provide unrestricted access to the terminal. This will provide access to both the Linux bash shell and the Traffic Management Shell (tmsh).

2. After logging into the BIG-IP CLI using a terminal client you immediately receive the following prompt: user2@ (bigip01) (cfg-sync In Sync) (Active) (/Common)(tmos)# Which terminal are you presently logged on to?

a. tmos
b. tmsh
c. bash
d. Advanced Shell

The correct answer is: b
When logged into tmsh the terminal prompt will look like the following: user2@ (bigip01) (cfg-sync In Sync) (Active) (/Common)(tmos)# The most obvious indicator is the text: (tmos)

3. You are logged into the BIG-IP tmsh prompt. You would like to view the configuration of the virtual server vs_http. Which of the following commands should you use?

a. list ltm virtual vs_http
b. tmsh list ltm virtual vs_http
c. show ltm virtual vs_http
d. tmsh show ltm virtual vs_http

The correct answer is: a
Note that since we are already in the tmsh prompt, there is no need to use the command tmsh first. This is only used when you are utilising tmsh commands from bash.

4. You are the BIG-IP administrator and a couple of weeks ago you made changes to the BIG-IP device using tmsh. Another BIG-IP administrator had a service window scheduled for last night which resulted in a reboot of the BIG-IP system. This morning when you get into the office, users are complaining that some of the websites are not working anymore. It turns out that the affected sites were part of the change you made using tmsh and the changes you performed are no longer present. What could have caused this problem?

a. When the configuration was saved, the text configuration file was corrupted and during the reboot the BIG-IP cleaned up the corrupted data, deleting the changes that you made.
b. The BIG-IP device loaded an old UCS archive when it booted up.
c. You forgot to run the command tmsh save /sys config.
d. You added the configuration changes using bash and not tmsh.

The correct answer is: c
When using tmsh, the changes you have made to the system are only applied to the running configuration. In order to save the changes to the text configuration and binary configuration you have to issue the command tmsh save /sys config.



14. File Transfer
A common task when administering one or more BIG-IPs is transferring files to and from them. This isn't something that's covered by the exam, but it's certainly something you need to know how to do. In many cases, the WebGUI can be used to transfer files, for instance, with UCS archives and SSL certificates and keys. However, this isn't always the case, two examples being SCF files and the EUD log file. Often, in the name of speed, simplicity, or automation, you may prefer to use command-line tools or dedicated Windows client software. The BIG-IP Linux management subsystem supports two secure file transfer protocols, SCP and SFTP. In case you are wondering what the differences are between the two:

▪ SCP: File transfer capabilities only, non-interruptible, potentially faster on high latency networks, non-interactive, good for scripting, an SSH v1 protocol, some security flaws

▪ SFTP: Interrupt and resume file transfers, list directories, remote file removal, interactive, an SSH v2 protocol

In either case, SSH transport level encryption is used and all communications are binary. The following sections describe how to send and receive files to a BIG-IP, using both Linux and Windows clients.

Linux Client - Sending Files - SCP
If you wish to transfer a file to a BIG-IP from another Linux host, things are pretty simple, you would simply use the scp command like so, replacing the parameters as necessary:

$ scp path/file_name user_name@10.11.12.99:path/

You can also use a hostname instead:

$ scp path/file_name user_name@server:path/

You can specify multiple files too:

$ scp file_name1 file_name2 user_name@10.11.12.99:path/

Or whole directories:

$ scp -r path/directory_name user_name@10.11.12.99:path/

Here are some more examples:

$ cd /etc/sysconfig
$ scp arptables backup@10.11.12.99:/home/backup/arptables/2015-01-12.bak
$ scp /home/backup/arptables/2015-01-12.bak backup@10.11.12.88:/var/tmp



For more details on other arguments, parameters and interactive use take a look at the manual:

$ man scp

Linux Client - Retrieving Files - SCP
Use the scp command like so, replacing the parameters as necessary:

$ scp user_name@10.11.12.99:path/file_name local_path/

You can also use a hostname instead:

$ scp user_name@server:path/file_name local_path/

You can specify multiple files too (the . at the end means the remote files will be copied to the current working directory):

$ scp user_name@10.11.12.99:path/\{file_name1,file_name2\} .

Or whole directories:

$ scp -r user_name@10.11.12.99:path/directory_name local_path/

Here are some more examples:

$ cd /var/tmp
$ scp backup@10.11.12.99:/home/backup/arptables/2015-01-12.bak .
$ scp backup@10.11.12.88:/var/tmp/2015-01-12.bak /etc/sysconfig/arptables

Common SCP Errors
If the remote file specification is listed along with a No such file or directory error message before a connection has even been made, you've probably forgotten to use the colon : after the host name or IP address.

Linux Client - Connecting - SFTP If you wish to transfer a file to a remote BIG-IP from another Linux host, things are only slightly more involved than when using scp. Simply use the sftp command like so, replacing the parameters as necessary, to connect to the remote host:

$ sftp user_name@10.11.12.99



You can also use a hostname instead:

$ sftp user_name@server

If you need to specify a non-standard port to connect to:

$ sftp -P 2222 user_name@server

Once connected to the remote host and its SFTP shell, you can use the usual commands to navigate the remote filesystem: cd, pwd and ls [-l]. Note that these are not the full commands you'll find at the shell, they are simpler equivalents with the same name, to provide familiarity and ease of use. You navigate your local file-system by prefixing each command with the letter l, as follows: lcd, lpwd and lls. For more details on other arguments, parameters and in particular, file modification tools, take a look at the manual:

$ man sftp

SSH v2 support is required on your host.

Linux Client - Sending Files - SFTP
Once you've connected to a host and have navigated to the appropriate local and remote directories as required, you can send files to it like so:

$ put path/local_file_name

If you'd like the file to have a different name on the remote host:

$ put path/local_file_name path/remote_file_name

To send a whole directory:

$ put -r path/local_directory_name

Linux Client - Retrieving Files - SFTP Once you’re connected to a host and have navigated to the appropriate local and remote directories if required, you can retrieve files from it like so:

$ get path/remote_file_name



If you'd like the file to have a different name on the local host:

$ get path/remote_file_name path/local_file_name

To retrieve a whole directory:

$ get -r path/remote_directory_name

Key Based Authentication
If using a Linux client, rather than manually authenticating with a password when you SSH or copy files to other hosts, you can instead use key based authentication. This has two important advantages:

▪ Keys are generally considered harder to 'crack' than passwords, in other words, more secure (you should, of course, satisfy yourself of this through other sources if you intend to rely on this 'fact')

▪ You are no longer required to interactively enter a password (if you are using scripts that perform SSH or SCP functions, passwords are no longer required in them)

On the negative side of the coin, you gain the responsibility of securing your private key and distributing your public one. In order to use this authentication method, you need a public and private key pair. The private key is just that and should be secured appropriately and remain unknown to anyone but you. The public key can be freely distributed to others as necessary. The ssh-keygen command is used to generate these keys. However, before we dive into that, a few warnings and notes:

▪ Keys are generally unique to the host and user account on that host. You will therefore need to repeat this procedure as necessary should you wish to use it with multiple hosts and/or multiple user accounts

▪ Check that a key pair doesn't already exist, particularly if using a shared or so-called service account; the ll ~/.ssh/id_rsa* command will do the trick. There's a chance you or someone else already went through this procedure and you may break things by regenerating the key pair

▪ If you believe your private key has been compromised, generate a new key pair and update remote hosts as necessary (if you can remember them all)



So, now that is all out of the way, let's go through the necessary steps. First, let's generate a key pair (the default key size is 2048 but I always like to be specific, just in case things change):

$ ssh-keygen -v -t rsa -b 2048

Simply press [Return] when prompted for the passphrase:

Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in test.
Your public key has been saved in test.pub.
The key fingerprint is:
SHA256:lZkJ6572wIvJflBlcHonO9LRllIeZNFhpJI8yySwbUI sjiveson@dadlaptop
The key's randomart image is:
[RSA 2048 randomart image displayed here]


You should now have the following private and public key files in your ~/.ssh directory (which would have been created if it didn't already exist):

~/.ssh/id_rsa       # This is your private key
~/.ssh/id_rsa.pub   # This is your public key

Now, you simply need to copy the contents of the ~/.ssh/id_rsa.pub public key file to the ~/.ssh/authorized_keys file in the relevant user directory on each BIG-IP you wish to use with this authentication method. You have two ways to do so, the first being the easiest:

$ ssh-copy-id username@hostname

You'll be prompted to enter your password - one last time! Alternatively, you can manually paste the contents (on a new line) into the file on the remote host, whilst it is open for editing, in vi for example. Of course, this being Linux, there are actually many other ways to update the remote hosts but if you don't know what they are, it's best you stick to one of the two methods detailed.
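Once the public key is in place, subsequent SSH and SCP operations against that BIG-IP should no longer prompt for a password. For example (the address and file names are placeholders):

$ ssh root@10.11.12.99 'tmsh show sys version'
$ scp /var/tmp/config_backup.ucs root@10.11.12.99:/var/tmp/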

Windows Clients
As with SSH, there are many, many Windows SCP and SFTP clients available but I'm only going to discuss two:

▪ Simon Tatham's command line tool pscp is free, simple and reliable, what more could you want? Drop it into your …\system32 folder, fire up a command prompt and you are good to go. Here are some usage examples:

Retrieve a remote file (/etc/hosts) and rename it:

$ pscp backup@10.11.12.99:/etc/hosts c:\temp\example-hosts.txt

Do the same but use SFTP rather than the default SCP:

$ pscp -sftp backup@10.11.12.99:/etc/"host file" c:\temp\"test space.txt"

Transfer a local file to a remote host (directory /home/backup/testfiles):

$ pscp c:\documents\"test space.doc" backup@10.11.12.99:/home/backup/testfiles

▪ For something GUI-based on Windows, try WinSCP: http://winscp.net/eng/index.php



15. Selected Topics
Always On Management (AOM)
Only available with physical devices, Always-On Management (AOM) is yet another embedded subsystem, in addition to the BIG-IP Host Management Subsystem (HMS). Its simple purpose is to provide lights-out management and other basic supporting functions for the BIG-IP system. AOM is accessible remotely via the HMS (using SSH) or through the serial console. It is also possible to access the AOM directly using SSH, but in order to do so you will first have to assign it its own dedicated IP address and netmask. We'll explore all these methods shortly. It's important to note that the AOM is separate from the HMS, meaning that if the AOM is reset or fails, the HMS (and TMOS and LTM) will still operate and function without any interruption. Equally, if the HMS subsystem fails it can be reset using the AOM. Even if the HMS (and thus the device) is completely turned off, with only a power cable plugged in, you can still access the AOM and start up the device.

Accessing AOM Through the Serial Console
Connect a suitable cable to the port marked CONSOLE and then:

When using a physical serial console port, set the baud rate in your terminal application to 19200 as this is the default. By default, the admin user account cannot log in to the CLI.

1. Launch a terminal client such as PuTTY and connect using the Serial Console Port. If you have several COM ports on your computer, select the COM port that is connected to the BIG-IP system.
2. Log in using the default user account root and the password default.
3. Enter AOM by pressing the following key sequence: [Esc]+(. On a British and American keyboard, this refers to: [ESC]+[Shift]+9.
4. Occasionally, you are presented with the following message: Press Enter to deactivate another concurrent session. This means that there is a current AOM session active. Simply press [Enter] and that session will be terminated.
5. When you have entered AOM, you will be presented with a list of available commands.

Accessing AOM Through the HMS Via SSH
In order to access AOM through the HMS please use the following instructions:

1. Launch a terminal client such as PuTTY and SSH to the management IP address of the BIG-IP system on port 22.
2. Log in using the default user account root and the password default.
3. Enter AOM by typing the following command: ssh aom
4. Enter the user name and password. AOM uses the same credentials as the HMS, which is by default root/default.
5. Occasionally, you are presented with the following message: Press Enter to deactivate another concurrent session. This means that there is a current AOM session active. Simply press [Enter] and that session will be terminated.
6. In order to display the AOM Command Menu, press the following key sequence: [Esc]+(. On a British and American keyboard, this refers to: [ESC]+[Shift]+9.
7. When you have entered AOM, you will be presented with a list of available commands.

Directly Connecting to the AOM Via SSH
In order to directly connect to the AOM, you will have to configure a dedicated IP address and a netmask (and optionally a default gateway). This is the IP address to which we then connect in order to gain access to the AOM without going through the HMS. To configure the AOM with an IP address:

1. Access the AOM using one of the other methods we've already covered.
2. In the Command Menu, choose option: N --- Configure AOM network
3. The AOM will prompt for input to these questions:
   a. Use DHCP (Y/N)?: Obviously, enter n
   b. IP address (required): Enter the IP address you would like to use
   c. Netmask (required): Enter the netmask for the IP address
   d. Gateway (optional): If you want, enter the default gateway
4. Once the settings have been saved, the AOM will return to the AOM Command Menu.
5. In order to exit the AOM Command Menu, either press option Q --- Quit menu and return to console or use the following SSH control sequence: ENTER > TILDE (~) > Period (.)
6. You should now be able to access the AOM directly by connecting to the AOM IP address using SSH.

Some BIG-IP appliances can only set the AOM address through the console port.



The Command Menu
When accessed, the AOM presents a Command Menu with a list of options. The options presented depend on what platform you are using. Some of the functions include setting the console baud rate, powering the HMS on or off, resetting the HMS and resetting the AOM. Below you will find a complete list for the 2000, 4000, 5000, 7000 and 10000 series platforms:

AOM Command Menu:
B --- Set console baud rate
I --- Display platform information
P --- Power on/off host subsystem
R --- Reset host subsystem
N --- Configure AOM network
S --- Configure SSH Server
A --- Reset AOM
E --- Error report
Q --- Quit menu and return to console

iRules
We covered iRules in the previous book, the Application Delivery Fundamentals Study Guide. iRules are scripts based on the Tool Command Language (TCL) that offer the possibility of both examining and altering traffic passing between the client and the BIG-IP system (client-side) and between the BIG-IP system and the end-server (server-side). Since iRules are based on TCL, you can use many of the standard commands along with some additional BIG-IP specific extensions that help you manage traffic more efficiently.

One misconception regarding iRules is that whenever an iRule runs, the interpreter must be initiated every time. This is not true at all. Whenever you save your configuration, all of the iRules on your BIG-IP system are pre-compiled into what is known as byte code. Byte code is mostly compiled and has the majority of the interpreter tasks already performed. This increases performance significantly.

The functions that iRules provide are pretty much limitless. One common usage of an iRule is redirecting from HTTP to HTTPS, or selecting a specific pool based on data provided by the client. Using an HTTP virtual server, you can, for instance, send a client to a specific pool based on the host header. This gives you the chance to host multiple web servers on one single IP address (as long as the virtual server configuration can be identical for all web servers) and the iRule will be the one determining where to send each request. iRules can also be used for persistence, which we covered in the persistence chapter.

iRules enable you to fully control what, when and how to change the application traffic. Application owners get a programming tool with pretty much endless features to support their application. iRules can be long and complex, but even short ones can be really powerful. An example of a simple iRule is the HTTP to HTTPS redirection, which is written and certified by F5 themselves.

when HTTP_REQUEST {
    HTTP::redirect https://[getfield [HTTP::host] ":" 1][HTTP::uri]
}



This iRule is triggered when the client sends an HTTP request. When the client sends its HTTP request, the BIG-IP system sends back a redirect to the client that changes http:// to https://. HTTP::host and HTTP::uri refer to values contained in the client's request and are used as variables in the iRule. So, these values describe the original request, but instead of going to http:// the client is redirected to https://.

When Should You Use an iRule? Whenever you need to add functionality to your application deployment and it cannot be solved using the built-in configuration options that the BIG-IP system offers, that is when you should use an iRule. iRules can add valuable logic to your application, whether it is URI redirections, adding HTTP headers or logging certain information, and the great thing about it is that it is centrally managed, as opposed to if you would terminate the application on several different end-servers. In those scenarios, you would have to re-configure each end-server that will receive the traffic. iRules can even be used to solve the application teams’ issues. One thing I experienced myself was that the webserver would return a faulty HTTP referrer in its HTTP response packet. As the administrator of the BIG-IP system, I wrote an iRule that changed the HTTP referrer from its faulty value to the correct value and we made the application work again. We had this iRule enabled until the application team solved the issue on the end-server. Another good example is security. There are some pretty simple yet effective iRules to help mitigate DDoS attacks, protect against phishing attacks and perform information scrubbing (obfuscate credit card details). There are plenty of already written security iRules on devcentral.f5.com. Most of the ones mentioned are now usually solved by the existing modules that F5 offers. However, some of these modules originate from the idea of an iRule.

When Should You Not Use an iRule?
The rule of thumb when working with iRules is that if the functionality exists in the built-in configuration, then that is the most efficient way of adding it. Even though iRules are really fast and efficient (if written correctly), when a function is built into the core, the performance will be even better. It is also best to use built-in functions for stability. Whenever you upgrade a BIG-IP system, you do not have to verify that a built-in function still works, which might be the case with iRules. Whenever you have performed an upgrade, you should verify that your iRules are still functioning as intended.

iRule Components
iRules are built upon the following structure:

1. When a specific event occurs
2. If the condition is true
3. Then perform the specified action

In other words, they are built upon the following components: events, conditional statements and actions. An event is a specific processing activity that acts as a trigger, causing the iRule to start processing the traffic. The conditional statements use relational operators to process the data, returning either true or false. The actions define what the BIG-IP system will do depending on the result of the conditional statement. In the following example you can see how the components look in an iRule:

Rule [rule name] {
    When [Event] {
        If { [Conditional Statement] } {
            [action when condition is true]
        }
    }
}

Event Declarations
iRules are event driven and this is very important to remember. In order for the iRule to start processing the data, it needs to match an event. The events can be on either the client-side or the server-side and they can be part of any layer of the OSI reference model. Some examples of events are CLIENT_ACCEPTED, which works on OSI layer 4 and matches whenever a client has established a connection, and HTTP_REQUEST, which works on layer 7 and matches whenever a client sends an HTTP request. The reason it is important to remember this is that when writing an iRule, the data which you would like to process has to be available to the BIG-IP system. For example, when using the event CLIENT_ACCEPTED you cannot use any HTTP specific conditional statements, as the BIG-IP system has only processed traffic up to layer 4. Therefore, you should use HTTP_REQUEST or HTTP_RESPONSE instead, because at that moment the BIG-IP system has processed the traffic all the way up to layer 7. It is also important to remember that you need to configure your virtual server to process the traffic up to the layer you are using in the iRule. If you are processing HTTP traffic in your iRule, you also need to add an HTTP profile to the virtual server to which the iRule is applied.

Operators
An iRule uses conditional statements that compare the data the virtual server has processed and verify whether the data matches the statement. It also uses operators in order to match the data or multiple values. For instance, if you would like to send a client to a specific pool based on the HTTP Host header, you can write the following statement:

if { [HTTP::host] equals www.test.com } { pool /Common/http_pool }

In the following list you can find all of the available logical operators and relational operators:

Logical Operators
▪ not
▪ and
▪ or

Relational Operators
▪ contains
▪ matches
▪ equals
▪ starts_with
▪ ends_with
▪ matches_regex
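As a small illustration of combining these operators (the URI path, client address and the choice of the reject command are arbitrary for this example), an iRule could block an admin path for everyone except a single client:

when HTTP_REQUEST {
    if { ([HTTP::uri] starts_with "/admin") and not ([IP::client_addr] equals "10.10.1.50") } {
        reject
    }
}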

Rule Commands
The rule commands of an iRule cause the BIG-IP system to perform a specific action. The different commands are described in the following table:

Query Commands – The query commands search for content and display it. An example of this would be IP::remote_addr, which searches for the remote IP address of a connection and returns its value. The IP address that it returns depends on which side of the connection we are retrieving data from.

Action/Modification Commands – These commands will alter the traffic passing through the BIG-IP system. An example of this would be adding an HTTP header into HTTP requests.

Statement Commands – The statement commands state where traffic should be sent, whether it's pools or URLs for HTTP redirections (like the HTTP to HTTPS iRule). When directing traffic to a pool you can use the statement pool [poolname].

Universal Inspection Engine (UIE) Commands – The UIE commands perform deep packet inspection, and one of their primary uses is to return a string that can be used for persistence.
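To make the table more concrete, here is a small sketch that uses one command from each of the first three categories: IP::remote_addr (query), HTTP::header insert (action/modification) and pool (statement). The header name and pool name are made up for the example:

when HTTP_REQUEST {
    if { [IP::remote_addr] starts_with "10." } {
        HTTP::header insert X-Internal-Client "true"
        pool internal_pool
    }
}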

iRule Events
Whenever a client is communicating with a BIG-IP system, multiple events occur. During these events specific data is revealed and the BIG-IP system can use and process this data. iRules use these events in order to trigger and start processing the traffic. Therefore, it is very important that you know exactly which event you should use when writing your iRule. In the following list we have compiled the request events that occur during an HTTP transaction. In other words, these are the events that occur as the client is trying to establish a connection to the end-server. iRules are also aware of connection state, meaning that if you have an iRule that matches on the HTTP_REQUEST event it will automatically match the corresponding HTTP_RESPONSE event.



CLIENT_ACCEPTED
This event will trigger whenever the BIG-IP system receives a new entry in its connection table. When this happens depends entirely on the protocol. If it is TCP, it will trigger whenever the TCP three-way handshake is complete. For UDP, the BIG-IP system creates a table entry for the first initial request and assigns it a timeout value. If no new segments arrive within the timeout value, the entry will be removed and the event CLIENT_CLOSED will trigger. Segments that arrive within the timeout will not issue a new CLIENT_ACCEPTED event; instead they just renew the timeout value. The timeout value is configured under the UDP profile. During CLIENT_ACCEPTED, the available commands are IP-, TCP- and UDP-related, as the data has not yet reached layer 7 of the OSI reference model. There are also a few statement commands available that can be used to direct the client to a node or a pool, and a few action commands such as SNAT.
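For instance, a minimal iRule that only relies on the layer 4 information available during this event could simply log every new connection (logging to local0 and the message format are just example choices, and TCP::client_port assumes a TCP virtual server):

when CLIENT_ACCEPTED {
    log local0. "New connection from [IP::client_addr]:[TCP::client_port]"
}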

CLIENT_DATA
When receiving TCP traffic, this event will trigger whenever data is received from the client after the TCP::collect command has been issued. During UDP traffic it will trigger each time the BIG-IP receives a segment.

HTTP_REQUEST
This event will trigger whenever the BIG-IP system has fully parsed a complete client HTTP request. The BIG-IP system processes the method (GET, POST), URI, version and all of the headers in the request. However, it will not process the HTTP request body. Since HTTP operates on the application layer (layer 7) of the OSI reference model, the actual data that the client is requesting is available to the BIG-IP system and can be processed and manipulated. However, this also means that the virtual server needs to be able to understand HTTP, thus requiring the virtual server to be configured with an HTTP profile.

HTTP_REQUEST_DATA
This event will only trigger when the command HTTP::collect has been used. In the HTTP::collect command you specify how much data should be collected, and once that much data has been collected, the event triggers. It will also trigger if a client terminates its connection before HTTP::collect has finished fetching all of the data. This event also requires an HTTP profile.

LB_SELECTED The LB_SELECTED event is triggered when the BIG-IP system has chosen a pool member.

SERVER_CONNECTED This event will trigger when the BIG-IP system has established a connection with the target node.

CLIENT_CLOSED This is the last event, regardless of when the connection is terminated.

HTTP Events iRules revolving around HTTP events are the most common types out there. There are multiple ways the iRule can interpret the communication and alter the data depending on where in the HTTP exchange you are currently at. For instance, in the following iRule, we use the User-Agent HTTP header in order to redirect mobile users to a different and more mobile friendly web page. In the following example, I have limited the different User-Agents to Android and iPhone, but the list can be easily expanded, and you can also create a Data Group to which you refer in the iRule.



We’ll discuss Data Group Lists in the following section.

The BIG-IP system will look into the User-Agent HTTP header for the words iPhone or Android. If either matches, the iRule directs the traffic to a different pool and the return command then exits the currently executing event in the currently running iRule. If neither matches, the request is handled by the virtual server's normal configuration.

when HTTP_REQUEST {
    switch -glob [string tolower [HTTP::header User-Agent]] {
        "*android*" -
        "*iphone*" {
            pool /Common/mobile_site_pool
            return
        }
    }
}

Data Group Lists
Data Group Lists are objects you can create on the BIG-IP system that act as lists which can be used within an iRule. You can create a list of just keys, or a list that contains keys matched with values. Data Groups are a particular type of memory structure within iRules, and the unique thing about Data Group Lists is that they are stored permanently as part of the configuration and not in the iRule itself. Data group lists can be part of the bigip.conf or stored in a separate file known as an external data group list. The benefit of making Data Group Lists part of the configuration is that the lists will be pre-populated with data before the iRule ever runs. Therefore, Data Group Lists are excellent for storing URI redirection mappings, authorised IP ranges (ACLs) or specific key words. Pretty much any static data can be stored within a Data Group and then be used in one or multiple iRules. Also, since data groups are configuration objects, they are shared among the device groups. Another benefit is that you can modify a data group without ever touching the iRule that is referencing it. So if, for instance, you receive a new external IP address that needs to be allowed to communicate with your virtual server, you can just modify the Data Group List that the iRule (that is used by the virtual server) is referencing. You can modify a Data Group by using the CLI, the WebGUI or the iRule editor, which we'll discuss later on in this chapter. One negative aspect is that iRules cannot affect configuration objects. This means that you can only read, sort and reference data groups within iRules but not actually modify them. This has to be done manually using the CLI/WebGUI or scripted using tmsh or iControl. In the following diagram you can see what a Data Group List looks like:



Data Group Lists can also be referred to as a Class. A class is exactly the same thing as Data Group List and both terms are correct. This can cause confusion but when someone is talking about classes or data group lists they are referring to the exact same thing.

What Are the Benefits of a Data Group? In the previous section, I mentioned some of the benefits of Data Group Lists, and there is really one negative aspect and that is, like mentioned earlier, that iRules cannot modify the Data Group itself. They must be modified manually or by a script using tmsh or iControl. Creating multiple if statements in order to match multiple entries is acceptable to a certain point. If you have more than 10 entries, it is much more efficient to create a Data Group List and refer to this in the iRule. Thanks to the indexed and hashed format used by the Data Group, a list containing 100 or even 100,000 entries will have roughly the same performance. Since Data Group Lists are also part of the configuration, they will also survive failovers and reboots.

How Do I Use Data Group Lists? In order to use Data Group Lists within your iRules, you will have to use the class command. In the following example we can see how a data group list can be referenced. In this example we want to limit the access to the virtual server. It will only allow access to the pool if the data group list Allowed_IP is matched. If it does not match the incoming request, it will be silently discarded.



when CLIENT_ACCEPTED {
    if { [class match [IP::client_addr] equals "Allowed_IP"] } {
        pool http_pool
    } else {
        discard
    }
}
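For completeness, a data group like the Allowed_IP list referenced above could be created from tmsh along these lines (the name and addresses are placeholders, and the exact records syntax can vary slightly between TMOS versions):

config # tmsh create ltm data-group internal Allowed_IP type ip records add { 10.10.1.0/24 { } 192.168.1.50 { } }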

Creating Your iRule
You can create iRules using both tmsh and the WebGUI. In order to create an iRule from the WebGUI, use the following instructions:

1. In the Navigation tab, go to Local Traffic > iRules.
2. In the upper right corner, click Create.
3. In the Name box, enter the name of your iRule.
4. In the Definition box, enter the code of your iRule.
5. When finished, click Finished.

When you have finished creating your iRule, remember to assign it to your virtual server.
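If you prefer the command line, the assignment can also be done with tmsh; the virtual server and iRule names below are examples, and note that this replaces the virtual server's existing iRule list with the one you specify:

config # tmsh modify ltm virtual vs_http rules { http_to_https_redirect }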

The iRule Editor A great tool for creating iRules is the iRule Editor available at DevCentral. The iRule editor is an editor that will provide you with full syntax highlighting, auto-complete, integrated help and colourisation. You can directly connect to your BIG-IP system to create iRules but also modify existing ones. When you save them, they will be automatically updated and uploaded to the BIG-IP system. You can also update Data Group Lists.



Learn more iRule Wiki In order to learn more about iRules you can visit the iRule Wiki where you can find the different commands, events, functions, operators etc. https://devcentral.f5.com/wiki/iRules.HomePage.ashx

CodeShare
Most of the time when you need to add functionality to your virtual server, someone has already created an iRule for that exact purpose, or one that is very similar to what you need. Therefore, when you need to add an iRule, visit CodeShare. Here you can search for submitted iRules that can be used right out of the box. https://devcentral.f5.com/codeshare/topic/irules

Additional Literature
You can also buy Steven Iveson's book “An Introduction to F5 Networks LTM iRules”. This book focuses on teaching the reader to write their own iRules, and individuals with no prior programming knowledge will find this book invaluable. You can find it at the following locations:

Google Books: https://books.google.se/books/about/An_Introduction_to_F5_Networks_LTM_iRule.html?id=WWqlBAAAQBAJ&redir_esc=y
Amazon Kindle: http://www.amazon.com/An-Introduction-Networks-Ltm-iRules/dp/1291333193

iRules are a topic you will see on the exam. You should be able to read and understand iRules and know what they are doing. You should also remember in what scenarios you are required to use an iRule.

iApps
Easier Deployment of Applications
Creating a virtual server for a web server is usually pretty straightforward. There are, however, some applications that require far more consideration and configuration in order to function properly. Some of these applications are Microsoft Skype™, Microsoft Exchange™ and Citrix. These applications will most likely require you to create multiple different virtual servers that handle parts of the application, add certain profiles or add specific iRules. In order to make deployment easier for the BIG-IP administrator, iApps were created. Even though they require a lot of information, if the questions are answered correctly and the application is set up according to best practices, the application should work right out of the box. iApps are configured by answering a series of questions on how the application is configured and how the network topology is designed. When all of the questions are answered and you click Finished, the BIG-IP system will create all the configuration objects necessary to correctly set up the application. These configuration objects are all contained under a centralised application service and the need to configure detailed configuration objects individually is removed.



iApps Framework In the WebGUI, there are two different sections of iApps, Templates and Application Services.

Templates
In the Templates section, you store the configuration of an iApp. The template contains the different configuration sections and defines how they are presented to the administrator. Some iApp templates are created by iApp developers working at F5 in cooperation with the respective vendors, such as Microsoft or Citrix. iApps can also be created by the BIG-IP administrators themselves. The BIG-IP system is delivered with some built-in iApps, but it is also possible to download them from DevCentral.

Application Services
When you have used an iApp, the end result is stored under Application Services. When creating the application, the BIG-IP administrator chooses to create a new application service and then references the iApp template stored in the Templates section. When the questions of the iApp template have been answered, all of the different configuration objects, such as virtual servers, pools, pool members, nodes, profiles etc. will be added to the Application Service as one combined object. In the following diagram, you can see how it looks to fill out an iApp template:



And the end-results look like this:

Strict Updates
iApps are a great way to create highly advanced applications, but there is a downside. In order for iApps to function correctly, the configuration of the application has to follow best practices and match a certain scenario. The different scenarios can be read in the deployment guides that F5 provides for each specific application. Therefore, if the application owner has deviated from the standard even just a little bit, the iApp will fail to work and you will need to manually tweak your application in order for it to work. By default, you are not able to modify the configuration objects of an iApp; they are locked by the Application Service. In order to get around this problem, you can disable Strict Updates. This will give you the ability to modify the configuration objects that the iApp created. In order to disable strict updates, please use the following instructions:

Disabling Strict Updates
1. In the main tab, go to iApps > Application Services.
2. Click on the Application Service for which you want to disable strict updates.
3. When the application service has finished loading, go to Properties.
4. In the properties section, choose Advanced.
5. Uncheck the Strict Updates checkbox.
6. Click Update.
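The same setting can also be changed from tmsh; the application service name my_exchange is a placeholder:

config # tmsh modify sys application service my_exchange.app/my_exchange strict-updates disabled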



There is one thing you need to take into consideration when manually modifying an iApp: if you go back into the iApp after manually modifying it, go to Reconfigure and click Finished, all of your modifications will be reverted back to their original values. This can cause serious issues, and without a backup of the configuration, you might not know what the values were prior to reconfiguring the iApp. For this reason alone, some BIG-IP administrators just use the iApp as a starting point, then manually modify the configuration and never touch the iApp again.

What is a Route Domain?
Route Domains are configuration objects on the BIG-IP system that isolate network traffic. This concept works in conjunction with Partitions, where you assign a route domain to each partition, thus separating each application. The segmentation works by creating separate routing tables for each partition. The concept of segmenting traffic by using different routing tables is very common within the networking industry and many vendors use it, although under different names such as Vsys or Routing Instances. When using Route Domains, you have the opportunity to use the same IP address multiple times, as long as each instance resides in its own route domain.

Benefits of Using Route Domains One big benefit of using Route Domains is for hosting services. You might have multiple customers load balancing traffic using your BIG-IP system and using Route Domains you can isolate each customer within their own partition and route domain. Since the IP addresses can be the same as long as they are separated by route domains, the customers can have the same address scope and load balance traffic to the same pool members. The scenario is presented in the following diagram:



Route Domain IDs
Every time you create a route domain you will have to assign a Route Domain ID. This ID is a numerical identifier used with self IP addresses, virtual addresses, pool members, nodes and gateway addresses in order to define which route domain the configuration object is assigned to. This is done by appending %RouteDomainID to the IP address of that object. For example:

▪ 172.16.100.1%2 - Node
▪ 172.16.100.1:80%2 - Pool Member
▪ 10.10.1.100%2 - Virtual Address
▪ 10.10.1.100:80%2 - Virtual Server

All of the previous examples are part of the same route domain and the exact same entries can be created for any other route domain. If you do not add the route domain ID to the configuration object, it will be automatically assigned to the default route domain ID for that partition (0 for common). Keep in mind that the Route Domain ID needs to be unique, two route domains with the same ID cannot exist.
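As a rough tmsh sketch of the same idea (the names, ID, VLAN and addresses are placeholders, and the exact property syntax may differ slightly between versions), you could create a route domain and then reference it with the %ID notation when creating objects:

config # tmsh create net route-domain customer2_rd id 2 vlans add { vlan_customer2 }
config # tmsh create ltm pool customer2_pool members add { 172.16.100.1%2:80 }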

Parent ID Each route domain can have a parent route domain. This is identified by adding a parent ID to the route domain and the BIG-IP system can use this to search for another route. This means that if you create the route domain 1 and add the parent domain as 0, the BIG-IP system will start to search for a route in route domain 1 but if no route is found it will use route domain 0.



If no route is found in the parent domain (0), the search continues into that domain's parent, and so on, until a route is found or a route domain without a parent has been searched without finding a route.

About VLANs and Tunnels for a Route Domain For each route domain that you create, you can assign one or more VLANs, VLAN Groups or tunnels. When assigning the VLANs or tunnels, you will isolate this traffic within the route domain. Do note that each VLAN/Tunnel can only be assigned to one route domain. You can allow traffic to cross route domains by disabling the Strict Isolation option in the Route Domain.

About Default Route Domains for Administrative Partitions
The default route domain feature lets you designate a "default route domain" for a partition. This eliminates the need to specify the %ID route domain notation on each object you create, as this is done automatically. For each partition, there can only be one default route domain. If there is no default route domain assigned to a partition, the objects created within the partition will be automatically assigned route domain 0, which is the default route domain for Common.
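As a sketch, assigning route domain 2 as the default route domain for a partition named Customer2 could be done from tmsh like this (both names are examples):

config # tmsh modify auth partition Customer2 default-route-domain 2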



Creating a Route Domain
In order to create a route domain, use the following instructions:

1. Log on to the WebGUI.
2. Navigate to Network > Route Domains.
3. Click Create.
4. In the Name field, enter a name for the route domain.
5. In the ID field, enter an ID for the route domain.
6. If you want to restrict traffic in this route domain from crossing into another route domain, ensure that Strict Isolation is enabled.
7. If you would like to add a parent route domain, select a route domain in the Parent Name drop-down list.
8. In the VLANs section, click on the VLANs you would like to add to the route domain and click the << button to add them to the Members list.
9. When done, click Finished.



Lab Exercises: iRules

Exercise 11.1 – Directing Traffic to Specific Pools Using iRules

Exercise Summary
In this exercise, we'll create multiple pools, and based on a certain URI, we'll direct traffic to two different pools. Directing traffic to specific pools depending on certain criteria is very common in real-life scenarios, and it is very good to have practiced it in a lab environment. In this lab, we'll perform the following:

▪ Create multiple pools.
▪ Create an iRule that will direct the traffic.
▪ Create a new virtual server where we'll apply the iRule.
▪ Observe the behaviour.

Exercise Prerequisites
Before you start this lab exercise, make sure you have the following:

▪ Network access to the BIG-IP system's management port.
▪ One or more servers configured on the internal network that can be load balanced to. This should already be configured during the Building a Test Lab chapter.

Creating the Pools

1. Open up a browser session to https://192.168.1.245 and log in using the admin credentials.
2. Create three pools consisting of the following configuration:

Pool Name    Members
pool1        172.16.100.1:80
pool2        172.16.100.2:80
pool3        172.16.100.3:80

A load balancing algorithm is unnecessary in this scenario since we only have one pool member in each pool. Leave the other settings at their default values.
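If you would rather build the lab pools from the command line, the following tmsh commands are a rough equivalent of the table above (a sketch, not part of the original lab instructions):

tmsh create ltm pool pool1 members add { 172.16.100.1:80 }
tmsh create ltm pool pool2 members add { 172.16.100.2:80 }
tmsh create ltm pool pool3 members add { 172.16.100.3:80 }
tmsh save sys config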

Creating the iRule

1. Navigate to Local Traffic > iRules > iRule List and in the upper right corner press Create.
2. On the Local Traffic > iRules > iRule List > New iRule page, enter the following configuration:



Local Traffic > iRules > iRule List > New iRule
Properties
Name: pool_select_irule
Definition:

when HTTP_REQUEST {
    if { [findstr [HTTP::uri] "server=" 7] equals "1" } {
        pool pool1
    } else {
        pool pool3
    }
}

When done, click Finished.

Creating the Virtual Server

1. Create a virtual server with the following properties:

Local Traffic > Virtual Servers: Virtual Server List > New Virtual Server
General Properties
Name: vs_poolselect
Type: Standard
Destination: 10.10.1.103
Service Port: 80 or select HTTP
Configuration
Protocol: TCP
HTTP Profile: http
Resources
iRules: pool_select_irule

When done, click Finished.

Verifying Your Configuration Changes

1. Open a new browser session on your PC towards the following URLs and note which pool members receive the request:

URL                                        Pool Member
http://10.10.1.103
http://10.10.1.103/?server=1
http://10.10.1.103/?server=notspecified

What pool members were chosen and why? The results should be immediately displayed on the webpage.



Expected Results
The iRule we created searches the URI for the string server=, skips 7 characters forward and starts to read from that point. The statement says that if the value is set to 1 it will send the request to pool1, and if it does not contain "1" it will send the request to pool3. In our first attempt we request the page http://10.10.1.103 without any URI. Since this request does not match the statement, the request is sent to pool3. In the next attempt, we request http://10.10.1.103/?server=1, which matches the statement, and the request is sent to pool1. In our third and last attempt, we request http://10.10.1.103/?server=notspecified. In this request we do specify a URI, but it does not contain "1" and therefore does not match the statement, causing the BIG-IP system to send the request to pool3. If you receive multiple results that specify multiple servers, this is because the web page contains multiple objects and only the initial GET request contains the URI of /?server=1. All of the subsequent GET requests for the remaining objects that build up the page do not carry that URI, which means they default to pool3 as configured in the iRule.
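You can also drive the same tests from a command line instead of a browser. The sketch below assumes the client machine has curl installed and can reach the virtual server; it is simply an alternative way of sending the three requests listed above:

curl "http://10.10.1.103/"
curl "http://10.10.1.103/?server=1"
curl "http://10.10.1.103/?server=notspecified"

Because curl only requests the single object you name, you also avoid the extra GET requests a browser generates for embedded page objects.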

Exercise 11.2 – Creating Log Messages Using iRules

Exercise Summary
In this exercise, we'll use the same concept as the previous lab exercise. We'll direct traffic to particular pools based on the URI; however, at the same time we'll log a message to the LTM log. In this lab, we'll perform the following:

▪ Create an iRule that uses logging.
▪ Create a new virtual server where we'll apply the iRule.
▪ Observe the behaviour.

Exercise Prerequisites
Before you start this lab exercise, make sure you have the following:

▪ Network access to the BIG-IP system's management port.
▪ One or more servers configured on the internal network to which we can load balance traffic. This should already be configured during the Building a Test Lab chapter.
▪ The pools pool1, pool2 and pool3 which were created in lab exercise 11.1.

Creating the Logging iRule

1. Create an iRule that has the following content:



Local Traffic > iRules > iRule List > New iRule
Properties
Name: pool_select_logging_irule
Definition:

when HTTP_REQUEST {
    set fdstr [findstr [HTTP::uri] "server=" 7]
    set debug "1"
    if { $fdstr equals "1" } {
        if {$debug} {log local0. "[HTTP::method] [HTTP::host]/$fdstr - URI Matched! - Sending Traffic to pool1"}
        pool pool1
    } else {
        if {$debug} {log local0. "[HTTP::method] [HTTP::host]/$fdstr - URI Not Matched! - Sending Traffic to pool3"}
        pool pool3
    }
}

When done, click Finished.

Creating the Virtual Server

1. Create a virtual server that contains the following configuration:

Local Traffic > Virtual Servers: Virtual Server List > New Virtual Server
General Properties
Name: vs_irule_log
Type: Standard
Destination: 10.10.1.104
Service Port: 80 or select HTTP
Configuration
Protocol: TCP
HTTP Profile: http
Resources
iRules: pool_select_logging_irule

When done, click Finished.

Verifying Your Configuration Changes

1. Launch a terminal client such as PuTTY and SSH to 192.168.1.245 on port 22.
2. Log on using the account root and the password f5training.
3. When logged on, you should be in the bash shell indicated by the config# prompt.
4. Run the command: tail -f /var/log/ltm | grep URI
5. Open up a new browser session towards the following URLs and note which pool members will receive the request:



URL                                        Pool Member
http://10.10.1.104
http://10.10.1.104/?server=1
http://10.10.1.104/?server=notspecified

Expected Results
Just like the pool_select_irule iRule, we search the URI for the string server=, skip 7 characters forward and start to read from that point. The statement says that if the value is set to 1, the request is sent to pool1, and if it does not contain "1", the request is sent to pool3. The mechanics of the iRule are the same, meaning it behaves just like the pool_select_irule, except that when the debug variable is set to 1 we log each URI that we capture and can tell which pool will receive the incoming request. If you receive a lot of log entries that do not contain a URI when you have specified one, this is caused by the same issue as in the previous lab exercise. The web page contains multiple objects and only the initial GET request contains the URI of /?server=1. All of the subsequent GET requests for the remaining objects that build up the page do not carry that URI, which means they default to pool3 as configured in the iRule.
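If the tail command from the verification steps produces too much output, you can narrow it down further from the bash shell; the following is a simple sketch using the strings logged by the iRule itself:

grep "URI Matched" /var/log/ltm
grep "URI Not Matched" /var/log/ltm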

Turning Off Logging

1. Edit the iRule pool_select_logging_irule and change the debug setting from "1" to "0" as shown below:

Local Traffic > iRules > iRule List > pool_select_logging_irule
Properties
Name: pool_select_logging_irule
Definition:

when HTTP_REQUEST {
    set fdstr [findstr [HTTP::uri] "server=" 7]
    set debug "0"
    if { $fdstr equals "1" } {
        if {$debug} {log local0. "[HTTP::method] [HTTP::host]/$fdstr - URI Matched! - Sending Traffic to pool1"}
        pool pool1
    } else {
        if {$debug} {log local0. "[HTTP::method] [HTTP::host]/$fdstr - URI Not Matched! - Sending Traffic to pool3"}
        pool pool3
    }
}

When done, click Finished.

2. Go back to your browser again and try to access the following URLs:



URL                                        Pool Member
http://10.10.1.104
http://10.10.1.104/?server=1
http://10.10.1.104/?server=notspecified

3. Go back to the terminal session. Does the iRule still log to the LTM log?

Chapter Summary

▪ Only available with physical devices, Always-On Management (AOM) is yet another embedded subsystem, in addition to the BIG-IP Host Management Subsystem (HMS). Its simple purpose is to provide Lights-Out management and other basic supporting functions for the BIG-IP system.

▪ iRules are scripts based on the Tool Command Language (TCL) that can both examine and alter traffic passing between the client and the BIG-IP system (client-side) and between the BIG-IP system and the end-server (server-side).

▪ Data Group Lists are objects you can create on the BIG-IP system which act as a list that can be used within an iRule. You can create a list of just keys or a list that contains keys matched with values. Data Groups are a particular type of memory structure within iRules, and the unique thing about Data Group Lists is that they are stored permanently as part of the configuration and not in the iRule itself.

▪ iApps are an easier way of deploying larger applications such as Microsoft Skype, Microsoft Exchange or Citrix. iApps are configured by answering a series of questions on how the application is configured and how the network topology is designed.



Chapter Review

1. True or False: iRules should not be used if the functionality exists in the built-in configuration.

a. True
b. False

2. When is the iRule Event CLIENT_ACCEPTED triggered?

a. Whenever the BIG-IP system has fully parsed a complete client HTTP request.
b. Whenever the BIG-IP system receives a new entry in its connection table.
c. Whenever the BIG-IP system has chosen a pool member.
d. Whenever the command HTTP::collect has been used.

3. You need to modify a virtual server configured using an iApp but you are presented with the error: The application service has strict updates enabled, the object must be updated using an application management interface. What setting do you need to change in order to solve the problem?

a. Download a different iApp template.
b. Log into the BIG-IP system using the admin account.
c. This cannot be solved. You will have to create your own virtual server with the configuration you need.
d. Disable Strict Updates.





Chapter Review: Answers

1. True or False: iRules should not be used if the functionality exists in the built-in configuration.

a. True
b. False

The correct answer is: a

The rule of thumb when working with iRules is that if the functionality exists in the built-in configuration, then that is the most efficient way of adding the functionality. Even though iRules are really fast and efficient (if written correctly), when a function is built into the core, the performance will be even better.

2. When is the iRule Event CLIENT_ACCEPTED triggered?

a. Whenever the BIG-IP system has fully parsed a complete client HTTP request.
b. Whenever the BIG-IP system receives a new entry in its connection table.
c. Whenever the BIG-IP system has chosen a pool member.
d. Whenever the command HTTP::collect has been used.

The correct answer is: b

This event will trigger whenever the BIG-IP system receives a new entry in its connection table. When this happens depends entirely on the protocol. If it is TCP, it will trigger whenever the TCP three-way handshake is complete. For UDP, the BIG-IP system creates a table entry for the first initial request and assigns it a timeout value.

3. You need to modify a virtual server configured using an iApp but you are presented with the error: The application service has strict updates enabled, the object must be updated using an application management interface. What setting do you need to change in order to solve the problem?

a. Download a different iApp template.
b. Log into the BIG-IP system using the admin account.
c. This cannot be solved. You will have to create your own virtual server with the configuration you need.
d. Disable Strict Updates.

The correct answer is: d

By default, you are not able to modify the configuration objects of an iApp; they are locked by the Application Service. In order to work around this problem, you can disable Strict Updates. This will give you the ability to modify the configuration objects that the iApp created.



16. Troubleshooting Hardware

Introduction
A physical BIG-IP appliance contains a considerable number of hardware components, all of which can fail in any number of ways. We are long past the days where we are able to actually fix a hardware fault ourselves in most cases. Often a reboot or reset does the trick, but if not we usually either replace the part, or possibly the whole device. Despite that, the skills necessary to identify a fault and the hardware involved are still required. This section will help you do just that, as well as describe how to move traffic off a faulty device onto a working one so you can troubleshoot, test or simply reboot. That's something you might also want to do for other operational or business reasons, such as performing a software upgrade.

Even if you are using Virtual Edition, some of this knowledge still applies and is useful. Despite the abstraction introduced by a hypervisor, virtual hardware in the form of software and drivers can still contain bugs and/or develop faults. If the underlying physical hardware fails, it is not certain that the symptoms are presented within the VM in a clear way, thanks to the misdirection and misrepresentation virtualisation relies upon to work.

As an aside, it is worth mentioning that hardware and software are often intimately related and highly interdependent. You may not think a power supply has a software component, but it must have one in order to provide status and temperature data to TMOS. Equally, the NICs, compression and encryption chips and CPUs in any system run firmware or microcode that allow them to provide their functions and allow TMOS to utilise them.

End User Diagnostics (EUD)
EUD is a software program (part of TMOS) used to perform hardware tests on BIG-IP physical appliances (including VIPRION blades and Herculon devices). You would typically use it to verify a suspected hardware issue at the request of F5 Support or an F5 Support Partner. Providing an EUD test report when creating a case with F5 Support will greatly increase the speed at which an RMA is processed, should one be necessary. F5 recommends always using the latest version available for your platform.

Obtaining the Latest EUD Software
To obtain the latest EUD software, visit the F5 Downloads site here: https://downloads.f5.com/ and then;

▪ Login as necessary
▪ Click Find a Download and then, to the right of Product Family: Hardware-Specific, select the Platform Product Line: Platform / EUD
▪ Select your hardware platform from the drop-down list
▪ Click on EUD_XF-vXX.X.X
▪ Accept the license agreement by clicking I Accept
▪ If you intend to run EUD using a USB CD-ROM, download the .iso file if one is available for your platform and create your bootable CD as detailed shortly
▪ If you intend to run EUD using a USB storage device, download the .zip file and create your bootable USB storage device as detailed shortly
▪ If you intend to install and then run EUD from the boot menu, download the .im file and install the package on the device as detailed next.



Installing EUD on the BIG-IP Device
Should you wish to install (or rather update) EUD on a device itself (so you can launch it from the boot menu rather than CD-ROM or USB storage device), download the .im file as described in the prior section. Next, copy the file to a suitable location on the device and install the package with this command:

$ im /path/to/file_name.im

For information on copying files to a device across the network, see the File Transfer chapter.

Creating an EUD Bootable CD-ROM
If you'd like to boot your device from a USB CD-ROM and drive instead, download the .iso file to a suitable location. Then create a bootable CD-ROM using an appropriate application such as;

▪ UNetbootin - Linux, Windows & macOS
▪ Rufus - Windows only

You could also create a bootable USB storage device using the .iso file, but the method described next is preferable. ISO files are only available for older platforms and consequently so is the ability to launch EUD via a CD-ROM drive.

Creating an EUD Bootable USB Storage Device
If you'd like to boot your device from a USB storage device, download the .zip file, open it and extract the files within to a suitable location. Then insert your device, browse to where the files are in a Command Prompt and run this command:

> install.bat <drive letter>:

The contents of the device will be overwritten.

Launching EUD
EUD is accessible via the serial console only, on system boot. You'll need to establish a serial console connection to the device in order to access EUD (and possibly the boot menu). Because F5 recommends all network cables be disconnected from the system prior to running EUD, you should have physical access to the device, and it will obviously need to be taken out of service. Not removing all cables may result in false negatives (or positives). Physical access isn't required if you are lucky enough to have a remote serial terminal server connected that provides serial console access over the network and don't plan to boot via USB media. If the suspected hardware issue isn't network interface related, not removing all cables may not be an issue. Obviously, F5 doesn't recommend this approach. You load and access EUD using one of these methods;



▪ Attach a USB CD-ROM or DVD drive containing a disk with the bootable EUD image and boot the device; the EUD will load automatically
▪ Attach a bootable USB storage device (memory stick, thumb drive) containing the bootable EUD image and boot the device; the EUD will load automatically
▪ Boot the device and at the boot menu select End User Diagnostics

On VIPRION systems you should run EUD from the local console of the blade being tested.

Running Tests
Once EUD has been launched you'll be presented with the EUD Menu. You can then enter an option number to run one or more tests. After a test or series of tests is performed, you'll be returned to this menu. The menu will look something like this on most systems;

1 System Report
2 Sensor Report
3 SFP/XFP Report
4 LED Test
5 SCCP I2C Test
6 PCI Test
7 Quick System RAM test
8 System RAM Test
9 LCD Test
10 Internal Packet Path Test
11 Internal Loopback Test
12 PVA Memory Test
13 SSL Test
14 FIPS Test
15 Compression Test
16 SMART Test
17 File System Check
18 Run all Tests (Non User Intervention, Uses Normal Ram Test)
19 * Run all Tests (User Intervention Required, Uses Quick Ram Test)
20 * Display Test Report Log
21 * Quit EUD and Reboot the System

In most cases it's best to run all tests, most of which are fairly obvious from the option text. If you would like more detail, refer to this page: https://support.f5.com/kb/en-us/products/big-ip_ltm/releasenotes/related/EUD_11_4.html.



Viewing Output As well as the interactive display of results shown when running tests, you can also select the EUD Menu option to Display Test Report Log. If you have run the full suite of tests the output can run into hundreds of lines. Additionally, the test report log file: eud.log is stored on the system in the /shared/log directory and can be viewed at some later point when EUD is not running. Some EUD versions store the log file at / on the boot partition on some older platforms instead. This partition is not mounted by default; see here for instructions on how to do so: K10897: The EUD utility for some platforms no longer stores the test report log in the /shared/log directory. A successful series of tests will result in output similar to this at the console:

Completed test with 0 errors.

If an issue has been identified, you'll observe output like this at the console:

Test Complete: SMART Test: FAILED
Test Complete: SSL Test: FAILED

LCD Warning Messages
We discussed the LCD panel in the BIG-IP Administration chapter, Initial Access and Installation section, where we learned how to configure the management IP address through the LCD panel. When troubleshooting hardware issues on your BIG-IP appliance, the LCD panel can also be used to determine what is happening with the system. Depending on which BIG-IP platform you are running, the behaviors might be different, and in this chapter we'll only discuss the behavior of the legacy systems.

Legacy systems:

▪ BIG-IP 1500 (C36)
▪ BIG-IP 3400 (C62)
▪ BIG-IP 4100 (D46)
▪ BIG-IP 6400 (D63)
▪ BIG-IP 6800 (D68)
▪ BIG-IP 8400 (D84)
▪ BIG-IP 8800 (D88)

If you’re running the new systems, then the best way to determine how your system will behave is to review the platform guide for your respective platform.



LED Indicators
The legacy systems use four front-panel LEDs to indicate the current status of the system:

▪ Power
▪ Status
▪ Activity
▪ Alarm

The Power LED Indicator
The Power LED indicator has three different behaviors in order to report its status: on (green), error (red) or off (none). The complete list is displayed in the following table:

LED Behavior    Description
Solid Green     Normal Power ON Status
Off (none)      Normal Power OFF Status
Solid Red       Standby Power/Failure

The Status LED Indicator
The Status LED reports if the BIG-IP system is Active or Standby using the following behaviors:

LED Behavior    Description
Solid Green     Active
Solid Yellow    Standby

The Activity LED Indicator
The Activity LED Indicator reports if network traffic is going to the CPU for load balancing or other software processing. The internal Ethernet interfaces send a PHY signal to their connecting switch subsystem, which relays this signal to the CPU subsystem. It may be possible for the Activity LED to flicker even though the Ethernet interfaces are not active. The Activity LED is not a substitute for the activity LEDs that are present on each individual Ethernet interface.

LED Behavior       Description
Flickering Yellow  There is activity being sent from the switch subsystem to the CPU subsystem

The Alarm LED Indicator
Every alarm that can be sent to the LED indicator has been assigned an alert level. The alert is triggered using an SNMP trap and a log message, and when it is triggered it is sent both to the Alarm LED Indicator and to the LCD screen with a description of the alarm. The alert levels and their LED behaviors are summarised in the following table:

Alert Level      LED Behavior
0 – Warning      Solid Yellow
1 – Error        Blink Yellow
2 – Alert        Solid Red
3 – Critical     Solid Red
4 – Emergency    Blink Red
5 – Information  –

All of the alerts that affect the Alarm LED Indicator are defined in the file /etc/alertd/alert.conf. It is the lcdwarn function that modifies the Alarm LED Indicator, and the entries usually have a description of the problem. This is displayed in the following output:

alert BIGIP_SYSTEM_CHECK_E_FAN_SPEED_LOW {
    snmptrap OID=".1.3.6.1.4.1.3375.2.4.0.115";
    lcdwarn description="Fan speed too low." priority="3"
}

In this example we can see that this alarm will be triggered when the system has detected that the fan speed is too low. This message will be sent to the LCD display and the Alarm LED will change to a solid red state because it has a priority of 3. The events that trigger the Alarm LED Indicator and the LCD screen will also be written to the /var/log/ltm file. Therefore, it is also a good idea to review the log files for the log entry in order to get some more information regarding the alarm. For instance, the following event will be written to the /var/log/ltm file:

emerg system_check[11277]: 010d0010:0: Power supply #2 fan-1: fan speed (0) is too low.

Modifying alert.conf
It is possible to change the settings in alert.conf if necessary. Just be sure that you perform a backup of the alert.conf file before you modify it. If you need to modify the alerts, please use the following instructions:

1. Log on to the CLI of the BIG-IP system.
2. Perform a backup of the alert.conf file by issuing the following command:

cp /etc/alertd/alert.conf /etc/alertd/alert.conf.original



3. Change the permissions of the alert.conf file from read-only to read-write by issuing the following command:

chmod 644 /etc/alertd/alert.conf

4. Modify the alert.conf file and change the settings that you like using a text editor:

vi /etc/alertd/alert.conf

5. Save the alert.conf file by hitting the ESC key and then typing the following:

:wq

6. Change the permissions of alert.conf back to read-only by issuing the following command:

chmod 444 /etc/alertd/alert.conf

7. In order for the changes to take effect, restart the alertd process by issuing the following command:

bigstart restart alertd

Backing up the Original alert.conf
If you have modified the alert.conf file, then it is very important that you also include the file in any UCS archives. By default, UCS archives do not include modified files and these need to be added to the UCS configuration. The files that UCS gathers when performing a backup are defined in the following file: /usr/libdata/configsync/cs.dat. In order to include the original alert.conf file in the UCS archive, please use the following instructions:

1. Log on to the CLI of the BIG-IP system.
2. Backup the existing cs.dat file in order to keep the original by issuing the following command:

cp /usr/libdata/configsync/cs.dat /usr/libdata/configsync/cs.dat.original

3. By default, the /usr file system is mounted in read-only mode. Before editing the cs.dat file we need to remount /usr as read-write. To do this, issue the following command:

mount -o remount,rw /usr

4. Using a text editor, modify the cs.dat file:

vi /usr/libdata/configsync/cs.dat



5. At the end of the file, add the following entries:

#Custom UCS keys
save.[number].file = /etc/alertd/alert.conf.original
save.[number].file = /usr/libdata/configsync/cs.dat.original

Replace [number] with a higher number than the last key being used.

6. Save the cs.dat file by hitting the ESC key and then typing the following:

:wq

7. Remount the /usr file system as read-only by performing the following command:

mount -o remount,ro /usr

8. Now both the original alert.conf file and the cs.dat file should be included in the UCS archive.

Clearing Alerts
When an alert has been triggered, all existing non-acknowledged alarms will have to be cleared before you can access and use the LCD panel. To clear the alarms, just press the Check button to clear any alerts on the LCD screen.

Clearing the LCD Warnings and Alarm LED Remotely (Using the CLI)
In some scenarios you may want to remotely clear the LCD warnings, because the BIG-IP system is located far away from your location or because you want to avoid having to instruct on-site personnel on how to clear the warnings.

Clearing the LCD Panel
To clear the warnings displayed on the LCD panel, issue the following command:

lcdwarn -c [level] [slot]

Replace the [level] value with the specific alert level that you would like to clear. The acceptable values are:

▪ 0|1|2|3|4|5

Or:

▪ warning, error, alert, critical, emergency, information

Replace the [slot] value with the slot for which the alarms should be cleared.



Specifying a value other than 0 is only necessary for VIPRION platforms.

The acceptable values are: 0|1|2|3|4|5|6|7

For example, in order to clear the LCD warnings of critical messages, issue the following command:

lcdwarn -c critical 0

Clearing the Alarm LED
In order to clear the Alarm LED, you have to clear all LCD warnings at all alert levels and in all slots (VIPRION systems). In order to do so, issue the command appropriate for the platform in question.

All non-VIPRION BIG-IP platforms:

for i in 0 1 2 3 4 5; do lcdwarn -c "${i}" 0; done

Running this command on a BIG-IP platform that is not equipped with an LCD panel will only clear the Alarm LED Indicator.

All VIPRION platforms that have fully populated blade slots:

for i in 0 1 2 3 4 5; do for j in 0 1 2 3 4 5 6 7; do lcdwarn -c "${i}" "${j}"; done; done

If you run the above command on a VIPRION platform that has unpopulated blade slots, it will generate the following error in the /var/log/ltm log:

012a0004:4: ledSet error: LopDev: sendLopCmd: Lopd status: 1 packet: action=2 obj_id=3c sub_obj=0 slot_id=2 result=2 len=0 crc=e071 payload= (error code:0x2)

This error can be safely ignored as it does not affect the traffic processing of the VIPRION system. However, it is better to use the next command to prevent the VIPRION system from generating errors.

All VIPRION platforms that have unpopulated blade slots:

for i in 0 1 2 3 4; do for j in 0 1; do lcdwarn -c "${i}" "${j}"; done; done

The LCD panel can also be turned off entirely by issuing the following command:

tmsh modify sys global-settings lcd-display disabled



Log Files
Log files can be an excellent source of information when troubleshooting any issue. Where hardware is concerned, the /var/log/ltm log file is likely to be the most useful. See the later Facilities section for more information on what other files might be useful. Below you'll find some examples of how hardware errors might present themselves in log files. You can find many more by visiting https://ask.f5.com/ and searching for a term such as 'hardware error'.

Memory Parity Error

info bcm56xxd[1366]: 012c0016:6: unit 0 L2_ENTRY_ONLY entry 130152 parity error

Compression Card Error

crit tmm1[18601]: 01010025:2: Device error: (null) Watchdog on unit 1!

Compression Card Memory Allocation Error

crit tmm7[18648]: 01010025:2: Device error: (null) qa_dc_get_flat_buffer: Allocation linked list is empty
err tmm7[18648]: 01010004:3: Memory allocation failed: qa_dc_dev_init_buffer_list: Couldn't allocate memory for buffer list.
err tmm7[18648]: 01230140:3: RST sent from 10.1.2.201:41805 to 10.1.1.51:80, [0x171d345:974] {peer} Compression error (Out of memory)
warning tmm7[18648]: 011e0001:4: Limiting open port RST response from 251 to 250 packets/sec

Fan Speed Error

system_check[7445]: 010d0002:2: Cpu 1: fan speed (7848) is too low.

Fan Speed Error

emerg system_check[6462]: 010d0005:0: Chassis fan 1: status (0) is bad.

Power Supply Error

010d0006:0: Chassis power supply 2 has experienced an issue. Status is as follows: FAN=bad; VINPUT=bad; VOUTPUT=bad

You can view this log file using the WebGUI and menu path System > Logs > Local Traffic. Use the search box to filter the log lines (or entries) to help you find what you are looking for. At the Linux host shell CLI you can view any log file with one of these commands;

config # more /var/log/ltm
config # less /var/log/ltm
config # cat /var/log/ltm

You can also use this command within tmsh;

(/Common)(tmos)# show sys log ltm



This has the advantage of also including log entries stored in rotated log files, which Linux host shell commands do not. You can only view rotated and compressed log files in the Linux host shell with one of these commands;

config # zcat /var/log/ltm
config # gzip -cd /var/log/ltm
config # mc /var/log/ltm

This command also allows you to restrict output to a specific date or time range (the last hour in this example) when using tmsh;

(/Common)(tmos)# show sys log ltm range now-1h

If you only wanted to display lines with the word error in them (for instance) you can use the grep command in the Linux host shell or tmsh, like so;

config # more /var/log/ltm |grep -i error
(/Common)(tmos)# show sys log ltm |grep -i error

The -i argument specifies a case insensitive match.

To only display lines with a critical priority (priorities are covered shortly) you can use the grep command in the Linux host shell or tmsh, like so;

config # more /var/log/ltm |grep -i crit
(/Common)(tmos)# show sys log ltm |grep -i crit

Particularly if you experience a hardware issue on boot, you may also want to use this command to display the Linux kernel boot log:

config # dmesg

As the output is usually significant, you can 'page' it one screen at a time like so:

config # dmesg |less

Or:

config # dmesg |more



As before, you can also filter output using the grep command as well:

config # dmesg |grep -i error |more

Priorities
You don't need to know this for the exam, but should you be interested, Syslog message priorities are similar to those found on most network devices. Selecting a level normally automatically selects all lower priority levels too (although this can be controlled), as follows (low to high);

▪ emerg – 0
▪ alert – 1
▪ crit – 2
▪ err – 3
▪ warning – 4
▪ notice – 5
▪ info – 6
▪ debug – 7 (aka *)

Currently configured logging levels for each facility can be displayed with this command:

(/Common)(tmos)# list sys syslog all-properties

You can modify logging levels for each facility with this command:

(/Common)(tmos)# modify sys syslog <facility>-from|<facility>-to <level>

Facilities
A syslog facility is the 'generic' source of a message; for instance, postfix and snmpd will both log messages using the mail Facility. A Facility could be said to represent the purpose, or function, of the applications that use it, in general terms anyway. However, it's quite possible an application could use more than one; a mail application could use the mail Facility for strictly email related messages and the auth Facility when someone fails to login to its user or management interface. Facilities are pre-defined and cannot be modified; most modern implementations define 23 Facilities. Facilities local0 to 7 are available for user-defined purposes, hence their use with TMOS and many other network device software for logging purposes. Few of the pre-defined Facilities are appropriate. You would generally use one of these Facilities when logging from within an iRule. HMS, TMM and LTM related logging Facilities, the source application or process that creates the messages and the associated local log files are as follows;



▪ local0 – BIG-IP (including TMM, iRules and LTM) messages. Log file: /var/log/ltm (WebGUI: Logs > Local Traffic)
▪ local1 – Enterprise Manager and APM messages. Log files: /var/log/em, /var/log/apm (WebGUI: Logs > Access Policy)
▪ local2 – GTM and Link Controller messages. Log file: /var/log/gtm (WebGUI: Logs > GSLB)
▪ local3 – ASM messages. Log file: /var/log/asm
▪ local4 – ITCM Portal and server (iControl) messages. Log file: /var/log/ltm (WebGUI: Logs > Local Traffic)
▪ local5 – Packet Filter messages. Log file: /var/log/pkfilter (WebGUI: Logs > Packet Filter)
▪ local6 – httpd (webGUI) messages. Log file: /var/log/httpd/httpd_errors
▪ local7 – Linux boot messages. Log file: /var/log/boot.log
▪ cron – Cron daemon messages. Log file: /var/log/cron
▪ daemon – System daemon (named, ntpd, sshd, Advanced Routing Module and other Linux daemons which don't have a dedicated facility) messages. Log file: /var/log/daemon.log
▪ kern – Linux kernel messages (i.e. HDD errors). Log file: /var/log/kern.log
▪ auth – Authentication messages that don't contain sensitive information. Includes login attempts and will provide an indication of dictionary attacks. Log file: /var/log/secure
▪ authpriv – Authentication messages that contain sensitive information. Log file: /var/log/secure
▪ mail – Mail system service messages. Log file: /var/log/maillog
▪ user – User process related messages. Log file: /var/log/user.log

Not all messages will be logged using a Facility; the Linux host operating system for instance. Generally, these messages (not those for the auth, authpriv, cron, daemon, mail and news Facilities) will be logged in /var/log/messages and are what's shown when viewing Logs > System in the webGUI. This file is also used within SCCP/AOM subsystems (in relation to the SCCP/Hardware Watchdog and interface errors). The /var/core directory is also used as the location to dump core files (in cases of full system failure). On a healthy system it should be empty. There are also a few other files without a facility;

▪ /var/log/tmm – Messages from TMM processes only
▪ /var/log/tomcat(4)/* – webGUI Java errors
▪ /var/log/tmm.start – records system boot events


Perform a Failover
When you have configured your BIG-IP systems in a high-availability setup and are experiencing issues with your application delivery environment, it might be appropriate to perform a failover when troubleshooting. There are scenarios where it is very hard to tell if the problem is related to hardware or software. Therefore, when you are troubleshooting and you cannot find any issues with the configuration or the software, the only option that you might have left is to actually perform a failover. If a failover solves the issue, then the problem is most likely related to the hardware. This does not necessarily mean that the hardware problem exists on the BIG-IP system. It could also be that the devices between the client and the BIG-IP, or between the BIG-IP and the end-servers, are experiencing an issue. The BIG-IPs might be connected to the same upstream or downstream switches, but the port that unit A is connected to is experiencing issues. Therefore, when traffic is failed over to unit B, it will flow through a different port on the switch, one that is currently functioning correctly.



In the previous figure you can see that we have chosen to set up our application environment with complete high-availability. We have the firewalls, upstream switches, BIG-IP devices and the downstream switches, which are all configured in HA pairs. The dotted line represents where the traffic is currently being processed. However, the upstream switch is currently experiencing an issue which prohibits it from passing traffic.



This could be that the port on the switch is experiencing issues or that the switch administrator has misconfigured the VLAN which the BIG-IP device is connected to. This causes a major outage since no traffic is being sent to the BIG-IP device. In some environments, different departments handle different products and you may not even have access to the switches and can therefore not troubleshoot them. Either way, this is a great example of when a failover would actually solve the current issue. Since the other upstream switch is not experiencing the same issue, failing over the traffic to the standby unit will make traffic pass through a different path that is currently working. However, before you perform the failover, make sure that the configuration between the devices is in sync. This means that the configuration is identical on both systems. The figure above is just an example to prove the concept and highlight when a failover is appropriate in order to solve an issue. Usually you have a fully connected network topology connecting everything together, creating multiple paths between each device.
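Before forcing a failover you can confirm the device group is in sync from tmsh. A brief sketch (the device group name failover_dg is only an example):

tmsh show cm sync-status
tmsh run cm config-sync to-group failover_dg

The first command shows whether the devices report In Sync; the second pushes this unit's configuration to the device group if they do not.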

Consequences of Performing a Failover
Before you consider performing a failover in your BIG-IP environment you should take into consideration the consequences this might have. Depending on what applications you are currently running through your BIG-IP system, some applications are dependent on the state of a connection in order to function properly. Therefore, in order for the failover to be transparent to the clients currently connected to the active unit, the state of the active connections needs to be shared with the standby unit. This is known as a stateful failover and we discussed this in the High-Availability chapter previously in this book. If the current connection state is not shared with the standby unit, the connections that are being SNAT'd, persisted or using any other active connection state will be lost when the failover is performed. This means that if a client is actively shopping on your website, when the failover occurs they will lose everything they have put into their shopping cart and will have to start over from the beginning. The same goes for a client that is currently downloading a file from a file server through the BIG-IP system. The connection to the server will be lost and the client will have to restart the download from the beginning. This will create a very poor experience for the end-user. Therefore, it is recommended to configure stateful failover, and we covered this in the High-Availability chapter. The reason why stateful failover is not enabled by default is because you are not required to buy two BIG-IP devices; you can just buy one. Stateful failover is only possible when using two BIG-IP devices in a Sync-Failover device group. This is why it is disabled by default.



Just because you have configured stateful failover, you cannot be entirely sure that the user experience is unaffected. We discussed this earlier in the High-Availability chapter, and the main reason for this is that the BIG-IP device can handle complex proxy functionality such as SSL offloading and iRules, which is not possible to mirror to the other devices in the failover group. With that said, you will most likely minimise the impact, and this is why it is still highly recommended that you enable it. With that in mind, you should always think twice before bouncing traffic between BIG-IP devices.

How to Perform a Failover
In order to perform a failover, please use the following instructions:

WebGUI

1. Log on to the WebGUI using an administrative account.
2. Go to Device Management > Devices and click on the active device.
3. Scroll down to the bottom and click on Force to Standby.

CLI - tmsh

1. Log on to the CLI using an SSH client of your choice.
2. When logged into the CLI, issue the following command:

tmsh run /sys failover standby

Troubleshooting System Interfaces
When troubleshooting hardware issues, making sure that the system interfaces (network interfaces) are operating successfully is very important, as these components are responsible for sending and receiving the data between the client and server. The BIG-IP system collects statistics for each interface, and this is very helpful when determining the current health of a particular interface. In this section, we'll discuss the different system interfaces that exist on the BIG-IP system, along with VLANs and trunks, and what tools you have to troubleshoot them.

The Network Components Hierarchy
Before we start discussing the system interfaces, VLANs and trunks, we would just like to describe the hierarchy of these components and how they are linked together. This will give you a better understanding while reading through the following sections. The first component is the system interfaces, which represent the physical network interfaces. These system interfaces are then either assigned to a VLAN or used to create what is known as a trunk. If you decide to create a trunk, the trunk will then be used as an interface in the VLAN. VLANs can in turn be added to what is known as a VLAN Group, which is a logical container for multiple VLANs. The final piece of the hierarchy is the self-IP addresses, which we covered earlier in this book. When creating a self-IP address, you will have to specify which VLAN or VLAN Group it should be assigned to. When that self-IP address is assigned to a VLAN, the VLAN will start operating within that particular address space. The complete hierarchy is displayed in the following diagram:
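The same hierarchy can also be expressed in tmsh, which makes the dependencies obvious: interfaces (or a trunk) go into a VLAN, and a self-IP is then bound to that VLAN. A minimal sketch using the lab addressing (the object names are examples only):

tmsh create net vlan internal interfaces add { 1.2 { untagged } }
tmsh create net self selfip_internal address 172.16.100.31/16 vlan internal
tmsh list net vlan internal
tmsh list net self selfip_internal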



The System Interfaces
When talking about the System Interfaces we mean the actual network interfaces on the BIG-IP system. The configuration of these interfaces is essential for the BIG-IP to receive and deliver traffic successfully. When troubleshooting, you should ask yourself questions such as: is the interface linked to the correct VLAN? Are the IP addresses correct? The System Interfaces are used to connect the BIG-IP system to other devices in the network such as next-hop routers, layer 2 devices, end servers etc. There are currently two types available:

▪ The Management Interface - This is a special interface which is dedicated to performing specific system management functions.

▪ The TMM Switch Interfaces - These interfaces are dedicated to handling the application traffic that traverses between the BIG-IP, the client and the end-servers.

The system interfaces have many different properties such as MAC address, duplex mode and media speed that you as an administrator can configure. The interfaces can be assigned to VLANs where they are assigned VLAN IDs and they can also be used to build what is known as a trunk, where you add multiple interfaces to form one big interface that will increase the throughput and add redundancy.

Link Layer Discovery Protocol (LLDP)
Link Layer Discovery Protocol is a Layer 2 industry-standard protocol that is used by network devices to advertise and receive the identity and capabilities of other network devices that are present on the same network. It transmits and receives this device information using what are known as LLDP Data Units (LLDPDUs). The BIG-IP system supports this protocol, and using the WebGUI or tmsh you can specify exactly which content the LLDPDUs should transmit or receive when communicating with neighboring devices. It is also possible to configure how frequently LLDPDUs are sent and the number of neighbors per interface from which messages can be received.
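LLDP behaviour is configured per interface and globally. A hedged example follows (property names as found in recent TMOS versions; verify against your release's tmsh help):

tmsh modify net interface 1.1 lldp-admin txrx
tmsh show net lldp-neighbors

The first command enables both transmitting and receiving LLDPDUs on interface 1.1; the second displays any neighbours the system has learned.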



The Interface Properties As a BIG-IP administrator there are plenty of properties that you can configure together with enabling or disabling the interfaces. You can set the media type, the duplex mode and configure the flow control. There are however settings that cannot be changed such as the MAC address.

The Interface Naming Convention
The names of the interfaces follow a specific standard that is based upon which slot the interface is installed in and what port number it is assigned. It is constructed in the format [s].[p], where [s] stands for slot and [p] stands for port. Some examples are 1.1, 1.2, 2.1 and 2.2. The only interface that is an exception to this standard is the management interface, which is simply named MGMT.



Viewing Interface Information
Using the WebGUI you can navigate to the interfaces and view their current status (UP or DOWN). You can also view other useful information such as:

▪ Interface availability
▪ Media Speed
▪ MAC Address
▪ Active Mode

Interface State
As mentioned earlier, it is possible to disable an interface on the BIG-IP system. You can view the current state by clicking on the specific interface and reviewing the State property. It can be set to either Enabled or Disabled by simply selecting the desired state in the drop-down list. By default, all interfaces are configured with the state Enabled.
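The same information and state changes are available in tmsh, which is useful when the WebGUI is unreachable. For example (a short sketch):

tmsh show net interface
tmsh modify net interface 1.1 disabled
tmsh modify net interface 1.1 enabled

The show command includes the status, media and counters for every interface; the modify commands toggle the administrative state of interface 1.1.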



Flow Control
Flow Control is the function responsible for adjusting the rate at which frames are sent between network devices. It handles this by sending what are known as Pause Frames to the peer device as a way to pause frame transmission, so that the device can catch up on data processing when it receives more data than it can handle at that moment. Each data packet sent and received is handled by a First-in, First-out (FIFO) queue, and if this queue fills up it will result in data loss. This property can also be configured on the interfaces and can be set to the following values:

▪ Pause None - Disables Flow Control.
▪ Pause TX/RX - Specifies that the device will honor pause frames received from its peer, but it will also generate and send pause frames when necessary. This is the default behavior.
▪ Pause TX - Specifies that the device will ignore pause frames from its peer but will generate and send pause frames when necessary.
▪ Pause RX - Specifies that the device will honor pause frames from its peer but will never generate them.
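Flow control can also be set from tmsh. A hedged example (the flow-control property accepts none, tx, rx and tx-rx in current releases; check your version's help output):

tmsh modify net interface 1.1 flow-control tx-rx
tmsh list net interface 1.1 flow-control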

VLANs
By now you should already be familiar with the concept of VLANs (Virtual Local Area Networks) as this is covered in the F5 Application Delivery Fundamentals 101 Exam. Just as a quick reminder, a VLAN is a logical segmentation of local area networks (LANs) where all hosts that reside in that particular VLAN should operate in the same IP address space. VLANs have the following benefits:

▪ They reduce the size of broadcast domains, which increases network performance.
▪ They enhance security by, for instance, placing high-security hosts in a segmented network space where they can transmit their sensitive data.
▪ They give the ability to add hosts residing in different physical locations to the same logical network.

When the BIG-IP system needs to communicate between VLANs there is no need to add physical routers; the BIG-IP can handle this on its own. The next step when configuring the system interfaces on the BIG-IP system is to assign the interfaces to one or more VLANs. When going through the Initial Setup Wizard you created the external and internal VLANs, to which you assigned the interfaces 1.1 and 1.2 respectively.

Assigning Interfaces to VLANs
For every VLAN that you create you will need to assign one or more interfaces. By assigning an interface to a particular VLAN, you will indirectly control which hosts the BIG-IP system can communicate with. To present an example, in our lab exercise we added interface 1.1 to the external VLAN. When we create a virtual server in the same address space as the external VLAN, it will indirectly be associated with that VLAN, meaning that when clients try to access the virtual server on its IP address, the traffic will arrive on the external VLAN and thus on interface 1.1. The way an interface is assigned to a VLAN determines which method is used to send and receive traffic. These methods are either Port-based Access or Tag-based Access, which we cover in greater detail in the following sections.



Port-based Access Method
When the Port-based Access Method is used, the BIG-IP system accepts frames based upon the fact that they were received on an interface that is a member of that particular VLAN. As mentioned earlier, this is defined by the way the interface is assigned to the VLAN. With the Port-based Access Method, the interfaces are added to the VLAN as Untagged. This means that when the BIG-IP sends out frames through the untagged interfaces they will not contain a tag header. This limits the interface to operating in that particular VLAN only. If you need to assign interfaces to multiple VLANs (thus sending and receiving traffic to/from multiple VLANs) you will need to assign them as Tagged interfaces.

Tag-based Access Method
When assigning interfaces to VLANs as Tagged, a tag header is added to the frames that identifies which VLAN the traffic belongs to. Using this method, the BIG-IP changes its behavior and starts evaluating the tag header in the frames, accepting traffic based on this. When the BIG-IP sends out frames on the particular interface it adds the tag header in order for the receiving node to receive and accept the frames. The diagram below shows the difference between the two methods:



Creating and Managing VLANs
When you create a VLAN you will give it a name and a VLAN Tag (identifier). If you leave the VLAN Tag blank, the BIG-IP system will automatically generate a tag for you.

To create a VLAN, use the following instructions:

1. Open up a browser session to https://192.168.1.245.
2. Log in to the BIG-IP system using an account with the correct privileges.
3. Navigate to Network > VLANs > VLAN List and click Create.
4. On the New VLAN page, in the Name box, enter a name for the VLAN.
5. Under the Resources area, select which Interface you would like to add along with the Tagging (Tagged, Untagged) and click Add. Add more interfaces to the VLAN if you need to.
6. Otherwise you can simply use the default settings and click Finished.
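For reference, VLANs can also be created from tmsh, and the difference between untagged (port-based) and tagged (tag-based) membership is visible directly in the syntax. A sketch using example names and tags only:

tmsh create net vlan external tag 4093 interfaces add { 1.1 { untagged } }
tmsh create net vlan dmz tag 50 interfaces add { 1.2 { tagged } }
tmsh create net vlan dmz2 tag 60 interfaces add { 1.2 { tagged } }

Here interface 1.1 is an untagged member of a single VLAN, while interface 1.2 carries two VLANs as a tagged member, which is only possible because its frames carry the 802.1Q tag header.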

VLAN Groups
A VLAN group is a logical container for two or more VLANs. When a client and a server reside on the same network (address space), some problems might arise with the flow of communication. If you do not have SNAT enabled, the server will send the return traffic directly back to the client. When the server breaks the flow and bypasses the BIG-IP, it will inevitably break the communication as well. VLAN groups help to solve this problem.



When you create a VLAN group, the two or more existing VLANs will become child VLANs within that VLAN group. The VLAN groups are intended for load balancing traffic in a layer 2 network when you would like to minimise the reconfiguration of hosts on that particular network.

Transparency Mode
The Transparency Mode setting specifies how the BIG-IP system forwards a message to a host in a VLAN. The BIG-IP is capable of processing traffic both on layer 2 and layer 3. The default setting is translucent, which is a combination of layer 2 and layer 3. In the following table you can find the modes that are currently available:

Name         Description
Translucent  Layer 2 forwarding with a locally-unique bit.
Transparent  Layer 2 forwarding with the remote system's original MAC address preserved across VLANs.
Opaque       Layer 3 forwarding with proxy ARP.

Bridge All Traffic
If you enable the Bridge All Traffic setting on the VLAN group, the VLAN group will forward all non-IP traffic. This setting is disabled by default; keep in mind that the BIG-IP already bridges IP traffic.

Bridge in Standby
This setting is designed for deployments where you configure a VLAN group on BIG-IP systems configured in a High-Availability setup. Enabling this setting ensures that the standby unit can forward packets even though it is in standby mode. This setting only applies to non-IP and non-ARP frames such as Bridge Protocol Data Units (BPDUs). This setting will cause strange behaviors when used on more than one member of the device group. The setting is only intended for configurations where the VLAN group resides on only one BIG-IP system.

Creating a VLAN Group
In order to create a VLAN group, use the following instructions:

1. Open up a browser session to https://192.168.1.245.
2. Log in to the BIG-IP system using an account with the correct privileges.
3. Navigate to Network > VLANs > VLAN Groups and click Create.
4. On the New VLAN Group page, in the Name box, enter a name for the VLAN Group.
5. Under the Configuration area, select which VLANs you would like to add from the Available list.
6. Select which Transparency Mode you would like to use.
7. When done, click Finished.

Configuring VLAN groups is not an easy task and can bring the entire network down if done wrongly. In the end you will most likely have a very complex L2 infrastructure that relies heavily on STP (Spanning Tree Protocol) not to create any loops in the network.
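A VLAN group can also be created in tmsh. A hedged sketch (vg_servers is an example name; the mode property corresponds to the Transparency Mode setting, so verify the exact property names against your version's tmsh help):

tmsh create net vlan-group vg_servers members add { external internal } mode translucent
tmsh list net vlan-group vg_servers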



Associating a VLAN/VLAN Group With a Self-IP Address
In order for communication to work you will have to assign an address space in which the VLAN will operate. To do this you assign a self-IP address; this term has been discussed previously in this book. When adding a self-IP address to a VLAN it should represent the same address space as the hosts currently present in that VLAN. For instance, in our lab environment, the web server has the IP address 172.16.100.1/16. This means that if we want to assign the same address space to the internal VLAN we can, for instance, assign it the self-IP address 172.16.100.31/16, as we actually did in our lab exercises.

Creating a Self-IP Address
In order to create a self-IP address, use the following instructions:

1. Open up a browser session to https://192.168.1.245.
2. Log in to the BIG-IP system using an account with the correct privileges.
3. Navigate to Network > Self IPs and click Create.
4. On the New Self IP page, in the Name box, enter a name for the Self IP.
5. In the IP Address box, enter the IP address you would like to assign.
6. In the Netmask box, enter the netmask for the IP address.
7. In the VLAN/Tunnel drop-down list, select the VLAN you would like to assign the Self IP to.
8. When done, click Finished.

Trunks
If you read the F5 Application Delivery Fundamentals 101 Study Guide, you have already come into contact with the concept of trunks. A trunk is a logical grouping of interfaces that creates one single interface. The technology is very common and used by many vendors, although under different names such as link aggregation, teaming or EtherChannel. The benefits of trunking (aggregating) interfaces are both to increase the bandwidth and to create redundancy between interfaces. To give you an example, if you have four Fast Ethernet interfaces of 100 Mbps, creating a trunk with these interfaces will give you 400 Mbps. If one interface were to fail, the only consequence would be that the bandwidth would go from 400 Mbps to 300 Mbps. A trunk can hold a maximum of 8 interfaces and you usually aggregate interfaces in powers of two, meaning you would aggregate either 2, 4 or 8 interfaces. Generally, you form trunks between the BIG-IP device and an adjacent switch and use VLANs in order to segment the traffic being sent through the trunk. When two systems use trunks to communicate with each other they are known as peer systems.



Once the trunk is created, it can in turn be assigned to one or more VLANs, just like an interface.

How Trunks Work
You are probably thinking: how do the devices ensure that frames are never sent out of order or even duplicated? The BIG-IP system solves this by using what is known as the Frame Distribution Hash, which is used to determine which interface it should use to forward traffic. The Frame Distribution Hash creates a hash value based upon different values such as MAC or IP address. When the hash value has been calculated, the system will send each frame matching that hash value over the same member link. You can base the hash on the following:

▪ Source/Destination MAC address - The BIG-IP system bases the hash on both source and destination MAC address.
▪ Destination MAC address - The BIG-IP system bases the hash on only the destination MAC address.
▪ Source/Destination IP address - The BIG-IP system bases the hash on both source and destination IP address.

Link Aggregation Control Protocol (LACP)
We will not cover this protocol in depth as knowledge of how it works is already expected and covered in the F5 ADF 101 Study Guide. However, when creating a trunk, you will have the option of adding this feature to the trunk. As you should already be aware, LACP is an IEEE standard protocol that detects error conditions on member links and can redistribute traffic to other member links in order to prevent loss of traffic when a link fails. The behavior of LACP is customisable; for instance, you can choose how LACP should communicate its control messages from the BIG-IP system to the peer system.



Creating a Trunk
In order to create a Trunk, use the following instructions:

1. Open up a browser session to https://192.168.1.245.
2. Log in to the BIG-IP system using an account with the correct privileges.
3. Navigate to Network > Trunks > Trunk List and click Create.
4. On the Trunks: Trunk List page, in the Name box, enter a name for the Trunk.
5. Under the Interfaces area, select which Interfaces you would like to add from the Available list.
6. If you want to enable LACP, click its checkbox.
7. If you enabled LACP, select which mode you want from the drop-down list.
8. If you enabled LACP, specify the timeout (the rate at which the system sends LACP control messages). The default value is Long.
9. When done, click Finished.
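A trunk can also be created from tmsh. This is a hedged sketch, assuming a trunk named uplink_trunk over interfaces 1.1 and 1.2 with LACP in active mode; verify the property names against your TMOS version:

create /net trunk uplink_trunk interfaces add { 1.1 1.2 } lacp enabled lacp-mode active lacp-timeout long
save /sys config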

Troubleshooting Network Issues
Network issues can be a pain to troubleshoot as there are many factors that can cause the problem. It can be a faulty interface on the BIG-IP device or an adjacent switch. It can be caused by a misconfiguration or even by connecting the wrong cables to the network devices. I have seen large companies being completely knocked out by a loop in the network that was caused by an engineer connecting the cables incorrectly. The companies in question did not have Spanning Tree Protocol (STP) enabled, which would have contained the problem so that the end-users would probably not even have noticed it. Spanning Tree Protocol (STP) is a protocol designed to prevent network loops from being formed when switches and bridges are connected via multiple paths. It operates by exchanging Bridge Protocol Data Unit (BPDU) messages between the bridges and switches in order to detect loops. Once a loop is detected, it is removed by shutting down the bridge's/switch's interface causing the loop. Network issues can cause severe damage to your network while sometimes being hard to discover, as the symptoms are very similar to other problems that you see in upper layers of the OSI reference model. Some symptoms include high latency, slow responses and network time-outs.

Network Statistics
A great tool for troubleshooting network issues is the statistics page. It is located in the WebGUI under Statistics > Module Statistics > Network. On this page, you can view the statistics of:

▪ Interfaces
▪ Packet Filters
▪ Rate Classes
▪ Trunks

The information displayed on each page can vary depending on which type you choose. When viewing the Interfaces statistics, you will be able to see the number of bits and packets going in and out, the number of multicast packets, and the number of errors and drops. It will also display whether there are any collisions on the interface.



It is also possible to view the statistics in the CLI using the following commands:

▪ tmsh show net interface all-properties - (BIG-IP 10.2.0 - 13.x)
▪ bigpipe interface show all - (BIG-IP 9.x - 10.1.0)

In the following picture, you can see the output from the tmsh command:
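If you are only interested in one interface, you can narrow the output down to it. Interface 1.1 below is simply a placeholder for whichever interface you are troubleshooting:

tmsh show net interface 1.1 all-properties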

Troubleshooting Packet Drops
As a BIG-IP administrator it is important to know the causes of packet drops, as some are considered expected behavior while others are an indication of an on-going problem. For instance, if a BIG-IP interface receives frames containing the wrong VLAN ID, the default behavior is to drop them. However, packet drops might also indicate problems with the configuration or with the BIG-IP system. For example:

▪ The BIG-IP system is a default deny box, meaning that if it receives traffic matching a virtual server IP address but not the service port, the packet will be discarded.
▪ The BIG-IP will also drop packets that arrive with Frame Check Sequence (FCS) errors. A high number of FCS errors might be caused by a misconfiguration, for instance a duplex mismatch.
▪ Packets might also be dropped because a connection limit has been reached or because the BIG-IP is running low on memory and needs to save system resources.

Packets may also be dropped at various stages of processing by the BIG-IP system. When traffic arrives with the wrong VLAN ID, it is the switchboard that drops it and the system increments the drop counter on the interface. When packets arrive with the correct virtual server IP address but with the wrong service port, the switchboard accepts the traffic but the TMM discards it.

Troubleshooting Interface Packet Drops
When troubleshooting interface packet drops, it is beneficial to understand how the switchboard handles its packet flow. It handles its packet flow using the following sequence:

▪ Ingress - The ingress logic describes how the packet is handled when it is received by the device. It will either discard or accept the packet and, if it accepts it, it will determine which egress port the packet should be sent out on and pass it on to the Memory Management Unit (MMU).
▪ MMU - The Memory Management Unit is responsible for internal/external packet buffering for the switchboard and schedules packets for transmission.
▪ Egress - The egress logic will request scheduled packets from the MMU and transmit them to the egress port.

Ingress Drops
When the switchboard receives packets, it will have to determine whether it should buffer or drop the packet. It might also drop a different packet in order to make room for the one it just received. The following list describes known causes of Ingress packet drops:

▪ Invalid VLAN ID - The interface drop counter will increase whenever the interface receives a packet with an invalid VLAN ID. Possible cause: frames contain a VLAN tag that is not configured on the port, caused by a misconfiguration on either the BIG-IP system or the adjacent switch.
▪ FCS Error - Packets that arrive with a Frame Check Sequence (FCS) error will be dropped by the interface. Possible cause: either a duplex mismatch (configuration issue) or corrupt frames.
▪ Port Flooding - The BIG-IP switchboard will drop frames if the dynamic forwarding database indicates that the egress port for a frame is the same as the ingress port. To give you an example, if the switchboard receives frames on port 1.1 destined for MAC address f5:c1:02:34:10:5d and the switchboard has already learned that this MAC address resides on port 1.1, it will drop the frame. Possible cause: this is likely caused by an upstream switch that needs to learn this MAC address and subsequently floods the frame out on all ports.
▪ Unknown Packet - This occurs when the ingress interface receives traffic of an unknown traffic type. Possible cause: a network device is sending unknown packets.

When connecting a Cisco switch to a BIG-IP device, you will often see a large number of Ingress drops, as Cisco switches send out CDP (Cisco Discovery Protocol) frames. CDP is primarily used to obtain protocol addresses of neighboring devices and discover the platform of those devices. The BIG-IP device does not understand CDP, so it should be disabled on the connected switch interfaces.



Egress Drops
Egress drops are not as common as ingress drops, which makes the list a bit shorter. The following describes a common reason why the switchboard drops egress packets:

▪ Buffer Exhaustion - The egress ports have a Head-Of-Line register that defines the total buffer space the port can use before it enters what is known as the HOL condition. When it is in this state, the egress port starts informing the ingress ports to drop packets destined for the egress port in order to lighten the load on the port. When this happens, the drop counters start to increase in the statistics. Possible cause: the link is overloaded.

Troubleshooting TMM Packet Drops
If the switchboard accepts the packet, it will be forwarded to TMM for further processing. When the packet is passed on to TMM, it will have to decide whether to accept or drop the packet. You can view the statistics of TMM in the CLI by using the following commands:

▪ tmsh show sys ip-stat or tmsh show sys tmm-traffic - (BIG-IP 10.2.0 - 13.x)
▪ bigpipe ip or bigpipe tmm - (BIG-IP 9.x - 10.1.0)

In the following picture you can see the output from tmsh:



The following list describes known causes of TMM packet drops:

▪ Virtual server match - The incoming packet matches the virtual server IP address but not the service port. This will cause the BIG-IP system to drop the packet. Possible cause: misconfiguration on either the client or the BIG-IP.
▪ Checksum - The packet will be dropped if it contains an invalid L3 checksum. Possible cause: corrupt packet.
▪ IP version number - The packet will be dropped if it contains an incorrect or invalid IP version number. Possible cause: corrupt packet.
▪ IP Option - This drop depends on how the TM.AcceptIPOptions BigDB key is set. If it is disabled, the system will drop packets that contain an IP option. Possible cause: client performing debug or security testing.
▪ Protocol - The packet will be dropped if it contains an invalid L3 protocol field. Possible cause: corrupt packet.
▪ Length - The packet will be dropped if it contains an invalid L3 length field. Possible cause: corrupt packet.
▪ Connection limit - These packets are dropped because the connection limit of a virtual server has been reached. Possible cause: a connection limit is configured on the virtual server.
▪ License - When the license of the BIG-IP is not activated or has expired, it will drop new packets that it receives. Possible cause: expired license or license not activated.
▪ Maintenance - When the BIG-IP is in maintenance mode it will drop new packets that it receives. Possible cause: Maintenance Mode is enabled.

Maintenance mode is a feature that enables the BIG-IP administrator to prepare the system for maintenance. When enabled, the BIG-IP system stops accepting new connections and slowly completes the ones that are currently active.

Known Issues
When troubleshooting packet drops you will need to take into consideration the known issues affecting your current version. Therefore, review the release notes, AskF5 or even DevCentral for information that might explain the reason for the packet drops.

Chapter Summary

▪ EUD is a software program (part of TMOS) used to perform hardware tests on BIG-IP physical appliances (including VIPRION blades and Herculon devices). You would typically use it to verify a suspected hardware issue at the request of F5 Support or an F5 Support Partner.
▪ EUD is accessible via the serial console only, on system boot. You’ll need to establish a serial console connection to the device in order to access EUD (and possibly the boot menu).
▪ When troubleshooting hardware issues on your BIG-IP appliance, the LCD panel can also be used to determine what is happening with the system. When you are troubleshooting and you cannot find any issues with the configuration or the software, the only option you might have left is to actually perform a failover. If a failover solves the issue, then the problem is most likely related to the hardware.
▪ In order for a failover to be transparent to the clients currently connected to the active unit, the state of the active connections needs to be shared with the standby unit. This is known as a stateful failover.
▪ The System Interfaces are used to connect the BIG-IP system to other devices in the network such as next-hop routers, layer 2 devices and end-servers.
▪ There are currently two types of system interfaces: the Management Interface and the TMM Switch Interfaces.
▪ A VLAN is a logical segmentation of local area networks (LANs) where all hosts that reside in that particular VLAN should operate in the same IP address space.
▪ A Trunk is a logical grouping of interfaces in order to create one single interface. The technology is very common and used by many vendors, although they have different names for it such as link aggregation, teaming or EtherChannel.



Chapter Review

1. You suspect that you are experiencing problems with the power supply of your BIG-IP system. Which log file should you review in order to find the log entries concerning this problem?
a. /var/log/messages
b. /var/log/ltm
c. /var/log/gtm
d. /var/log/kern.log

2. After you have performed an EUD, where will the log file be stored?
a. /tmp/logs/eud.log
b. /shared/eud/eud.log
c. /var/log/eud.log
d. /shared/log/eud.log

3. How will the BIG-IP system handle traffic that matches a virtual server IP address, but not the service port?
a. Discard the packet.
b. Reject the packet.
c. Try to rematch to a different IP address and service port.
d. Accept the packet.

4. What tmsh command do you enter in order to trigger a failover?
a. tmsh run /sys failover standby
b. tmsh exec /sys failover standby
c. tmsh run /net failover standby
d. tmsh exec /net failover standby





Chapter Review: Answers

1. You suspect that you are experiencing problems with the power supply of your BIG-IP system. Which log file should you review in order to find the log entries concerning this problem?
a. /var/log/messages
b. /var/log/ltm
c. /var/log/gtm
d. /var/log/kern.log

The correct answer is: b

Log files can be an excellent source of information when troubleshooting any issue. Where hardware is concerned, the /var/log/ltm log file is likely to be the most useful.

2. After you have performed an EUD, where will the log file be stored?
a. /tmp/logs/eud.log
b. /shared/eud/eud.log
c. /var/log/eud.log
d. /shared/log/eud.log

The correct answer is: d

The test report log file will be written to /shared/log/eud.log and can be viewed at some later point when EUD is not running.

3. How will the BIG-IP system handle traffic that matches a virtual server IP address, but not the service port?
a. Discard the packet.
b. Reject the packet.
c. Try to rematch to a different IP address and service port.
d. Accept the packet.

The correct answer is: a

The BIG-IP system is a default deny box, meaning that if it receives traffic matching a virtual server IP address but not the service port, the packet will be discarded.

4. What tmsh command do you enter in order to trigger a failover?
a. tmsh run /sys failover standby
b. tmsh exec /sys failover standby
c. tmsh run /net failover standby
d. tmsh exec /net failover standby

The correct answer is: a



17. Troubleshooting Device Management Connectivity
When you have installed your BIG-IP system and you are ready to log on and start configuring it, you may find that gaining access to the management interface is not always that easy, regardless of whether you are using HTTPS or SSH. There are multiple scenarios that may hinder access to the management interface. We cannot cover them all, but we’ll present a few of them and discuss the most common issues that you may face when accessing the management interface.

Get to Know Your Environment
When setting up your BIG-IP environment, there are some questions you need to ask yourself before starting, such as:

▪ How will traffic flow through the BIG-IP device?
▪ Will we divide all the VLANs into separate networks and firewall all the traffic?
▪ Will we split the management traffic into a separate subnet and firewall the traffic?

Before you can even start configuring the BIG-IP system, you need to have a design of the end result so that you know what configuration settings you need to add. Again, you can configure your BIG-IP environment in so many ways that we cannot go through them all. On the exam, you will be presented with exhibits of different environments and you will have to understand what needs to be added and/or removed for connectivity to work. This is not something you can simply study; it requires real-life experience. The reason why the first exam covers TCP/IP and the OSI reference model is exactly for this purpose. For you to be able to communicate with the BIG-IP system on the management interface, you will need to be on the same subnet/VLAN as the configured management IP. If you have a firewall between you and the BIG-IP management interface, you will need to have the correct routing and the appropriate firewall rules to allow the traffic. In this case, HTTPS (port 443) and SSH (port 22).



Verify the Configuration
Once you fully understand the design of the environment, start by verifying the configuration. During the initial setup, you get to configure the management IP address. There are several ways to do this, which we covered in the chapter Introduction to LTM - Initial Access and Installation. You can do it using the CLI of the BIG-IP system with a console cable and the command config, or using the WebGUI when the client is on the same subnet as the BIG-IP and accessing the IP address: https://192.168.1.245

Verify that this IP address conforms to your design. Next, verify the IP address of your client. The IP address that you assign to your client will be different depending on the design of the environment. Is your client located on the exact same subnet as the BIG-IP device? Then verify that the IP address of the client corresponds to the same subnet as the BIG-IP. If you have the client in a completely different subnet and firewall the traffic between the client and the BIG-IP device, make sure that the client is in the correct subnet of the firewall and configured with the correct default gateway. You should also make sure that the firewall has a configured IP address in the management network in which the BIG-IP device resides, and a route to that network. Lastly, make sure that the firewall actually permits HTTPS and SSH from the client to the BIG-IP device.

As mentioned earlier, the configuration will be different and is completely dependent on how the design looks. Make sure that the configuration corresponds to the design. The exam focuses on the student understanding that there might be devices on the way to the BIG-IP device that may be blocking the traffic. Analyse the different exhibits and determine how the traffic will flow.

Tools Available for Troubleshooting
If you have verified the configuration and still cannot access the management interface, there are a few tools available to help troubleshoot the issue.

Ping
Ping is one of the most famous troubleshooting tools available and you have probably used it a few times. Ping is used to verify whether a particular IP host is reachable on the network, but it also measures the round-trip time. If you get a reply you can be sure that the routing/switching in the network functions the way it should. Ping operates using the ICMP protocol: to verify if a host is reachable, it sends an echo-request to the specified host and waits to receive an echo-reply back. If it receives one, the host is reachable. If you are having trouble accessing the management interface, start by pinging the management IP address. If there is a firewall between the client and the BIG-IP device, make sure that the traffic is allowed.



Ping is available on most systems and on Windows you can access it by simply starting cmd.exe and typing the command ping followed by the IP address of the host.

ping 192.168.1.245

Traceroute
Another popular tool is traceroute, which is used to diagnose the path (route) and measure the delays between each hop on the way to an IP host. Traceroute also operates using the ICMP protocol and echo-request/echo-reply packets. Together with the round-trip times, each hop on the way to the final destination is recorded, and the sum of the round-trip times indicates the total time it took for the client to reach the host. Traceroute sends three (3) packets to each hop on the way and if one of them does not respond (it might be blocked on that particular host), that hop will simply time out and traceroute will continue sending the next three requests to the next hop address. Traceroute is available on most systems but it may have a different name. On UNIX/Linux based operating systems the command is called traceroute but on Windows it is called tracert. You can access tracert by simply starting cmd.exe and typing the command tracert followed by the IP address of the host.

tracert 192.168.1.245

By default, tracert will perform a DNS lookup for each host on the way. You can turn this off by adding the switch -d, like the following command: tracert -d 192.168.1.245.

Telnet
Telnet is not originally a troubleshooting tool. It was designed to provide text-based communication with different network devices and has since been replaced by SSH. This is because telnet does not provide encryption, and if you were to sniff (tcpdump) the communication you would be able to obtain sensitive information. The great thing about telnet is that it operates at the application level, which gives us the ability to verify whether a particular service is running. When troubleshooting access to the BIG-IP device, you can try to telnet to either port 443 (for the WebGUI) or port 22 (for SSH). If telnet is able to establish a connection it will present a blank screen and eventually time out. If telnet is unable to establish a connection, you will receive a message stating that the connection is refused or that the connection has failed/timed out. When you receive the message connection has failed/timed out, it indicates that the request never reached the end-system. This could be caused by a firewall blocking the traffic between you and the end-system, or it could be caused by a routing issue. This is displayed in the following output:

telnet 192.168.1.32 22
Connecting To 192.168.1.32…
Could not open connection to the host, on port 22: Connect failed



The connection refused message indicates that there is no listener on that service port. This means that the service on that particular device is not running, thus it is refusing the traffic. This is displayed in the following output:

telnet 10.10.15.10 22
Trying 10.10.15.10…
telnet: connect to address 10.10.15.10: Connection refused

Telnet is available on most systems, but on Windows you will first have to enable it. How you do this may differ between different versions of Windows. Once the Telnet Client has been enabled on your Windows machine, you can use it by starting cmd.exe and typing the command telnet followed by the IP address and the port.

telnet [IP Address] [Port]

Example:

telnet 192.168.1.245 22

cURL
cURL is also a very efficient tool when troubleshooting your application environment. cURL is a command line tool used to generate application traffic and it is usually pre-installed on many Linux distributions, but it can also be installed on Windows systems. cURL supports the following protocols: DICT, FILE, FTP, FTPS, GOPHER, HTTP, HTTPS, IMAP, IMAPS, LDAP, LDAPS, POP3, POP3S, RTMP, RTSP, SCP, SFTP, SMB, SMBS, SMTP, SMTPS, TELNET and TFTP. cURL is also installed by default on the BIG-IP system, so it is for instance possible to generate client traffic from the BIG-IP system towards the pool members you are load balancing. One of the purposes I have used cURL for is generating specific HTTP GET requests containing specific HTTP headers to verify if an iRule is working correctly. It can also be used to troubleshoot HTTP monitors by generating the same GET request as the HTTP monitor to see what kind of data it returns. The following commands are some examples of what you can do:

▪ curl [URL] - Downloads a single file. Example: curl www.example.com
▪ curl -s -D - [URL] -o /dev/null - Sends an HTTP GET request to the specified URL and returns only the HTTP response headers. Example: curl -s -D - www.example.com -o /dev/null
▪ curl -o [filename] [URL] - Saves the result of the command to the specified file. Example: curl -o mygettext.html www.example.com
▪ curl -L [URL] - Causes cURL to follow HTTP Location headers when the client is being redirected. By default cURL will not follow HTTP redirects. Example: curl -L http://www.example.com
▪ curl -u username:password [URL] - Sends user credentials to the specified URL. Example: curl -u abc:123 http://www.example.com
▪ curl --header "X-Forwarded-For: [IP]" [URL] - Inserts an X-Forwarded-For HTTP header into the HTTP GET request. Example: curl --header "X-Forwarded-For: 192.168.0.2" http://example.com

In the following picture, you can see what output from curl might look like:

There are many more useful functions you can perform using cURL and you can read more about the command and its different switches by reading the manual pages of the command.
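As a hedged example of the iRule and monitor use cases mentioned above, the following request sends a GET to a virtual server while supplying a specific Host header and ignoring certificate warnings; the IP address and host name are hypothetical placeholders:

curl -vk --header "Host: www.example.com" https://10.10.10.100/index.html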

Verifying the Processes on the BIG-IP Device
You may run into the scenario where you have access to the WebGUI but not SSH, or vice versa. If you have access to either interface, you can use it to troubleshoot why the other service is not running.

Verifying That the sshd Process is Running Using the WebGUI
If you are having issues with accessing the BIG-IP device over SSH, but you have access to the WebGUI, then you can verify that the sshd process is running on the device. You can do this by doing the following:

1. Log on to the BIG-IP device using the WebGUI.
2. In the Main tab, go to System > Services.
3. In the list, go to sshd and verify that it is running.
4. If it states openssh-daemon Stopped, then check the sshd process and click Start.
5. Try to access the BIG-IP device using SSH.

Verifying That the Web Processes are Running Using SSH
There are actually two processes needed in order for the WebGUI to fully function: the httpd and tomcat processes. Httpd is the actual web service running on the BIG-IP device and tomcat is the process that provides application server functions for the WebGUI. The tomcat process is an open-source implementation of Java Servlet and JavaServer Pages technology. When the tomcat process is not running you will be able to access the web server of the BIG-IP device, but it will only display the following:



If the tomcat process stops while you are logged on to the WebGUI, you will lose the status of the device and each tab will only present the Configuration Utility restarting… message. This is displayed in the following image:

In order to verify that both the httpd and tomcat processes are running, use the following instructions:

1. Log on to the BIG-IP system using SSH and enter the bash shell. This is indicated by the command prompt:

[root@bigip02:Licensed:Active:In Sync] ~ #

2. Issue the following command to verify if the httpd process is running:

bigstart status httpd

If running, the command should result in output similar to the following:

httpd (pid 32491) is running...

3. Issue the following command to verify if the tomcat process is running:

bigstart status tomcat

If running, the command should result in output similar to the following:

tomcat run (pid 1588) 3 seconds, 4 starts

4. [Optional] If one or both processes are not running, start them by issuing the following command:

bigstart start [process name]

For example:

bigstart start tomcat
bigstart start httpd

The best practice is to administer the BIG-IP device via the management interface. However, in some scenarios you will have to enable management access using a Self-IP address of the BIG-IP device. When doing so, make sure that you completely lock down the access using packet filters and port lockdown. These features are described later in this chapter.

Port Lockdown
Previously in this book we have talked about Self-IP addresses, and if you did the lab section of the initial chapter you have also adjusted the Port Lockdown settings already. The Port Lockdown setting relates to Self-IP addresses, which are used for the TMM switch interfaces, not the HMS management interface. This feature allows you to lock down the ports and services that the Self-IP addresses accept, thus securing those interfaces from potentially unwanted traffic. This restriction does not apply to client traffic but rather to management traffic. The recommendation is to always use the management port when administrating the BIG-IP system, but this may not always be possible and you may need to allow management traffic on the Self-IP addresses instead. If you have deployed a BIG-IP DNS (formerly GTM) you will also have to allow iQuery (port 4353), otherwise it will not be able to synchronise virtual servers (Virtual Server Discovery) or perform health checks towards the BIG-IP devices which you have added to the BIG-IP DNS.



The Port Lockdown setting comes in five pre-defined settings which are:

▪ Allow Default – Only allow inbound connections using the protocols and local ports specified in the default list. This list will be different depending on which version of TMOS you are using. The different lists are detailed later in this chapter.
▪ Allow All – Allow any inbound connection, regardless of protocol or port.
▪ Allow None (default) – Do not allow any connections directly to a Self-IP. However, ICMP traffic is always allowed and, if the BIG-IP systems are configured in an HA pair, the ports that are listed in the exception list are also allowed.
▪ Allow Custom – Only allow inbound connections using the protocols and local ports specified in a custom list. However, ICMP traffic is always allowed and, if the BIG-IP systems are configured in an HA pair, the ports that are listed in the exception list are also allowed.
▪ Allow Custom (Include Default) – Only allow inbound connections based on the protocols and local ports specified in a custom list and the Allow Default list. However, ICMP traffic is always allowed and, if the BIG-IP systems are configured in an HA pair, the ports that are listed in the exception list are also allowed.

The Allow Default setting allows the following protocols (you can display this list from the CLI using the [tmsh] list net self-allow defaults command):

Allow Default List (v10.x - 11.x)
▪ OSPF, any port (N/A)
▪ TCP 4353 (iQuery)
▪ UDP 4353 (iQuery)
▪ TCP 443 (HTTPS)
▪ TCP 161 (SNMP)
▪ UDP 161 (SNMP)
▪ TCP 22 (SSH)
▪ TCP 53 (DNS)
▪ UDP 53 (DNS)
▪ UDP 520 (RIP)
▪ UDP 1026 (Network failover)

Allow Default List (v12.x - 13.x)
▪ IGMP, any port (N/A)
▪ OSPF, any port (N/A)
▪ PIM, any port (N/A)
▪ TCP 4353 (iQuery)
▪ UDP 4353 (iQuery)
▪ TCP 443 (HTTPS)
▪ TCP 161 (SNMP)
▪ UDP 161 (SNMP)
▪ TCP 22 (SSH)
▪ TCP 53 (DNS)
▪ UDP 53 (DNS)
▪ UDP 520 (RIP)
▪ UDP 1026 (Network failover)

But note that the list is stored in the configuration using port numbers, not service names:

net self-allow { defaults { ospf:any tcp:161 tcp:22 tcp:4353 tcp:443 tcp:53 udp:1026 udp:161 udp:4353 udp:520 udp:53 } }

Port Lockdown Exceptions
Even though you have configured the Port Lockdown setting to use Allow None or Allow Custom, there are still some ports and protocols that are allowed. The exception list will differ depending on the current TMOS version:



Port Lockdown Exception List

▪ TCP Mirroring Ports: When using an HA pair configuration, the system will automatically allow certain TCP ports for connection and persistence mirroring regardless of the Port Lockdown setting. The number of ports used for this traffic has changed over time. Starting from v11.4, the port numbers used increment by one for each new traffic group and channel that is created. How many ports are used differs between versions, as detailed in the following list:
o v11.0.x - v11.3.x – Port 1028
o v11.4.x - v11.5.x – Ports 1029-1043
o v11.6.x - 13.x – Ports 1029-1155
▪ iQuery Ports (v11.0 and above): When using an HA pair configuration, the BIG-IP systems will communicate with each other via the Centralised Management Infrastructure (CMI) using iQuery on TCP port 4353. They will do this regardless of the Port Lockdown settings.
▪ ICMP: ICMP traffic to the Self-IP addresses is not affected by the Port Lockdown settings and will be implicitly allowed in all cases.

Configuring Port Lockdown
You have already come in contact with this in previous labs, but in order to configure the Port Lockdown settings follow these steps:

1. Log into the BIG-IP system using the WebGUI.
2. Navigate to Network > Self IPs.
3. Click on the Self IP you wish to modify the Port Lockdown setting for.
4. From the Port Lockdown box, select the setting you would like. When selecting Custom you will need to specify additional settings such as TCP, UDP, Protocol and the ports (all, none or a specific port).
5. When finished, click Update.
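The same setting can be changed from tmsh. The sketch below is a hedged example that allows only HTTPS, SSH and iQuery on a self IP; the name selfip_internal is hypothetical and the list you need will depend on your own design:

modify /net self selfip_internal allow-service replace-all-with { tcp:443 tcp:22 tcp:4353 }
save /sys config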



Restricting Access to the Management Port
The function of Port Lockdown is to restrict what type of traffic is allowed on the self-IP addresses. This means it will only restrict traffic on the interfaces connected to TMM. If you want to restrict access to the Management Port there are a few extra steps that you need to perform. First off, you can restrict SSH (CLI) access to the device by simply going to System > Platform and changing the SSH IP Allow setting from All Addresses to Specify Range. Here you can define which IP address or range of IP addresses can access the Management Port over SSH. If you would like to restrict access to the WebGUI you need to do this from the CLI of the BIG-IP system. In order to restrict access to the WebGUI over HTTPS, use the following instructions:

1. Log on to the BIG-IP system using the CLI.
2. Enter tmsh by typing the following command:

tmsh

3. To add an IP address or range of IP addresses to the current allowed list, enter the following command:

modify /sys httpd allow add { [IP address or IP range] }

It can be written in the following manners:

modify /sys httpd allow add { 192.168.1.0/255.255.255.0 }
modify /sys httpd allow add { 192.168.1.10 192.168.1.11 }

4. Verify that the addresses have been added to the allow list by entering the following command:

list /sys httpd allow

5. Save the configuration by entering the following command:

save /sys config

For more information see the AskF5 article K13309: Restricting access to the Configuration utility by source IP address (11.x - 13.x).
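SSH access to the management port can be restricted from tmsh in a similar way. This is a hedged sketch; the address below is a placeholder and the accepted address formats can differ between TMOS versions, so check AskF5 for the article covering SSH access restrictions on your version:

modify /sys sshd allow replace-all-with { 192.168.1.0/255.255.255.0 }
save /sys config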

Packet Filters
Packet Filters is a feature which lets you specify whether a BIG-IP system interface should accept or reject certain traffic based on criteria that you specify within the Packet Filter. To simplify it a bit, think of it as an L4 firewall or an Access Control List (ACL). The packet filter will only be enforced on incoming traffic and it applies only to the BIG-IP TMM interfaces. The criteria you can base the packet filter on include:

▪ The source IP address of the packet.
▪ The destination IP address of the packet.
▪ The destination port of the packet.



The criteria you set will be saved within an expression and this expression will be automatically created when you save your packet filter. Therefore, if you wish to change it later on, you will have to change the expression. It uses the same expressions as tcpdump, which is covered later in this book.
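Because the expressions use tcpdump-style syntax, a filter that should only match SSH traffic from a single management host could, as a hedged illustration, look something like the following (the address is a placeholder):

( src host 192.168.1.10 ) and ( tcp dst port 22 )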



Keep in mind that packet filters are not the same as iRules.

When the rules are created they will be added to a rules list in a specific order. The rules will be evaluated and processed based on this order. That is why, when you create the packet filter, you will be able to select where in the list it should be added. You will only be able to select First or Last, so it might not end up exactly where you want in the list. However, using Change Order you will be able to freely move the packet filters as you wish. Packet Filtering is disabled by default, but when you enable it you will be presented with additional settings that you can configure. Two important ones are:

Unhandled Packet Action
This specifies how the BIG-IP system should handle traffic that does not match the Packet Filter rules: either Accept (allow all), Discard (deny all) or Reject (deny all). The last two will both deny the traffic, but their actions are quite different. When selecting Discard, the BIG-IP system will silently discard the packet. With Reject, the BIG-IP system will send an RST (reset) packet back to the client. For security reasons, it is best to select Discard in order to prevent malicious users from scanning your network and finding devices which they can attack. The setting you select here depends on which security model you are using. If you select Accept (allow all), then you should define packet filter rules that disallow traffic. In other words, this is the traffic you would like to block; the rest will be allowed. This is a Negative Security Model. If you select Discard or Reject (deny all), then you should define packet filter rules that allow traffic. Here you define exactly what type of traffic should be allowed and the rest is blocked. This is a Positive Security Model.



We covered Positive and Negative security models in the 101 Application Delivery Fundamentals Study Guide.

Send ICMP Error on Packet Reject
This setting is disabled by default, but if you enable it, it causes the BIG-IP system to send back an ICMP type 3 (destination unreachable) packet with code 13 (administratively prohibited) when an ingress packet is rejected. When you enable this, the BIG-IP system will share more information than it does by default, which from a security standpoint is not recommended, since it reveals that the service has been configured to be blocked by an administrator. When disabled, the system will send back an ICMP reject packet that is protocol dependent.

Exemptions
It is also possible to configure exemptions for traffic that should not be evaluated against the packet filter rules. This exemptions list will be evaluated before the packet filters and can therefore not be overridden by a packet filter rule. The criteria you can define here are:

▪ MAC Address
▪ IP Address
▪ VLANs



Creating Packet Filter Rules
Before you can create packet filter rules you will first have to enable packet filtering.

Enabling Packet Filtering
1. Log into the BIG-IP system using the WebGUI.
2. Navigate to Network > Packet Filters.
3. From the Packet Filtering list, select Enabled.
4. From the Unhandled Packet Action list, select the action you wish the BIG-IP to take when traffic is not matched by a packet filter rule.
5. When done, click Update.

Creating a Packet Filter Rule
1. Navigate to Network > Packet Filters.
2. Click on Rules.
3. Click on Create.
4. Name your packet filter rule.
5. From the Order list, select the order in which you wish to place the rule.
6. From the Action list, select the action you want.
7. From the VLAN list, select the VLAN you wish to apply the packet filter to.
8. In the Filter Expression section, select the filters you would like the traffic to match in order for the packet filter to be evaluated.
9. When done, click Finished.

Reordering Packet Filter Rules
Sometimes you need to reorder your packet filters in order for them to match the traffic as intended. To reorder a packet filter rule, use the following instructions:

1. Navigate to Network > Packet Filters.
2. Click on Rules.
3. Click on Change Order.
4. Select the packet filter rule you wish to move and click either Up, Down, First, or Last.
5. When done, click Finished.



Logging of Packet Filter Rules
Previously in this book we discussed log files. Packet Filtering rules have their own log file where the decision of a packet filter is logged. However, it will only log if packet filtering is enabled and if you have configured logging on the packet filtering rule. The log entries are written to the file /var/log/pktfilter. You can view this log either through the WebGUI by navigating to System > Logs > Packet Filter or through the CLI by running the command cat /var/log/pktfilter. The following picture shows output from the packet filter log in the CLI:

Due to the lack of space on the page, the above log output is just a snippet and not the complete output. The log output will contain the name of the packet filter rule, the action (decision), what VLAN the traffic arrived on, source/destination IP address and source/destination port.

Troubleshooting DNS Settings
A BIG-IP device has the ability to perform name resolution. In order for the BIG-IP device to connect to a system using an FQDN, you must either configure a DNS server or modify the hosts file on the device. This section will focus on the tools you can use to test and troubleshoot why you are unable to perform name resolution.

Verify the DNS Configuration
As mentioned earlier in this section, there are two sources from which the BIG-IP device can retrieve information to resolve FQDNs. These are the hosts file (located at /etc/hosts) and DNS. The BIG-IP will first try to find the record in the hosts file and, if it is not found, it will use DNS. To troubleshoot DNS, verify that the hosts file has not been altered by logging on to the BIG-IP device using the CLI and issuing the following command:

cat /etc/hosts

If the output does not contain any unexpected entries, you can be sure that the problem is not caused by the hosts file. After that you should verify that the DNS configuration is correct. To verify this, use the following instructions:

1. Log on to the BIG-IP WebGUI.
2. Go to System > Configuration > Device > DNS.
3. Under DNS Lookup Server List, verify that it contains the correct DNS servers for your environment.
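The same information is available from the CLI; the following command lists the name servers and search domains currently in the configuration:

tmsh list sys dns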



Tools Available for Troubleshooting DNS
In order to verify that the specified DNS servers are working as they should, you can simply send them a DNS query and see if they respond with the correct result. There are two tools for doing this, described in the following sections.

nslookup
Nslookup is a command-line tool that is present on both Windows operating systems and the BIG-IP device. In order to run it on your Windows machine you just need to launch cmd.exe and run the command nslookup, which puts you into the application. This is known as Interactive Mode. However, nslookup can also be used in non-interactive mode where you simply use the command nslookup followed by the FQDN or IP address you would like to look up. An example of this would be: nslookup www.cnn.com. The result will be the following:

[root@bigip02:Licensed:Active:In Sync] config # nslookup www.cnn.com
Server: 8.8.8.8
Address: 8.8.8.8#53
Non-authoritative answer:
www.cnn.com canonical name = turner.map.fastly.net.
Name: turner.map.fastly.net
Address: 23.235.43.73

You can see that the address we received was 23.235.43.73 and the DNS query was successful. This is very easy to use if you only have one FQDN that you wish to test. When entering nslookup in Interactive Mode you have more options you can use to troubleshoot DNS. You can specify which server you would like to query, what type of records you would like to view, and turn on debugging. Here is a small list of some parameters that you can use with the command:

▪ set type=[resource type] - Changes the resource record type. Some examples are: mx, ns, cname, ptr and soa.
▪ set debug - Turns debugging mode on.
▪ set nodebug - Turns debugging mode off.
▪ set d2 - Turns exhaustive debugging mode on. This debugging mode will list all fields of every packet.
▪ set nod2 - Turns exhaustive debugging mode off.
▪ set server [server IP] - Changes the default server to the value specified.
▪ set port [port number] - Changes the default TCP/UDP port to the value specified.
▪ set timeout=[value] - Changes the initial timeout value (in seconds) to wait for a reply to a request.
▪ set retry=[value] - Changes the initial number of retries the application will make.
▪ help - Displays a short summary of all nslookup subcommands.
▪ exit - Exits nslookup.

When the debug mode is turned on you will be presented with the following output:

[root@bigip02:Licensed:Active:In Sync] config # nslookup
set debug on
www.cnn.com
Server: 8.8.8.8
Address: 8.8.8.8#53

QUESTIONS:
www.cnn.com, type = A, class = IN
ANSWERS:
-> www.cnn.com canonical name = turner.map.fastly.net. ttl = 138
-> turner.map.fastly.net internet address = 185.31.17.73 ttl = 2
AUTHORITY RECORDS:
ADDITIONAL RECORDS:
Non-authoritative answer:
www.cnn.com canonical name = turner.map.fastly.net.
Name: turner.map.fastly.net
Address: 185.31.17.73

Common Error Messages
When you are troubleshooting DNS using the nslookup tool, there are several different error messages that you might receive. In the following list we have compiled the most common errors:

Timed Out: You will be presented with this error when the DNS server has not responded within the timeout period. You can troubleshoot this further by changing the timeout and retry settings using the following parameters: set timeout=[value] or set retry=[value].

No Response from Server: You might see this error if the specified host is not running any DNS services.

No Records: You will receive this response if the DNS service is running on the host and the FQDN is correct, but the DNS server cannot find any records for that resource type. You can change the resource type by issuing the command set type=[resource type].



Non-existent Domain: You will receive this message if the host name or FQDN does not exist.

Connection Refused/Network Is Unreachable: You will receive this message if the DNS query was unable to reach the DNS server. This can be caused by a firewall blocking the traffic or by the traffic being lost on its way to the DNS server, which can be caused by a misconfigured route entry.

Server Failure: You will receive this message if the DNS server has experienced an internal database issue and was unable to provide an answer to the DNS query.

Refused: You will receive this message if the DNS server refused to reply to the DNS query.

Format Error: You will receive this message if the DNS server received the DNS query successfully but was unable to understand the request because it was in a different format. If you receive this message, the problem is more likely with nslookup rather than the DNS server.

In order to run nslookup on the BIG-IP device you will have to be logged on to the Linux operating system, also known as the Host Management Subsystem (HMS), which is indicated by the command prompt:

[root@bigip02:Licensed:Active:In Sync] config #

dig
The application dig is also a command-line DNS name server query tool. The source code of dig is part of the larger BIND distribution, a DNS server developed at the University of California, Berkeley (UCB). BIND stands for Berkeley Internet Name Domain and is one of the most widely used DNS servers on the Internet. The simplest query you can do with dig is a single FQDN or IP address. Again, if we use www.cnn.com as an example we receive the following output:



[root@bigip02:Licensed:Active:In Sync] config # dig www.cnn.com

; <<>> DiG 9.9.5 <<>> www.cnn.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 1811
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;www.cnn.com. IN A

;; ANSWER SECTION:
www.cnn.com. 95 IN CNAME turner.map.fastly.net.
turner.map.fastly.net. 15 IN A 23.235.43.73

;; Query time: 21 msec
;; SERVER: 8.8.8.8#53(8.8.8.8)
;; WHEN: Tue Feb 16 20:21:27 CET 2016
;; MSG SIZE rcvd: 91

As you can see, the standard reply from dig is very verbose and looks like the debug mode you can use with nslookup. Let us break up the entire reply and review each section.

; <<>> DiG 9.9.5 <<>> www.cnn.com ;; global options: +cmd In this section, we can see what version of dig we are using and what global options that are turned on.

;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 16314 ;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1 This section gives us some technical details regarding the answer that was received from the DNS server. It can be turned on and off using the +[no]comments option. However, be aware that this also turns off many other section headers.

;; QUESTION SECTION:
;www.cnn.com. IN A

In the question section, we receive a reminder of what query we sent to the DNS server.



;; ANSWER SECTION:
www.cnn.com. 95 IN CNAME turner.map.fastly.net.
turner.map.fastly.net. 15 IN A 23.235.43.73

In the answer section, we finally receive the answer to our DNS query. We can first see that www.cnn.com resolves to a CNAME called turner.map.fastly.net, which in turn resolves to 23.235.43.73.

;; Query time: 21 msec
;; SERVER: 8.8.8.8#53(8.8.8.8)
;; WHEN: Tue Feb 16 20:21:27 CET 2016
;; MSG SIZE rcvd: 91

The final section of the output tells us how long it took to receive the reply, what server and port we sent our request to, when we did it and the size of the message.

Changing Resource Types
dig is a non-interactive application, which means that all the parameters that you want to use need to be in the query from the start. Dig can perform DNS queries on all available resource types including A, MX, NS, TXT, SOA, CNAME, PTR etc. If you want to receive all resource types, you can use the any parameter as in the following example:

dig any cnn.com

Limiting the Output
Since the default answer from dig is very detailed, you can add parameters that will shorten the answer. In order to do this, use the following example:

[root@bigip02:Licensed:Active:In Sync] config # dig www.cnn.com +short
turner.map.fastly.net.
23.235.43.73

This answer is very limited, and if you still want a detailed answer but without all of the extra information, you will need to first turn off all output sections. You can do this by issuing the parameter +noall and then turning on the sections that you would like. In order to receive a detailed yet short answer, issue the following command:

[root@bigip02:Licensed:Active:In Sync] config # dig www.cnn.com +noall +answer
; <<>> DiG 9.9.5 <<>> www.cnn.com +noall +answer
;; global options: +cmd
www.cnn.com. 249 IN CNAME turner.map.fastly.net.
turner.map.fastly.net. 27 IN A 185.31.17.73



Perform Reverse Lookups
In order to perform a reverse lookup, you will have to add the switch -x. To perform a reverse lookup, issue the following command:

[root@bigip02:Licensed:Active:In Sync] config # dig -x 8.8.8.8 +noall +answer
; <<>> DiG 9.9.5 <<>> -x 8.8.8.8 +noall +answer
;; global options: +cmd
8.8.8.8.in-addr.arpa. 21413 IN PTR google-public-dns-a.google.com.

Query Another DNS Server
If you would like to query another DNS server, you can do so by specifying it in the command using an @ symbol followed by the DNS server. To perform a DNS query towards a different DNS server, issue the following command:

[root@bigip02:Licensed:Active:In Sync] config # dig @195.67.199.39 www.cnn.com +noall +answer +stats
; <<>> DiG 9.9.5 <<>> @195.67.199.39 www.cnn.com +noall +answer +stats
; (1 server found)
;; global options: +cmd
www.cnn.com. 284 IN CNAME turner.map.fastly.net.
turner.map.fastly.net. 0 IN A 23.235.43.73
;; Query time: 9 msec
;; SERVER: 195.67.199.39#53(195.67.199.39)
;; WHEN: Tue Feb 16 21:07:32 CET 2016
;; MSG SIZE rcvd: 91

Performing Multiple Lookups
It is also possible to perform multiple lookups of hostnames and IP addresses by adding them all to a file (one name per line). When you have collected all of the hostnames, you can use the switch -f to ask dig to perform lookups for all of the names contained in that file. This is displayed in the following output:

[root@bigip02:ModuleNotLicensed:Active:In Sync] root # dig -f host-list.txt +noall +answer
www.cnn.com. 108 IN CNAME turner.map.fastly.net.
turner.map.fastly.net. 21 IN A 185.31.17.73
www.bbc.com. 23 IN CNAME www-bbc-com.bbc.net.uk.
www-bbc-com.bbc.net.uk. 23 IN CNAME bbc.map.fastly.net.
bbc.map.fastly.net. 17 IN A 185.31.17.81

dig Parameters
In the following list, we have compiled the common dig parameters that might be useful when troubleshooting DNS. You can also display all of the possible parameters by issuing the command dig -h.



▪ +[no]all - Turns on or off all display flags.
▪ +[no]answer - Turns on or off the answer flag.
▪ +[no]stats - Turns on or off the statistics flag.
▪ +time=[value] - Changes the default timeout period to the value specified.
▪ +retry=[value] - Changes the default number of retries to the value specified.
▪ -f [filename] - Performs a query of all names contained in that file.
▪ -x [IP-Address] - Performs a reverse lookup.
▪ @[dns server] - Changes the default DNS server to the DNS server specified.

In order to run dig on the BIG-IP device you will have to be logged on to the Linux operating system, also known as the Host Management Subsystem (HMS), which is indicated by the command prompt:

[root@bigip02:Licensed:Active:In Sync] config #

Remote Authentication Introduction
The BIG-IP device has the ability to use remote authentication, which separates an application from the underlying authentication technology. This technology is referred to as Pluggable Authentication Module (PAM) and it is a collection of multiple different authentication technologies such as Lightweight Directory Access Protocol (LDAP), Remote Authentication Dial-In User Service (RADIUS) and TACACS+. These technologies are referred to as authentication modules. We’ll discuss these technologies in the upcoming sections.

The LDAP Authentication Module
This authentication module is useful when your authentication information is stored on a remote LDAP or Active Directory server and you want the clients to authenticate using basic HTTP (in other words, user name and password). This module will assist with authenticating users passing through the BIG-IP device and has the ability to indicate if the authentication was a success or a failure. You can even handle LDAP traffic using iRules, where you can configure the BIG-IP device to return specific data in an LDAP response using the following commands:

▪ AUTH::subscribe
▪ AUTH::unsubscribe
▪ AUTH::response_data

The RADIUS Authentication Module
The RADIUS authentication module can also be used for authenticating users passing through the BIG-IP device using basic HTTP. For this technology, the authentication information is instead stored on a RADIUS server.



The TACACS+ Authentication Module
Like the previous technologies, the TACACS+ authentication module will also authenticate users passing through the BIG-IP device using basic HTTP. The difference is that the authentication information is instead stored on a TACACS+ server.

The SSL Client Certificate LDAP Authentication Module
This technology enables you to perform certificate based authentication. It also has the ability to perform authentication based on users and groups. Like the LDAP authentication module, the authentication will be performed on traffic passing through the BIG-IP device. The SSL Client Certificate LDAP module can use two different types of credentials, these being:

▪ SSL Certificates
▪ Groups and roles

First, the BIG-IP device will use the certificate to authenticate the user, but to increase security it can also verify that the user is part of the correct group and role. When using certificates to authenticate the clients, the system will be able to use the following methods:

▪ Usernames - If you do not store certificates in the LDAP database, the BIG-IP device will be able to extract the username contained within the user’s certificate. Then the LDAP server can verify that the user is indeed in the LDAP database. This is a good method for organisations with their own PKI (Public Key Infrastructure) that are certain that if the certificate is valid, the user is valid.
▪ Certificate Map - If you configure the LDAP server to map certificates against users, you can have the LDAP server search its database and retrieve a username based on the certificate being presented to the LDAP server. When the system has found the specified user, it can verify that it is indeed a valid user.
▪ Certificate - It is also possible to configure your LDAP server to incorporate certificates into the user-specific information stored in the LDAP database. Using this method, the LDAP server can compare the presented certificate with the one stored in the LDAP database associated with the user. If the certificates match, then the user is valid.

The SSL OCSP Authentication Module This technology is also a certificate-based authentication module used for authenticating traffic passing through the BIG-IP device. The BIG-IP will verify that the certificate being presented by the client is actually valid by acting as an Online Certificate Status Protocol (OCSP) agent. The BIG-IP will send the certificate to a configured OCSP server that will check if the certificate is valid or revoked. Traditionally, certificates are verified using a CRL (Certificate Revocation List), but CRLs are only updated at certain intervals whereas OCSP uses live data and is therefore, in some scenarios, more up to date. You can enable the BIG-IP to use both the CRL and OCSP, but be careful as this requires the certificate to be valid on both the CRL and the OCSP.



The CRLDP Authentication Module This technology is also used for authenticating traffic passing through the BIG-IP device and is very similar to the SSL OCSP Authentication Module. Instead of using OCSP and CRL it uses a technology called Certificate Revocation List Distribution Points (CRLDP). CRLDP is used to build an entire network of CRLs known as CRL distribution points that can be used to verify a certificate.

The Kerberos Delegation Authentication Module The Kerberos Delegation Authentication Module is also used for authenticating traffic passing through the BIG-IP device. This module uses the Microsoft Windows Integrated Authentication technology to authenticate the clients. This module acts as a proxy for the Kerberos credentials: when a client tries to access a server within the domain, the web browser fetches the Kerberos credentials. These are known as delegated credentials and are sent to the BIG-IP device, which retrieves the credentials for the real server that the client is trying to access. Once the BIG-IP device has received the credentials it will send those back to the client. Authentication traffic is normally routed through a Traffic Management Microkernel (TMM) interface. This means that the traffic will be routed through an interface associated with a self-IP and VLAN instead of the management interface. Therefore, the remote authentication will not work if the TMM process has been stopped.

The Network Time Protocol (NTP) Time is a very important aspect when it comes to IT equipment, no matter if you talk about network devices, Active Directory, or other applications. This is because they require the time to be the same across all devices and applications in order for numerous functions to work. VPN tunnels need to have the same time between the two endpoints negotiating the tunnel, because the re-keying of the tunnel happens at a specific time. If one of the devices is 10 minutes faster, then it will try to negotiate before the other, which will not work. F5 BIG-IP also has a number of functions that depend on synchronised time and date in order to work. That is why NTP is one of the configuration settings that you configure during the initial setup. Network Time Protocol (NTP) is a networking protocol designed to synchronise the clock between IT equipment and it was designed in 1985. The protocol uses an intersection algorithm that is based upon a modified version of Marzullo’s algorithm and it is so accurate that it can keep all participating hosts synchronised within a few milliseconds. When the BIG-IP system has an unsynchronised clock, you might experience authentication issues between BIG-IP APM and Active Directory™ and problems with ConfigSync, just to name a few.



Configuring an NTP Server To configure an NTP server on your BIG-IP system, use the following instructions:

1. Log on to the WebGUI.
2. Navigate to System > Configuration > Device.
3. Click on NTP.
4. In the Properties area, type the IP address of the NTP server in the Address field and click Add.
5. Repeat step 4 until you have added all of your NTP servers.
6. When finished, click Update.
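If you prefer the CLI, the same change can be made with tmsh; the addresses below are only placeholders for your own NTP servers:

tmsh modify sys ntp servers add { 192.0.2.1 192.0.2.2 }
tmsh save sys config

The first command adds the servers to the sys ntp configuration and the second saves the running configuration to disk.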

Troubleshooting NTP If you experience problems with the date or time, or if your BIG-IP is in the wrong timezone, the time may not be synchronised correctly. When this happens, the first step is always to verify your NTP configuration. If the configuration settings are correct, proceed with the following instructions.

Verifying the NTP daemon service

1. Log on to the CLI of the BIG-IP system.
2. To verify the status of the NTP daemon, enter the following command:

# tmsh show /sys service ntpd

3. If the output states: ntpd is stopped, enter the following command:

# tmsh start /sys service ntpd

4. If the NTP daemon requires a restart, enter the following command:

# tmsh restart /sys service ntpd

5. Exit the CLI by entering the command:

# exit
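While you are logged on, it can also help to confirm the system's current date, time and timezone. These are generic commands and the output will of course vary per system:

date
tmsh show sys clock

If the timezone is wrong, it can be corrected under System > Platform in the WebGUI, or via the sys ntp timezone setting in tmsh.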

Verifying the Communication Between the BIG-IP System and the NTP Peer Server To verify the communication between the BIG-IP system and the NTP server you can use a utility named ntpq. This is an application that needs to be run in bash.



When used, ntpq will print out many different fields which can be quite tough to interpret. Instead of going through each field we’ll simply show an example of a successful NTP peer query.

1. Log on to the CLI of the BIG-IP system.
2. Verify the NTP communication by entering the command:

# ntpq -np

If the NTP communication is successful the output will look something like this:

[root@bigip1:Active:Standalone] config # ntpq -np
     remote          refid   st t  when  poll  reach   delay   offset  jitter
==============================================================================
+193.11.166.2    .PPS.        1 u     4    64    377   21.589  -1.397   1.760
-193.11.166.18   .PPS.        1 u    65    64    377   18.279  -2.387   1.448
+193.11.166.36   .PPS.        1 u          64    377   14.323  -2.973   1.478
*193.11.166.52   .PPS.        1 u     2    64    377   14.133  -1.561   1.190

In this output, what we are looking for is the reach field. In our case the value is 377, which indicates that the last eight attempts to communicate with the NTP peer were successful. The when field displays when the last response was received from the NTP peer. For instance, for the NTP peer with the address of 193.11.166.2 we received the last reply 4 seconds ago. The delay, offset and jitter fields display the following:

▪ delay - The current estimated delay; the transit time between these peers in milliseconds.
▪ offset - The current estimated offset; the time difference between these peers in milliseconds.
▪ jitter - The current estimated dispersion; the variation in delay between these peers in milliseconds.

If the NTP communication is unsuccessful the output will look something like this:

[root@bigip1:Active:Standalone] config # ntpq -np
     remote          refid   st t  when  poll  reach   delay   offset  jitter
==============================================================================
 193.11.166.2    .PPS.        1 u   622    64      0   21.261   0.680   0.480
 193.11.166.18   .PPS.        1 u   619    64      0   26.870  -0.563   2.139
 193.11.166.36   .PPS.        1 u   619    64      0   13.309   0.521   0.717
 193.11.166.52   .PPS.        1 u   620    64      0   12.557   0.994   0.944

If you compare this with the previous output, you can see that the reach field is down to zero (0), meaning that the last eight attempts to reach the NTP peer have failed. This means that we are somehow unable to reach the NTP peer. The when field also indicates that we have not received a reply from the NTP peer for roughly 600 seconds.



For more information about the ntpq utility visit the following solution article: K10240: Verifying NTP peer server communications.

Verifying the Network Connectivity to the NTP Peer Server If you are still experiencing issues with NTP and there is a firewall between the BIG-IP system and the NTP peer, verify that the traffic is not blocked. If you are using locally managed NTP servers, ensure they are working properly. Lastly, verify the network reachability to the NTP peer by checking the routing towards the NTP server.

Chapter Summary

▪ There are multiple different scenarios that may hinder the access to the management interface. It could be caused by a misconfiguration, a firewall blocking the traffic, problems on the client or even the BIG-IP system being turned off due to a hardware failure.

▪ In order to verify that the DNS servers specified are working as they should you can simply send them a DNS query and see if they are responding with the correct result.

▪ The BIG-IP device has the ability to use remote authentication which separates an application from the underlying authentication technology. This technology is referred to as Pluggable Authentication Module (PAM) and it is a collection of multiple different authentication technologies such as Lightweight Directory Access Protocol (LDAP), Remote Authentication Dial-In User Service (RADIUS) and TACACS+.

▪ Time is a very important aspect when it comes to IT equipment, no matter if you talk about network devices, Active Directory, or other applications. This is because they require the time to be the same across all devices and applications in order for numerous functions to work.

Chapter Review 1. What are the advantages of using Telnet as a troubleshooting tool?

a. It can help you diagnose the path (route) along with measuring the delays between each hop on the way to an IP host.
b. It verifies if a particular host is reachable and it will also measure the round-trip time.
c. Gives you the possibility to verify if a service is running.
d. It gives you the possibility to verify the content of an application.

2. What tool can be used to generate traffic containing application requests?

a. Ping
b. Traceroute
c. Tracert
d. cURL



3. What is the purpose of the Port Lockdown feature?

a. Controlling the traffic being sent to the Self-IP addresses.
b. Controlling the traffic being sent to the Management Port.
c. Controlling which devices can be connected to the BIG-IP system’s ports.
d. Controlling which TCP/UDP ports a virtual server should accept traffic on.

4. Which protocol uses the port number TCP:4353/UDP:4353?

a. SNMP
b. iQuery
c. DNS
d. RIP

5. To which file will the BIG-IP system write log entries for Packet Filtering?

a. /var/log/pktfilter
b. /var/log/ltm
c. /var/log/afm
d. /var/log/asm

6. Which tools can be used to send DNS queries? (Choose two)

a. ping
b. iquery
c. dig
d. traceroute
e. telnet
f. nslookup



Chapter Review: Answers 1. What are the advantages of using Telnet as a troubleshooting tool?

a. It can help you diagnose the path (route) along with measuring the delays between each hop on the way to an IP host.
b. It verifies if a particular host is reachable and it will also measure the round-trip time.
c. Gives you the possibility to verify if a service is running.
d. It gives you the possibility to verify the content of an application.

The correct answer is: c

The great thing about telnet is that it operates at the application level and gives us the advantage to verify if a particular service is running.

2. What tool can be used to generate traffic containing application requests?

a. Ping
b. Traceroute
c. Tracert
d. cURL

The correct answer is: d

cURL is a command line tool used to generate application traffic that can be used to verify if an iRule is working correctly by generating a specific HTTP GET request containing specific HTTP headers.



3. What is the purpose of the Port Lockdown feature?

a. Controlling the traffic being sent to the Self-IP addresses.
b. Controlling the traffic being sent to the Management Port.
c. Controlling which devices can be connected to the BIG-IP system’s ports.
d. Controlling which TCP/UDP ports a virtual server should accept traffic on.

The correct answer is: a

The Port Lockdown setting relates to Self-IP addresses, which are used for the TMM switch interfaces, not the HMS management interface. This feature allows you to lock down the ports and services that the Self-IP addresses accept, thus securing those interfaces from potential unwanted traffic. This restriction does not have anything to do with client traffic but rather management purposes.

4. Which protocol uses the port number TCP:4353/UDP:4353?

a. SNMP
b. iQuery
c. DNS
d. RIP

The correct answer is: b

5. To which file will the BIG-IP system write log entries for Packet Filtering?

a. /var/log/pktfilter
b. /var/log/ltm
c. /var/log/afm
d. /var/log/asm

The correct answer is: a

Packet Filtering rules have their own log file where the decision of a packet filter is logged. However, it will only log if packet filtering is enabled and if you have configured logging on the packet filtering rule. The log entries are written to the file /var/log/pktfilter.

6. Which tools can be used to send DNS queries? (Choose two)

a. ping
b. iquery
c. dig
d. traceroute
e. telnet
f. nslookup

The correct answer is: c and f



18. Troubleshooting and Managing Local Traffic As most in the networking field and beyond will attest, troubleshooting even basic routing and switching issues can be difficult. Dealing with a distributed, multi-protocol system (the collection of devices that comprise a network) is an order of magnitude more complex than dealing with a single server. Whilst this chapter focuses on troubleshooting connectivity at the point where an F5 is involved with a traffic flow, do keep in mind that the wider network infrastructure cannot be ignored. An F5 also provides and supports rather more features and protocols than a typical router and typically operates at all layers of the OSI model. The interaction between features at different OSI model layers is very important, and an open mind and a good understanding of those interactions are essential.

Traffic Processing Order The first step in resolving a connectivity (rather than management) issue is understanding the order in which various features are applied to traffic and then what traffic processing listener eventually handles that traffic. Listeners can be Virtual Servers, SNATs or NATs. It’ll probably help if you split the tasks performed into three general functional areas. Firstly, there are a number of control plane functions, such as ARP and routing (from an exam perspective you can pretty much ignore these). Second are what I consider basic networking functions such as packet filtering. Lastly, there are more complex traffic manipulation functions such as load balancing and NAT. Of course, when the F5 acts as a full proxy (as in many cases), the control plane and basic networking functions occur or are relied upon twice: once on the client side and again on the server side. Outbound routing also occurs. To keep things relatively simple at this point, these facts will be ignored, as will the possible impact of iRules.

Control Plane Functions These don’t directly relate to BIG-IP, on the client side at least, but it’s well worth keeping these in mind for the real world. Traffic wouldn’t arrive at your device unless it was replying to ARP requests and suitable static or dynamic routes on surrounding network devices exist. Server-side ARP is also important and routes are required on the BIG-IP itself.

Packet Processing Order Assuming any control plane functions are operating normally, we should consider how packets are handled once they arrive. This isn’t as simple as it seems; we can’t just follow the OSI model here, from layer 1 to 7, although it does provide some useful guidance. The exam assumes Interfaces, Trunks, Tunnels, VLANs and Self IPs are configured and operating correctly. Also, Bandwidth Controllers, Rate Classes, Route Domains, Connection Limits, Auto Last Hop, VLAN-Keyed Connections and SNAT Packet Forwarding settings are all ignored. Almost all potential packet handling functions are, as I’m sure you know, highly configurable. Keep that in mind particularly where a feature can be enabled on a per VLAN or Tunnel basis and NAT of some kind is involved. Anyway, enough talk, here’s the order of operations once a packet arrives, with some detail on what configuration may limit their impact:



Packet Filtering
o Enabled or not?
o Are rules applied to the relevant VLAN or Tunnel?
o Is the Unhandled Packet Action accept, discard or reject?
o Are there MAC address, VLAN or IP address exemptions?

DDoS Functions
o Are packets malformed in some way?
o Have SYN cookies been activated due to high connection rate?

SYN Cookie protection is a standard feature that the BIG-IP system can use to protect itself from a SYN Flood attack. The feature is on standby until it reaches a certain threshold, either on a specific virtual server or the BIG-IP system itself. When a client sends a request to a BIG-IP system that has activated SYN Cookie protection, the BIG-IP will send back a SYN-ACK containing an encoded secret with the connection information that is otherwise stored in the connection table. Once the BIG-IP has sent this SYN-ACK containing the SYN Cookie, it will remove the connection from its connection table. If the client is legitimate, it will respond back to the BIG-IP with the SYN Cookie and the BIG-IP can rebuild the connection and start processing traffic. It does this in order to prevent the connection table from becoming full and overloading the system.

Established Connections
o Packets processed based on the connection table entry information and other features such as Persistence.

New Connections
o Is there a listener for the destination? See the next section for details on how one is selected.
o Is it enabled on the relevant VLAN or Tunnel?
o Is the source address allowed?
o Is the relevant network layer protocol configured?
o Is address and port translation enabled?

Established Connection processing assumes there has been no change in Listener status and that if the Packet Filter ‘Filter established connections’ setting is enabled, no changes have been made to Packet Filter Rules. If this isn’t the case, a connection may pass through the New Connections logic.

Listener Processing Order There are multiple ways for traffic to be accepted by the BIG-IP system. It can either be through an already established connection (matches an existing connection in the connection table) or through a Listener which we have covered previously in this book.



When the BIG-IP system receives traffic, it handles it in the following order:

1. Existing connection in connection table - Is the connection already present in the connection table?
2. Packet filter rule - Is the traffic allowed by the Packet Filter?
3. Virtual server
4. SNAT
5. NAT
6. Self-IP
7. Drop

Where Virtual Servers are concerned, one is matched based on this order of priority:

▪ IP Address:Service Port
▪ IP Address:*
▪ IP Network:Service Port
▪ IP Network:*
▪ *:Service Port
▪ *:*

Here’s a few examples, where the inbound traffic has a destination IP address of 10.11.12.55 and TCP port of 6488. First match wins, so later matches would only occur if the prior higher priority Virtual Server(s) didn’t exist.

▪ 10.11.12.55:6488
▪ 10.11.12.55:*
▪ 10.11.12.0:6488
▪ 10.11.12.0:*
▪ *:6488
▪ *:*

Refer to article K14800: Order of precedence for virtual server matching (11.3.0 and later) for more detail on this subject. If a matching Virtual Server is down or disabled, any available lower priority Virtual Server is not selected. You can change this behavior by setting the BigDB TM.ContinueMatching variable to enabled. If the traffic is not matched to a Virtual Server, matching against SNATs and NATs occurs, as follows:

▪ The most specific origin (original source address or network) of multiple SNAT objects will be used.
▪ The most specific origin (original source address or network) of multiple NAT objects will be used.
▪ A specific (not wildcard or default) SNAT will take precedence over a specific NAT object.

If traffic matches both a Destination (Virtual Server, SNAT or NAT) and a Source Listener (SNAT, NAT), the Destination Listener takes precedence.



Managing & Troubleshooting Virtual Servers & Pools This section of the book is very scenario based. In order to fully understand how to manage and troubleshoot both virtual servers and pools, you as a BIG-IP administrator will need experience working with the product. Let the following sections be a guideline on how you should administer the BIG-IP system.

Managing Virtual Servers When working as a BIG-IP administrator you will face many different setups and applications. It is also not unusual that you will receive the configuration requests from individuals who have no idea how the application works and communicates. Since the BIG-IP system has masses of ways in which it can be configured, it is essential to know how the application works and communicates. What protocols does it use? Should the virtual server use SSL termination or does the SSL session have to be terminated on the end-server? Does the application require persistence? If so, how should we persist the client connections? Also, asking these questions of the individual who ordered the virtual server may not provide you with the information you need to configure it correctly. In those cases, it is much better to speak with the application team who administers the end-servers so that the configuration on the BIG-IP matches the application. All of the questions you might ask boil down to everything we have discussed in this book in regards to configuration. To summarise, there are a few guidelines you can use to determine the best configuration for the application.

What Protocols Does the Application Use? Does the application use UDP or TCP? If the virtual server is using a TCP profile while the pool member is communicating over UDP, the application will not work at all. Therefore, it is important to receive this information. Is the application using any layer 7 protocols such as FTP or HTTP? Even though a standard HTTP virtual server with only a TCP profile (no layer 7 profile) would work, some functions may be dependent on the virtual server understanding the HTTP protocol. If no HTTP profile is assigned to the virtual server then it will not understand HTTP traffic. It would simply understand the traffic passing through up to the transport layer (layer 4). One of the functions that is dependent on the HTTP profile is cookie persistence. This is because the cookie is set using an HTTP header and the only way the BIG-IP system can read and process the cookie is to understand the HTTP protocol. That is why, when you try to enable cookie persistence without an HTTP profile assigned, the system will immediately display an error. Another function which is dependent on the HTTP profile is iRules that trigger on events that are HTTP specific, such as HTTP_REQUEST and HTTP_RESPONSE. You cannot apply such an iRule to a virtual server which does not have an HTTP profile assigned.



On What VLAN Will the Client Access the Application? Under the configuration tab on a virtual server you have the option called VLAN and Tunnel Traffic. The BIG-IP system is a default deny device, meaning it will only accept traffic that has been specifically configured and allowed through the device. The VLAN and Tunnel Traffic option defines on what VLAN and Tunnel the virtual server should “listen” and take in traffic. Therefore, it is essential that you know where the client will access the virtual server, otherwise you open up to a lot of security vulnerabilities because the default option is to listen on all VLANs and Tunnels. In order to give you an example, in the following situation we have an HTTP virtual server with three pool members. These three pool members depend on data stored on SQL servers which are also load balanced through the BIG-IP system using a different virtual server. All of this traffic should go through a firewall in order to increase security. This means that the traffic to the virtual server has to enter from the server_net VLAN in order for this to work. Presently we have configured the BIG-IP system to receive traffic on all VLANs and it is all displayed in the following diagram:



Since the BIG-IP system has a connection to the webservers through the dmz VLAN, the webservers also have a direct connection to the self-IP address of the BIG-IP. If we were to infiltrate one of the webservers we could easily assign a static route pointing 10.10.15.250 to the BIG-IP’s self-IP address on the dmz VLAN. This would cause the traffic to go directly to the F5 instead of going through the firewall. When the traffic is sent to the BIG-IP system’s self-IP address on the dmz VLAN, it will take in the traffic and process it since the virtual server listens on all VLANs. In order to solve this, we need to configure the virtual server to only listen on the server_net VLAN. This is displayed in the following diagram:

As you can see from the previous scenario, it is very important to lock down the virtual server to only listen where it is necessary.



How Should the BIG-IP System Handle SSL Connections? This is a very confusing topic for many application owners. They need to secure their application using an SSL tunnel but they are not sure how it should be configured. As we mentioned in the SSL chapter of this book there are currently three different setups when it comes to configuring SSL on the BIG-IP system. These are:

▪ SSL Offloading – The BIG-IP handles all SSL connections. Traffic to the pool members is unencrypted.
▪ SSL Bridging – The BIG-IP terminates the client SSL session and establishes a new SSL connection to the pool member.
▪ SSL Passthrough – The BIG-IP simply forwards the SSL connection to the pool member.

It is very important that you ask the application owner how the SSL session should be configured and, if they are not sure, to then explain to them the advantages and disadvantages of each scenario. If you are configuring SSL Offloading then you will only need a Client SSL Profile. If SSL Bridging is used then you will need both a Client SSL Profile and a Server SSL Profile in order for the BIG-IP system to re-encrypt the session. With SSL Passthrough you do not need either of the profiles since the SSL connection will just be forwarded to the pool member.

SSL Cipher Suites The BIG-IP system and TMOS are delivered with two different SSL stacks that use certain cipher suites. For those of you who do not know what a cipher suite is, it is the named combination of authentication and encryption algorithms that are used when negotiating the security settings of a TLS or SSL tunnel. The following list contains examples of common cipher suites:

▪ TLS_RSA_WITH_AES_128_CBC_SHA256
▪ TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
▪ TLS_DHE_DSS_WITH_AES_128_CBC_SHA256
▪ TLS_RSA_WITH_AES_128_GCM_SHA256
▪ TLS_DH_RSA_WITH_AES_128_GCM_SHA256

The SSL stacks that are delivered with TMOS are the NATIVE stack, which is built into the Traffic Management Microkernel (TMM), and the COMPAT stack, which is based upon the OpenSSL library. The NATIVE stack is an optimised SSL stack that the BIG-IP system can accelerate using its hardware and is therefore recommended by F5. Starting in BIG-IP 11.x, the default SSL profiles always use ciphers from the NATIVE SSL stack, and if you choose to use the COMPAT stack you will have to manually specify this in the SSL profile. COMPAT ciphers are written in the following manner:

▪ COMPAT:AES128-GCM-SHA256

F5 creates their own default cipher suite that is applied to the default SSL profile and they name it DEFAULT. This is actually displayed when reviewing the configuration of the clientssl profile:



The cipher suites that are included in the DEFAULT cipher suite differ between TMOS versions as F5 removes the cipher suites considered to be insecure. In order to determine which cipher suites are used in your current version, log on to the CLI of the BIG-IP system and enter the following command in bash:

tmm --clientciphers DEFAULT

SSL Cipher Mismatch Every year, each component manufacturer releases new and faster hardware in order to increase the performance of our computers. The better the performance, the easier it gets to hack the older technologies. The same goes for encryption, and that is why major organisations like Google are constantly trying to find weaknesses in cipher suites in order to replace them with more secure ones. The browser developers are also removing old and insecure ciphers, which can create problems for those of us hosting a webpage as the ciphers we offer our customers may no longer be supported by their browsers. When the client is trying to establish an SSL/TLS session with the server, a negotiation takes place where they agree on a cipher. If the server does not offer a cipher suite that is supported by the browser, the client will close the connection with an error:

1 2 0.0013 (0.0000) S>CV3.2(2) Alert level fatal value handshake_failure

As a BIG-IP administrator, you should always make sure you offer cipher suites that are secure and supported by the web browser. The best way to make sure that you are using a secure and supported cipher suite is to create your own client SSL profile with defined ciphers. This is done by simply editing the Ciphers section of the Client SSL profile. However, choosing the right ciphers can be a bit of a headache as this changes from time to time, but it will in the end give you more control.
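If you want to test what a particular virtual server negotiates from the command line, one option is the OpenSSL client; this is only a sketch and the address and cipher name below are examples:

openssl s_client -connect 10.10.15.250:443 -cipher 'ECDHE-RSA-AES128-GCM-SHA256'

If the handshake succeeds, the output includes the negotiated protocol and cipher; if the virtual server does not offer that cipher, the handshake fails in much the same way as the browser error shown above.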



The best thing you can do is research which ciphers are the most common and secure. You can use the online SSL Server Test from Qualys, where you get a complete security report for your SSL webpage. Simply browse to the following webpage: https://www.ssllabs.com/ssltest/. Another positive aspect is that a custom built client SSL profile will stay the same after an upgrade while the DEFAULT list might change, causing problems for your clients. To read more about changing the cipher suites of an SSL profile, check out the AskF5 article: K17370: Configuring the cipher strength for SSL profiles (12.x - 13.x).

Managing Pool Members Even though the pools and pool members have fewer options to configure than the virtual servers, it is still essential to configure them correctly in order for the application to work. Again, in order to configure it correctly you must know how the application works and communicates. What ports are the end-servers listening on and what load balancing method should be used? Do the end-servers have the same capacity or do some have better performance? Do the end-servers host other services, meaning we should load balance based on the nodes instead? These are some of the questions you need answers to, and the best way to find these is to speak with the application owner. Another topic which most application owners do not think about is the monitoring of the application, which we cover in greater detail in the next section.

Monitoring In order to provide your clients with the best application, monitoring it is very important. For most applications, you will have multiple pool members in your pool and if one goes down, another one will be available to assist the client. As we discussed in the monitoring chapter, there are a lot of different monitors used for every specific purpose and it really depends on how thorough the check is going to be. Is it enough to monitor the TCP/UDP port or should we assign a check that verifies the content? One common mistake when configuring monitors is that the default monitor is often used. For some monitors, this will work fine, but for instance with the HTTP monitor it will render it completely useless in terms of verifying the content of the webpage. This is because the default HTTP monitor is configured to send a GET request to the root of the page (/) and the receive string is configured with an empty value. This means that the monitor does not care if it receives a 404 or any other error; the pool member will still be marked as available. For content checks, you need to build a monitor that suits your application. One thing the application owners can do is to create a page called /monitor and this is the page you request in your GET request. On this page, you can have the word OK when the site is functioning correctly and FAILED when it is not. The receive string can be configured to match OK and if that is not present the monitor will mark the pool member/node as offline. This is just one of the examples available when configuring monitors on your BIG-IP system. Just remember to create a monitor that will actually trigger when something is wrong.
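As a rough sketch of that approach (the monitor name, host header and page are only examples), such a monitor could be created from tmsh like this:

tmsh create ltm monitor http app_monitor defaults-from http send "GET /monitor HTTP/1.1\r\nHost: app.example.com\r\nConnection: Close\r\n\r\n" recv "OK"

The monitor is then assigned to the pool, and a pool member is only marked available while its /monitor page returns OK.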



Troubleshooting Virtual Servers How you troubleshoot a virtual server really depends on what issue you are having. However, there are a few guidelines which you can use in order to narrow down the problem.

DNS record For most environments, you will link a virtual server address to a DNS record in order for your clients to more easily remember the address. When you get a report that a virtual server is not functioning correctly, the first thing you should do is verify that the DNS record is actually resolving to the correct virtual server IP address. The best thing you can do is ask the client to perform a resolution of the DNS name so you can see their results. Perhaps the client has a different DNS server than it should have, or none at all. If the client cannot resolve the name then the problem lies there. If the site is external, perform a DNS resolution towards a public DNS server such as Google’s and see if it can resolve the name. If it cannot, then the problem is probably caused by not having a DNS record for the virtual server. You can also ask the client to access the resource using the IP address instead and see if this solves the problem. If that is the case, then the problem is definitely with the DNS server.
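For example, assuming the application’s DNS name is www.example.com, you can compare what the client’s resolver and a public resolver return:

dig www.example.com
dig @8.8.8.8 www.example.com

If the two answers differ, or neither returns the virtual server address, the DNS record is the first thing to fix.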

Is the Traffic Reaching the BIG-IP System? If the DNS record is OK and the client can actually resolve it then check if the traffic is reaching the BIG-IP system. This can be easily done by performing a tcpdump through the CLI. We covered tcpdump earlier in this book but here is an example:

tcpdump -i 0.0 host [client IP] and host [virtual server IP]

Enter this command and then try to reach the virtual server. Is the command line presenting any information? Since we are not saving the output to a file, the communication should be presented directly in the SSH client. This is presented in the following diagram:

If you are presented with the following result, then the traffic is reaching the virtual server successfully. If you do not see any traffic, then it is most likely blocked by a firewall or is not routed correctly to the BIG-IP system.



Check the Status of the Virtual Server If the DNS record is correct and you can reach the virtual server but still cannot access the application, then you will need to check the status of the virtual server. Perhaps the pool assigned to the virtual server is currently down? Go into Local Traffic > Virtual Servers > Virtual Server List and search for the virtual server. What is its current status? If the virtual server is marked as offline, then the monitor assigned to the virtual server is failing, probably because something has happened on the end-servers. If it is marked as unknown, then we do not have a monitor marking it as offline but something might still have happened on the end-server. How we troubleshoot the pool members is covered in the next section. If the virtual server is marked as available, then we either have a poorly assigned monitor or there is something configured on the virtual server that is not aligned with the pool members.
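You can also check the status from the CLI; the virtual server name below is just a placeholder:

tmsh show ltm virtual vs_example

The output includes the Availability, State and status Reason fields, which is often quicker than clicking through the GUI.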

What Error Are You Getting When Accessing the Virtual Server? When all of the above has been checked, the best way to rule out the BIG-IP system is to try to access the application directly on the pool members, if this is possible. If it is a website, then try to enter the pool member IP address into the web browser and see if you can access it. If you can, then there is something on the BIG-IP system causing the outage. Ask the clients what error they are presented with when trying to access the application. Can they partially load the application with some objects not loading, or are they receiving a certificate warning? Perhaps the BIG-IP system is SSL offloading and the certificate assigned to the Client SSL Profile has expired. If possible, try to access the resource yourself and see if you can replicate the issue. When the virtual server has a resolvable DNS record, can be reached by the clients, and the BIG-IP system has access to the pool members (they can be reached on their ports), plus we can access the pool members directly, then the problem is most likely caused by a misconfiguration. Go through the entire virtual server and see if it has all of the profiles and settings it needs in order to function. To find your issue, you can again use tcpdump with the following command:

tcpdump -nn -s0 -i 0.0:nnn host [virtual server IP] or host [pool member 1] or host [pool member 2] or arp or icmp -w /shared/tmp/Cxxxx_tcpdump_$(date +%d_%b_%H_%M_%S)_$HOSTNAME.pcap

This tcpdump command will gather full sized packets without resolving the ports and IP addresses, sniffing only for the virtual server and its pool members. Add all of the pool members assigned to the virtual server so you do not miss any packets. It will also capture all of the arp and icmp packets. The packet capture will be saved to /shared/tmp/ where you can download it and view it in Wireshark, which is easier to read for most administrators. Analyse the packet capture and see if you can find anything that can cause the current issue. If you, for instance, see that the BIG-IP system is sending SYN requests to the pool members and not receiving any replies, plus the source IP address in that request is the same as the client IP address (not the BIG-IP), then source NAT is not enabled. The reason you are not receiving a reply is most likely that the pool member is sending the reply to its default gateway rather than to the BIG-IP system. Enable Source NAT and see if this changes anything.
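If SNAT does turn out to be the issue, one way to enable SNAT Automap on the virtual server from tmsh (the virtual server name is an example) is:

tmsh modify ltm virtual vs_example source-address-translation { type automap }

After the change, repeat the packet capture and confirm that the server-side source address is now a self-IP of the BIG-IP.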



If the packet capture reveals that you are receiving reset (RST) packets from the virtual server, then verify that you are listening on the correct VLAN or Tunnel. If the traffic is entering a VLAN on which the virtual server is not listening, the traffic will immediately be rejected, producing the output you see in the following picture.

Adjust the Enabled on option and see if this resolves your problem. There is a multitude of different scenarios which you might experience, and it is impossible to go through them all. Gaining practical experience working with the BIG-IP system is best way to learn how you should tackle each problem.

Troubleshooting Pool Members If the monitors on the BIG-IP system are failing for some pool members, there are a few potential issues that might be the cause. It could be that the traffic is not reaching the pool members or that the service is not listening on the end-server. In order to determine this rather quickly you can log on to the BIG-IP system through the CLI. Then try to access the pool member on its respective port using telnet and see if you receive a reply. If you do, then review the monitor assigned to the pool. Perhaps the application owners have modified the content, which causes the health monitor to fail. If you receive a reset like we present in the following picture, then the service is not available or is being blocked on the end-server and the application owner needs to be contacted.
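As a quick sketch (the IP address, port and pool name are only examples), you could check a member and its monitor status like this:

telnet 10.10.15.10 80
tmsh show ltm pool http_pool members

If telnet connects, the service is listening and the focus shifts to the monitor definition; if the connection is refused or times out, the problem is on the server or network side.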

Telnet is a great tool for quickly verifying if the service is listening, but you could also perform a packet capture if you need more detailed information. Since telnet is based upon TCP this cannot be used to test services running on UDP. For that we use the tool Netcat. Netcat is a network utility which is used to read and write data from networks using the command-line. In order to test a UDP service from the BIG-IP system using Netcat, use the following command in bash:

nc -vz -u [IP address] [Port]

The result should look like the following:



nc -vz -u 10.10.15.10 53
Connection to 10.10.15.10 53 port [udp/domain] succeeded!

As we mentioned previously in this book, cURL is also a great tool to troubleshoot pool members. You have the ability to generate client traffic from the BIG-IP system towards the pool members to see what response you receive. This can be useful when troubleshooting pool member monitors by sending the same GET request as the monitor and checking what kind of response you receive back (a minimal example follows below). Starting from BIG-IP v11.5 the monitor logging feature was introduced, making this less necessary, but cURL is still a great tool for many troubleshooting scenarios. Again, just like in the previous examples, there are many different scenarios for problems that might arise with pool members and going through them all would be impossible. Gaining practical experience working with the BIG-IP system is the best way to learn how you should tackle each problem.
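For instance, assuming the health monitor requests /monitor on port 80, you could run the following from the BIG-IP (the IP address and path are examples):

curl -i http://10.10.15.10/monitor

The -i switch includes the response headers in the output, so you can see both the status code and the body that the monitor's receive string would have to match.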

Impact When Modifying the Configuration When modifying the configuration, the impact can be either small or big. It really depends on what you modify. Either way, you should always consider the possible outcome and always keep a backup of the currently running configuration before you start modifying it. Therefore, it is good practice to create a UCS backup prior to your changes. One of the impacts that you will run into is when you are changing the certificate in the Client SSL Profile. As soon as the certificate has been changed the client connections will be terminated and they will have to renegotiate the SSL sessions. Therefore, it might be a good idea to modify this configuration when the application has the fewest users in order to mitigate the impact.

Changes Not Taking Effect Immediately There are some scenarios where changing the configuration will not immediately take effect. When updating the certificate key pair, you can either upload a complete new pair or upload a new certificate but keep the existing key (overwrite the certificate). No matter how you import the certificate into the BIG-IP system, the new certificate will not be used until the Client SSL profile is updated. This also applies to certificates that have the exact same name as the previous ones, because the old certificate is still loaded into the BIG-IP’s RAM. This is one example of changes not taking effect immediately.

Taking a Pool Member/Node Offline One of the duties of an application owner is to perform life-cycle management and perform upgrades/updates on the end-servers. This means you will occasionally have to take some servers offline. The benefit of having multiple pool members is the freedom to perform upgrades and updates without affecting any services. On the BIG-IP system there are two ways you can take a pool member or node offline and we’ll cover them in the following sections.

Disabled You can set either a node or pool member as Disabled which is the preferred method when planning a maintenance window for a server or application.



If you set the node as disabled this will also affect all pool members which are using the node object as they are hierarchically linked together. When you set a node or pool member as disabled the following connections are still allowed to communicate:

▪ Active Connections
▪ Persistent Connections

When using disabled, the pool member or node will be slowly taken out of service, ensuring minimal impact on the clients. So long as the client has persistence to the pool member or node they can still access it, which can sometimes be a problem. For instance, if the virtual server is configured with cookie persistence with an expiration of 7 days, it will take an entire week for all connections to time out and eventually end up at another pool member/node.

Forced Offline When you need to quickly take a pool member or node offline then Forced Offline is a far better option. An example would be that you are experiencing a problem on the end-server as the service has crashed and will not come back online again. When this happens, you should set the pool member/node as Forced Offline. When a pool member or node is set to Forced Offline the following connections are still allowed to communicate:

▪ Active Connections

We previously mentioned that configuration would not take effect immediately and this is one of them. When you receive an urgent call from the application owners requesting you to take down a specific server, if you choose Disabled instead of Forced Offline you might still hear from the application owner that the server is still available. This is because the persistent connection is still allowed through. Solve this by choosing Forced Offline.
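A rough tmsh equivalent of the two options, assuming a pool called http_pool and a member 10.10.15.10:80, looks like this:

tmsh modify ltm pool http_pool members modify { 10.10.15.10:80 { session user-disabled } }
tmsh modify ltm pool http_pool members modify { 10.10.15.10:80 { state user-down session user-disabled } }

The first command corresponds to Disabled; adding state user-down, as in the second command, corresponds to Forced Offline.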



If you are wondering why existing connections are still allowed to complete, it is because F5 recommends that functioning existing connections should have time to finish their operations before being terminated. You might not agree with this and may, instead, like to terminate the connections. You can do this by performing the following instructions:

Deleting Existing Connections to a Pool Member

1. Log in to the BIG-IP command line.
2. Delete all connections to the pool member by using the following command syntax:

tmsh delete /sys connection ss-server-addr [member IP address] ss-server-port [member server port]

For example, to delete all port 80 connections to the pool member 10.10.15.250:http, you would type the following command:

tmsh delete /sys connection ss-server-addr 10.10.15.250 ss-server-port 80

Deleting Existing Connections to a Node

1. Log in to the BIG-IP command line.
2. Delete all connections to the node by using the following command syntax:

tmsh delete /sys connection ss-server-addr [node IP address]

For example, to delete all connections to the node 10.10.15.250, you would type the following command:

tmsh delete /sys connection ss-server-addr 10.10.15.250

Running this command will delete all connections to the address 10.10.15.250 no matter what pool members are using the node. You can also take a pool member or node directly offline by assigning it a faulty monitor, causing it to go offline. This will terminate any connection to it.

RST Logging Sometimes when troubleshooting you may discover that the traffic is ended with a reset (RST) packet, but there is no explanation as to what triggered this reset. You can configure the logging of statistics and the cause of TCP reset packets being sent using this command:

tmsh modify sys db tm.rstcause.log value enable You can then use the following command to view statistics on high level causes:

tmsh show net rst-cause



You can view more detailed information on each RST in the Local Traffic log file through the GUI or using this command (ltm is the name of the Local Traffic log file):

tmsh show sys log ltm

For further information see the article: K13223: Configuring the BIG-IP system to log TCP RST packets.

Persistence Issues Before we start discussing persistence issues, we should explore OneConnect and Pool Member Failures and their effect on persistence.

OneConnect OneConnect shouldn’t override persistence behavior when enabled; persistence takes precedence. If it didn’t, every request would be load balanced independently and persistence simply wouldn’t be possible. You can, of course, still benefit from OneConnect’s connection reuse. However, only connections to the relevant persisted Pool Member are available to be reused by a specific session (rather than all connections to any server).

OneConnect is covered in the Introduction chapter of this book.

Pool Member Failure When a Pool Member is marked down, any persistence entries related to it are removed. Where the Cookie Insert, Passive or Rewrite methods are used, the cookie isn’t changed, which may result in unwanted client-side recovery delays. To avoid this, I’d recommend setting Action on Service Down for the pool to Reject. This will immediately force the client to establish a new connection which will then be load balanced and persisted to an operational Pool Member. This doesn’t avoid the loss of state but minimises the delay experienced by the user before that becomes apparent. If the cookie timeout is not configured as Session, it will be updated accordingly.

Troubleshooting Persistence Issues Troubleshooting a suspected persistence issue is obviously dependent on the method(s) configured. That being the case, we’ll explore the possible causes of issues and the tools we can use to investigate them, on a per method basis. Before we do, let’s quickly cover typical symptoms of an issue and the steps that should always be followed when troubleshooting.

Symptoms With common client/server traffic flows, the symptoms of a persistence issue could include:

▪ Loss of state indicated by the loss of, for example, user preferences, shopping basket contents, user entered detail or site/page position.
▪ Authentication issues or complete loss.
▪ Web pages not loading or unexpected return to the home page.



Other traffic types might suffer from:

▪ Higher than expected load on onward routers, firewalls, proxies or caches, possibly causing a reduction in performance and user experience.
▪ Dropped calls or loss of advanced phone features.
▪ Video streaming failures.

Core Steps Whatever the persistence method, these core troubleshooting steps are always worthwhile:

▪ Confirm if an actual pool member failure has occurred. If this is the case, any connections persisted to it will be load balanced to a different server and likely session state lost. Volatile pool member status will obviously cause extended issues.
▪ Verify the configuration.
▪ Confirm if the load balancer configuration has been changed recently.
▪ Confirm if other infrastructure changes have occurred (a new proxy, NAT changes etc.)
▪ Confirm the persistence profile idle timeout is set to at least the value of the transport protocol profile idle timeout.
▪ Confirm there is no iRule logic or actions interfering with persistence.
▪ Verify (if appropriate) whether valid persistence table entries exist and are not frequently changing (see the command example after this list).
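A quick way to check the last point from the CLI is:

tmsh show ltm persistence persist-records

This lists the current persistence table entries, so you can see whether entries exist for the client and whether they keep changing unexpectedly.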

Keep in mind established connections are not affected by configuration changes (including iRule changes).

Impacting Features You should confirm whether any of the following features are applied and if so, discern if they are having an impact on expected behavior:

▪ Priority Group Activation (detailed in the Load Balancing Modes section of the BIG-IP Administration).
▪ OneConnect (detailed in the Introduction chapter).

Source Address (aka Simple) Persistence Possible troubleshooting targets and questions to ask include:

▪ Is the source address correct, as expected and configured?
▪ Has a proxy been introduced?
▪ Are clients using a different proxy the configuration doesn’t account for?
▪ Are clients being NATted? Has the NAT address changed?



Available tools that can be used to determine the facts:

▪ Use the ip or ipconfig command on a client to confirm the IP address assigned is as expected (this won’t help where a proxy or NAT is involved).
▪ Use tcpdump on the BIG-IP to confirm the IP addresses being seen.

Cookie Persistence Possible troubleshooting targets and questions to ask include:

▪ Is the cookie being inserted by the BIG-IP, or, if using Hash, Passive or Rewrite, is the cookie being inserted by the server?
▪ Is there a proxy or security device stripping the cookie?
▪ Are client browser settings causing the cookie to be ignored?

Available tools that can be used to determine the facts:

▪ Use tcpdump on the BIG-IP to confirm the presence and use of the cookie (a command-line alternative using cURL is shown below).
▪ Use an iRule to confirm the presence and use of the cookie and log appropriately.
▪ Use the client browser built-in developer tools to confirm the cookie is received and sent.
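As a simple sketch (the URL is an example and assumes the default Cookie Insert method, whose cookie names start with BIGipServer):

curl -si http://www.example.com/ | grep -i set-cookie

If no Set-Cookie header containing a BIGipServer cookie is returned on a new session, the BIG-IP is not inserting it or something in the path is stripping it.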

Destination Address Persistence Possible troubleshooting targets and questions to ask include:

▪ Is the destination address correct, as expected and configured?
▪ Has a proxy been introduced?
▪ Are clients using a different proxy the configuration doesn’t account for?
▪ Is the destination address being NATted? Has the NAT address changed?

Available tools that can be used to determine the facts:

▪ Use tcpdump on the BIG-IP to confirm the IP addresses being seen.

Hash Persistence Possible troubleshooting targets and questions to ask include:

▪ If used with HTTP traffic, is OneConnect also configured?
▪ Is the correct iRule assigned?
▪ Does the iRule also specify the persistence method? It should.
▪ Is the specified data present in the right location?
▪ Does the specified data change?

Available tools that can be used to determine the facts:

▪ Use tcpdump on the BIG-IP to confirm the data is present and as expected (and doesn’t change).
▪ If using HTTP, use the client browser built-in developer tools to confirm the data is present and as expected.
▪ Add appropriate logging to the iRule.



Microsoft Remote Desktop Protocol (RDP) Persistence Possible troubleshooting targets and questions to ask include:

▪ Is connection broker load balancing disabled on the real servers? This is only supported with Windows Server 2012.
▪ Are the first nine characters of some usernames identical? See article K9093: MSRDP persistence may persist RDP sessions to the incorrect MSRDP server for further information.
▪ Is the destination server in a non-default route domain? This is not supported.

Available tools that can be used to determine the facts:

▪ Use tcpdump on the BIG-IP to confirm that the first nine characters of all usernames are unique. Examine the mstshash routing token.
▪ Examine the client configuration to confirm that the first nine characters of all usernames are unique.

SIP Persistence Available tools that can be used to determine the facts:

▪ Use tcpdump on the BIG-IP to confirm the session ID is unique and doesn’t change mid-session.
▪ Use a packet capture tool on the client to confirm the session ID is unique and doesn’t change mid-session.

SSL Persistence Possible troubleshooting targets and questions to ask include:

▪ Has renegotiation occurred? If it has, the SSL Session ID won’t be available.

Available tools that can be used to determine the facts:

▪ Use tcpdump on the BIG-IP to confirm the session ID is present, unique and doesn’t change mid-session.
▪ Use a packet capture tool on the client to confirm the session ID is present, unique and doesn’t change mid-session.

Universal Persistence Possible troubleshooting targets and questions to ask include:

▪ If used with HTTP traffic, is OneConnect also configured?
▪ Is the correct iRule assigned?
▪ Is the specified data present in the right location?
▪ Does the specified data change?

Available tools that can be used to determine the facts:

▪ Use tcpdump on the BIG-IP to confirm the data is present and as expected (and doesn’t change).
▪ If using HTTP, use the client browser built-in developer tools to confirm the data is present and as expected.
▪ Add appropriate logging to the iRule.



Chapter Summary

▪ Since the BIG-IP system has masses of ways in which it can be configured, it is essential to know how the application works and communicates. What protocols does it use? Should the virtual server use SSL termination or does the SSL session have to be terminated on the end-server? Does the application require persistence? If so, how should we persist the client connections?

▪ The VLAN and Tunnel Traffic option defines on what VLAN and Tunnel the virtual server should “listen” and take in traffic. Therefore, it is essential that you know where the client will access the virtual server. Otherwise you open up to a lot of security vulnerabilities because the default option is to listen on all VLANs and Tunnels.

▪ In order to provide your clients with the best application, monitoring it is very important. For most applications, you will have multiple pool members in your pool and if one goes down, another one will be available to assist the client.

▪ Telnet is a great tool for quickly verifying if the service is listening, but you could also perform a packet capture if you need more detailed information.

▪ OneConnect shouldn’t override persistence behavior when enabled; persistence takes precedence. If it didn’t, every request would be load balanced independently and persistence simply wouldn’t be possible.

▪ When a Pool Member is marked down, any persistence entries related to it are removed.

▪ Where the Cookie Insert, Passive or Rewrite methods are used, the cookie isn’t changed, which may result in unwanted client-side recovery delays. Avoid this by setting the Action on Service Down for the pool to Reject. This will immediately force the client to establish a new connection which will then be load balanced and persisted to an operational Pool Member.

Chapter Review

1. Is ARP involved in packet processing functions?
a. Yes
b. No

2. Which of the below virtual servers will process traffic destined for: 10.1.33.199:80
a. Specific IP address and specific port - 10.0.33.199:80
b. Specific IP address and all ports - 10.0.33.199:*
c. Network IP address and specific port - 10.0.33.0:443 netmask 255.255.255.0
d. Network IP address and all ports - 10.0.33.0:* netmask 255.255.255.0
e. All networks and specific port - 0.0.0.0:80 netmask 0.0.0.0
f. All networks and all ports - 0.0.0.0:* netmask 0.0.0.0



3. Which of the below virtual servers will process traffic destined for: 10.0.33.150:443
a. Specific IP address and specific port - 10.0.33.199:80
b. Specific IP address and all ports - 10.0.33.199:*
c. Network IP address and specific port - 10.0.33.0:443 netmask 255.255.255.0
d. Network IP address and all ports - 10.0.33.0:* netmask 255.255.255.0
e. All networks and specific port - 0.0.0.0:80 netmask 0.0.0.0
f. All networks and all ports - 0.0.0.0:* netmask 0.0.0.0

4. Which of the below virtual servers will process traffic destined for: 10.0.33.199:443
a. Specific IP address and specific port - 10.0.33.199:80
b. Specific IP address and all ports - 10.0.33.199:*
c. Network IP address and specific port - 10.0.33.0:443 netmask 255.255.255.0
d. Network IP address and all ports - 10.0.33.0:* netmask 255.255.255.0
e. All networks and specific port - 0.0.0.0:80 netmask 0.0.0.0
f. All networks and all ports - 0.0.0.0:* netmask 0.0.0.0

5. What profiles do you need to have assigned to a HTTPS virtual server in order to configure SSL End-to-End Encryption? (Select all that apply)
a. FTP
b. HTTP
c. UDP
d. TCP
e. Client SSL Profile
f. Server SSL Profile
g. DNS

6. What network utility can be used to test services running on UDP?
a. ping
b. telnet
c. tracert
d. nc (netcat)

7. Will persistent connections still be accepted after marking a pool member as Disabled?
a. Yes
b. No



Chapter Review: Answers

1. Is ARP involved in packet processing functions?
a. Yes
b. No

The correct answer is: b
However, without correct ARP operation, traffic will not reach the device at all.

2. Which of the below virtual servers will process traffic destined for: 10.1.33.199:80
a. Specific IP address and specific port - 10.0.33.199:80
b. Specific IP address and all ports - 10.0.33.199:*
c. Network IP address and specific port - 10.0.33.0:443 netmask 255.255.255.0
d. Network IP address and all ports - 10.0.33.0:* netmask 255.255.255.0
e. All networks and specific port - 0.0.0.0:80 netmask 0.0.0.0
f. All networks and all ports - 0.0.0.0:* netmask 0.0.0.0

The correct answer is: e

3. Which of the below virtual servers will process traffic destined for: 10.0.33.150:443
a. Specific IP address and specific port - 10.0.33.199:80
b. Specific IP address and all ports - 10.0.33.199:*
c. Network IP address and specific port - 10.0.33.0:443 netmask 255.255.255.0
d. Network IP address and all ports - 10.0.33.0:* netmask 255.255.255.0
e. All networks and specific port - 0.0.0.0:80 netmask 0.0.0.0
f. All networks and all ports - 0.0.0.0:* netmask 0.0.0.0

The correct answer is: c

4. Which of the below virtual servers will process traffic destined for: 10.0.33.199:443
a. Specific IP address and specific port - 10.0.33.199:80
b. Specific IP address and all ports - 10.0.33.199:*
c. Network IP address and specific port - 10.0.33.0:443 netmask 255.255.255.0
d. Network IP address and all ports - 10.0.33.0:* netmask 255.255.255.0
e. All networks and specific port - 0.0.0.0:80 netmask 0.0.0.0
f. All networks and all ports - 0.0.0.0:* netmask 0.0.0.0

The correct answer is: b



5. What profiles do you need to have assigned to a HTTPS virtual server in order to configure SSL End-to-End Encryption? (Select all that apply)
a. FTP
b. HTTP
c. UDP
d. TCP
e. Client SSL Profile
f. Server SSL Profile
g. DNS

The correct answer is: d, e and f
You first need to assign a protocol profile. Since SSL and HTTP use TCP, you will need to assign a TCP profile. Since it is an HTTPS virtual server, we need to assign a Client SSL profile to the virtual server. This is because the first packet following the TCP 3-Way Handshake will be an SSL Hello (SSL handshake) and in order for the BIG-IP system to understand this it will need an SSL profile. Lastly, since we are configuring End-to-End Encryption, the pool members will be listening for HTTPS traffic, meaning they expect an SSL handshake as well. That is why we need to assign a Server SSL profile too. Just because it is an HTTPS virtual server does not mean we are forced to add an HTTP profile. However, not adding one means the BIG-IP will only process the HTTP traffic up to layer 4, preventing us from reading or modifying the HTTP payload.

6. What network utility can be used to test services running on UDP?
a. ping
b. telnet
c. tracert
d. nc (netcat)

The correct answer is: d
Since telnet is based upon TCP, it cannot be used to test services running on UDP. For that we use Netcat, a network utility used to read and write data across networks from the command line.

7. Will persistent connections still be accepted after marking a pool member as Disabled?
a. Yes
b. No

The correct answer is: a
When you mark a node or pool member as Disabled, Active and Persistent Connections are still accepted.



19. Troubleshooting Performance

It can often be very hard to troubleshoot performance issues and identify their root cause by using tools that are useful in other scenarios. For this reason, packet capture is typically the only option left to help us diagnose an issue. This chapter will go into considerable detail on the most common Linux packet capture tool in use today: tcpdump. We’ll also cover Wireshark, which aids with capture analysis and is available for Linux, Windows and macOS. Lastly, we’ll cover some additional tools that may also help in conjunction with packet capture.

Packet Captures

Why Should We Capture Packets?
So, what does packet capture provide that existing tools, logs and other sources of data cannot? Put simply: all the data we need. Consistently and constantly logging every connection, every packet that passes through the device and other metrics and timings on both sides of the proxy would be unmanageable and a huge waste of resources. Instead, we perform a packet capture which, if well specified, will provide us with all the necessary detail and data for the entire communication flow. Doing this will hopefully allow us to pinpoint the problem, or at least where it occurs, allowing for more focused troubleshooting at that point.

When Should We Capture Packets?
Before you perform a packet capture, it’s important to establish the frequency of a given issue. Is it random or continuous, isolated or widespread, intermittent or repetitive? The answers to these questions will help you to determine when might be a good time for you to perform a packet capture. You’ll have more choice and flexibility if an issue presents itself continuously or repetitively and where it affects the majority of traffic. There are also other factors that will influence your decision, some business related, some not:

▪ Times of low business activity are normally preferable, but useless if there is no activity at all.

▪ Businesses often have specific change windows; limited periods of the day (and perhaps also specific days themselves) when changes and/or high-risk activities, such as enabling packet capture, can occur.

▪ Times of high device resource usage (particularly network throughput and CPU) are clearly best avoided.

▪ Your ability and confidence to limit the scope of the amount of traffic captured, and consequently the potential impact on the device’s ability to perform its primary function.

▪ The value and importance of the services provided through the device.

Where Should We Capture?
Due to the Full Application Proxy architecture, the BIG-IP is often the best place to perform a packet capture; this ensures both the client-side and server-side flows can be captured and examined. You would use the command line tcpdump program to do so, observing and analysing output in real time or writing to a .pcap file and performing your analysis later using Wireshark. You can also capture on a client or server, using either tcpdump (Linux CLI) or Wireshark (Linux desktop or Windows) as necessary, provided it makes sense to do so. An example of when this might be appropriate is if a single host is suffering or suspected. We’ll cover using tcpdump and Wireshark in subsequent sections.



Where possible or preferable, you might also be able to make use of a network tap; a dedicated and transparent device placed and cabled between two (or more) network devices, which duplicates the traffic passing through it. Similarly, you may also have the option of using the port mirroring function directly on a device (either the F5 or a switch). Cisco calls this feature Switched Port Analyzer (SPAN) on its equipment. These options all provide some degree of safety compared to running the capture ‘on the box’ itself. This is because the processing and capture of the packets occurs on the device you attach to the tap, port mirror interface or SPAN port.

What Are We Looking For?
There are a wide range of common, frequently encountered issues and therefore troubleshooting targets to consider where performance is concerned. Keep in mind however that the issues you identify and observe may not be the primary issue; the root cause of the reported problem. Packet loss on a high bandwidth, low latency network connecting client and server is unlikely to have a significant impact on the performance of a HTTP based application. Little that is networking related will, but an overtaxed DNS server that resides on another continent will certainly introduce significant delay in establishing the initial connection. Additionally, the impact of unknown intermediate network and application layer devices should not be underestimated. You may see delays in responses from the server on the client in addition to delays in responses from the client on the server (the F5 in this case), perhaps because there is a web proxy between the two that is performing poorly. Clearly, the more you understand about the network and the path between hosts the better. This isn’t always possible of course. Here’s a list of potential issues that should be relatively easy to identify:

Delays between packets: You observe the client send a SYN, but the F5 doesn’t respond for some considerable time. Alternatively, the F5 responds quickly with a SYN+ACK but the client’s ACK takes a long time to arrive. This may also be seen server side. You may not know why, but a targeted further investigation of the slow host (F5, client or even server) is now possible; perhaps it is resource constrained on CPU, memory or network bandwidth. If a host isn’t responsible, you can investigate the possibility of a high latency link, or a malfunctioning or resource constrained network device, in the network path between the devices where the delay is observed.

TCP failures: A SYN is received, but no return SYN+ACK is sent (perhaps because a packet filter or iRule is silently dropping packets). Or maybe no SYN is received at all (due to an intermediate firewall or incorrect routing on the client or server).

TCP Resets from the F5 to the client; these likely indicate one of the following:

An idle timeout has been exceeded.

A connection limit has been exceeded.

Packet filter denials.

An unmatched connection (more likely if in response to a SYN packet).

No available pool members.



Failover occurring.

A TCP protocol timeout or retry limit being exceeded.

HTTP Protocol or content errors.

An iRule error.

SNAT Port exhaustion.

Slow SSL connection establishment: Possibly one or both of Nagle’s algorithm or delayed ACKs are enabled on the F5. This would also be the case with slow CIFS or Citrix traffic flows.

New SSL connections intermittently failing: this could be due to SSL TPS limits being reached.

Poor HTTP performance (more likely over long distance or high latency links): clients may be using HTTP/1.0 (which establishes a new connection per request).

MTU issues: The 3WHS occurs just fine but data packets from the F5 are resent multiple times before they are acknowledged. If the acknowledged packets don’t have the DF bit set but the failing packets do, this may be an MTU issue.

Fragmented packets from the client: a large number of these suggests an upstream network device has a lower than optimal MTU and is fragmenting, which would normally reduce performance and throughput significantly.

A high number of retransmissions from the F5 to the client which are not acknowledged until the DF flag is not set: This suggests the MTU between the F5 and client is lower than either is aware of. This could be due to the use of tunnelling such as a VPN somewhere in the network path between the two.

Three or more duplicate ACKs: Three or more ACKs with the same sequence number generally indicates packet loss (detected by the sender of the duplicates) and the use of Fast Retransmission (where SACK is in use). Packet loss isn’t unusual but if it’s continuous or occurs at specific times of the day it suggests congestion is occurring and bandwidth is constrained somewhere on a particular path or link.

Frequent duplicate ACKs for the same data: this indicates packet loss elsewhere in the network, likely caused by congestion.

Out of order packets: a high number will normally lead to duplicate ACKs (each for different data) and ultimately complete connection failure, and indicates issues in the wider network. You can identify in-order packets by observing the sequence number of packets as they arrive and confirming each is higher than that of the prior packet. Packets that arrive with a lower sequence number than prior packets are out of order (although retransmissions should be taken into account).

Consistent, regular lack of traffic from the client: may be caused by slow client-side DNS or NetBIOS lookups.

A high number of CRC errors: these indicate issues with a local network interface (or its settings) or the same on a directly connected upstream network device.



Slow return traffic from the F5 or real servers: these may be caused by backend application, database, authentication, DNS lookup and similar delays.

Lower than expected throughput: perhaps related to the F5, client or server congestion control or window size settings.

Expected TCP/IP Behaviours
It’s always a good idea to understand the standard and expected TCP/IP behaviour you would see in normal operation. Without that baseline for comparison, you can’t easily know what is abnormal or unusual.

Using tcpdump
Enough of the whys and wherefores, let’s find out how to perform a packet capture, starting with the venerable and trusty tcpdump. tcpdump is a command-line tool available on Linux and most other Unix-like operating systems, is well over 25 years old and was originally created by Van Jacobson, Craig Leres and Steven McCanne, all of the Lawrence Berkeley National Laboratory, University of California, Berkeley, CA. As a BIG-IP is a Linux based device (without a window manager), tcpdump is what you’ll have to use to capture packets. You can either perform your analysis in real time in a terminal or write the capture to a file, transfer it off the device and then perform your analysis with Wireshark on a host that is capable of running it.

Limitations
Before you even think about performing a capture, keep in mind the following limitations and recommendations when using tcpdump on an F5 device:

Packet capture is considered best effort only (there is no guarantee all packets will be captured) - this isn’t unusual and applies to just about every device and program except where TAPs are concerned.

F5 recommend that you run tcpdump on a VLAN interface in most circumstances. Run tcpdump on a physical interface only when performing basic connectivity, rather than payload or application layer protocol related troubleshooting.

Running tcpdump on a physical interface, rather than a VLAN, is rate-limited to 200 packets per second.



For systems containing an ePVA FPGA chip, tcpdump will not capture virtual server traffic that is fully accelerated by the PVA chip where a FastL4 profile is assigned. You’ll need to temporarily disable acceleration if you want to capture traffic. To check for the presence of a PVA chip on a device, run this command: tmsh show sys hardware | grep -i pva. If one is present, you’ll need to edit the relevant FastL4 profile using this menu path: Local Traffic > Profiles > Protocol > FastL4 > profile_name > PVA Acceleration and select None. Do keep in mind that disabling PVA acceleration may affect the performance of the device if it is under high load.
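As a hedged command-line equivalent (the profile name my_fastl4 is hypothetical and the attribute name may differ between versions), the check and the change might look like this:

tmsh show sys hardware | grep -i pva
tmsh modify ltm profile fastl4 my_fastl4 pva-acceleration none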

You cannot capture traffic if you run tcpdump in a non-default route domain. To capture traffic in a non-default route domain, F5 recommends that you run the tcpdump command from the default route domain (route domain 0) and specify interface 0.0. If you don’t want to do this you must specify the partition name before the VLAN name like so: /partition_name/vlan_name.
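For example (the partition, VLAN and host address are hypothetical), a capture against a VLAN that lives in a non-default partition would be specified like this:

tcpdump -i /Customer_A/vlan_internal host 10.99.0.50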

Usage Syntax
tcpdump syntax is relatively simple; you specify the command itself, an interface, any optional parameters you require and an optional expression to limit the traffic. Here’s how that looks:

tcpdump -i interface [parameter(s)] [expression]

It’s dangerous not to specify an expression to filter what is captured. System stability may be seriously impacted if a great deal of traffic is captured and you may find you are unable to stop the capture and the system fails. In any case, we would highly recommend using the -c parameter with a small value (e.g. 200) to test that your command syntax doesn’t capture too much. Unless a parameter requires a value, it can be combined with other parameters, thus this:

tcpdump -i vlan12 -c 100 -v -n -e

Can be compressed into this:

tcpdump -i vlan12 -c 100 -vne

All of these elements will be covered in more detail soon. A capture can be stopped with the keyboard shortcut [Ctrl]+C.

Specifying an Interface First, we must specify an interface with the -i parameter and the name of a physical interface, a trunk, a VLAN, a VLAN group or tunnel:

tcpdump -i interface ...



An interface argument of any or 0.0 is supported; this captures packets from all interfaces. It’s preferable to specify an interface to limit the scope of what is captured. Even when using an expression to filter what is captured, system stability may be seriously impacted if a great deal of traffic is captured and you may find you are unable to stop the capture and the system fails. In any case, we would highly recommend using the -c parameter with a small value (say 200) to test your command syntax doesn’t capture too much.

You do not need to specify the interface if you wish to capture traffic on the lowest numbered, configured interface on the system (often eth0 - the management interface.) Loopback interfaces are ignored. Use the ip link or the (deprecated) ifconfig command to display information on the interfaces available on the system.
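As a quick, hedged illustration (the interface name and port exclusion are just examples), you could first list the system’s interfaces and then run a small, capped test capture on the management interface, excluding your own SSH session so the output doesn’t feed back on itself:

ip link
tcpdump -i eth0 -c 50 not port 22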

Capturing Additional TMM Information You can capture additional TMM related information by adding a colon and noise amplitude suffix to the interface specification like so:

tcpdump -i vlan12:n

The noise amplitude consists of up to three n characters, with one providing the least noise and three the most. The additional information added is as follows:

n - Ingress, Slot, TMM and VIP

nn - Flow ID, Peer ID, Reset Cause, Connflow Flags, Flow Type, High Availability Unit, Ingress Slot and Ingress Port

nnn - Peer IP Protocol, Peer VLAN, Peer Remote Address, Peer Local Address, Peer Remote Port and Peer Local Port

Take a look at this F5 Knowledge article for further details on how to interpret this information: K13637: Capturing internal TMM information with tcpdump. F5 have also released a very handy Wireshark plugin that decodes and presents this additional information in a useful, human-readable format. See the Wireshark section for more information.
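As a hedged example (the host address is hypothetical), a capture of 100 packets at the highest noise level across all interfaces might look like this:

tcpdump -i 0.0:nnn -c 100 host 10.1.1.50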

Default Output Unless you are writing the capture to a file, tcpdump will display packets on standard output; in other words, in your terminal, shell or console.



Writing to a File If you plan on using Wireshark to perform your analysis or you need to pass the capture to some other party, you’ll need to write captured packets to a file. Use the -w parameter and specify a file name and optionally a path to the file.

tcpdump -i interface -w [dir/]file_name ...

There’s no need to specify a file extension but we’d recommend you use .pcap so it’s immediately obvious what type of file this is when browsing the file system. It may also make your life easier if you copy the file elsewhere for further analysis. We would highly recommend you write capture files to the /tmp directory and then move them to an alternative location if you need to keep them. Files in this directory are removed on boot, which provides some automatic housekeeping should you forget to remove old files yourself. Also, should you find you accidentally exhaust the host’s disk space with the capture, the problem is resolved on reboot. On the downside, should the system unexpectedly reboot for any reason, you’ve lost your capture file. That, or you may forget to move a file you want before performing a planned reboot. When writing to a file you may exhaust the host’s disk space if a great deal of traffic is being captured. To avoid this issue ensure you do one or more of the following (a brief example follows this list):

▪ Test your capture first, without saving to a file, and ensure the expression used is specific enough that an excessive amount of traffic is not being captured

▪ Monitor the size of the file

▪ Use the -c parameter to restrict the capture to a specific number of packets, as detailed shortly
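A minimal sketch of the housekeeping suggested above (the file name and path are just examples); check the capture file’s size and the free space on the filesystem it lives on while the capture runs:

ls -lh /tmp/capture.pcap
df -h /tmp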

You should note the following for real world usage (not the exam):

The file format used is libpcap

If you specify the name of an existing file, it will be overwritten without warning!

If two or more instances of tcpdump specify the same output file, only the output of the last instance started will be recorded to the file

You can suffix your file name with: `date +%Y_%m_%d` (the back tick ` is found just under the [Esc] key) to ensure it’s appended with the current date in the format YYYY_MM_DD. Here’s an example demonstrating all these recommendations:

tcpdump -i interface -w /var/tmp/file_name`date +%Y_%m_%d`.pcap

Restricting the Number of Packets Captured To restrict the number of packets captured use the following syntax:

tcpdump -i interface -c nn ...

Using this parameter is particularly sensible to avoid issues when:



▪ You expect a great deal of output (so much you may be unable to stop the capture)
▪ You are writing the capture to a file and want to ensure you do not exhaust the host’s disk space
▪ You are running an unattended capture
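For instance (the VLAN name, host address and file name are hypothetical), a capped test of your syntax before committing to a longer capture might look like this:

tcpdump -i vlan12 -c 200 -w /var/tmp/test_capture.pcap host 10.1.1.50 and port 443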

Quick Mode Use quick mode to display only each packet’s time, source address and port, destination address and port, protocol (tcp/udp,) data (not packet) length and whether the Don’t Fragment (DF) bit is set or not, as follows;

tcpdump -i interface -q ... This parameter is very good for ensuring all data for a packet displays on a single line of output, as shown below;

14:04:10.381763 10.68.5.122.10050 > 10.68.5.9.49702: tcp 0 (DF) Here’s what you would get without quick mode;

14:04:17.370776 10.68.5.122.10050 > 10.68.5.9.49761: S 3293224573:3293224573(0) ack 1427800123 win 16384 <mss 1460,nop,wscale 0,nop,nop,timestamp 0 0,nop,nop,sackOK>

Verbose Mode
The opposite of quick mode, specified as follows:

tcpdump -i interface -v ... tcpdump will display additional fields including flags, TTL and packet length, as this example output shows;

14:05:04.395870 10.68.5.122.10050 > 10.68.5.9.50187: P 1449:1700(251) ack 23 win 65513 <nop,nop,timestamp 5953631 522357663> (DF) (ttl 128, id 7979, len 303)

-vv - will display additional protocol and application specific fields.
-vvv - will display even more protocol and application specific fields.

Capturing Link Level (Layer 2 – Data Link) Headers Specified with this parameter;

tcpdump -i interface -e ... tcpdump will display link level (layer 2) information not displayed by default, such as source and destination MAC addresses, layer 3 protocol and frame size. Below are two example captures, the first without this option specified, the second with;



tcpdump -i vlan2 host 10.68.5.9 and icmp

12:39:08.589829 10.68.5.9 > 10.68.5.121: icmp: echo request (DF)
12:39:08.590352 10.68.5.121 > 10.68.5.9: icmp: echo reply (DF)

tcpdump -i vlan2 -e host 10.68.5.9 and icmp

12:38:53.660102 0:1:d7:57:3:c8 0:21:5a:45:57:42 ip 54: 10.68.5.9 > 10.68.5.121: icmp: echo request (DF)
12:38:53.660629 0:21:5a:45:57:42 0:1:d7:57:3:c8 ip 60: 10.68.5.121 > 10.68.5.9: icmp: echo reply (DF)

Capturing Packet Contents – Format
If you’re not capturing to a file, you can display the content of each packet’s payload, which can be highly useful with text-based application layer protocols such as HTTP. Of course, a tool like Wireshark makes such analysis easier to interpret.

-x - will display packet content in Hex - I’ve no idea why this would be useful.
-X - will display packet content in both Hex and (hopefully human readable) ASCII.

Below are two example captures, the first with -x specified, the second with -X:

tcpdump -i vlan28 -x -s0 host 104.20.24.212 and port 80

21:10:22.388159 IP test.york.com.45734 > 104.20.24.212.http: Flags [P.], seq 1184:1774, ack 1446, win 307, options [nop,nop,TS val 2443065 ecr 4551680], length 590: HTTP: GET /Design/graphics/icon/rss.svg HTTP/1.1
0x0000: 4500 0282 eb00 4000 4006 b5df 0a0b 0ca3
0x0010: 6814 18d4 b2a6 0050 dc71 36f8 9de0 4cf5
0x0020: 8018 0133 305d 0000 0101 080a 0025 4739
0x0030: 0045 7400 4745 5420 2f44 6573 6967 6e2f
0x0040: 6772 6170 6869 6373 2f69 636f 6e2f 7273
0x0050: 732e 7376 6720 4854 5450 2f31 2e31 0d0a
0x0060: 486f 7374 3a20 7777 772e 7468 6572 6567
0x0070: 6973 7465 722e 636f 2e75 6b0d 0a43 6f6e
0x0080: 6e65 6374 696f 6e3a 206b 6565 702d 616c
0x0090: 6976 650d 0a43 6163 6865 2d43 6f6e 7472
0x00a0: 6f6c 3a20 6d61 782d 6167 653d 300d 0a41
0x00b0: 6363 6570 743a 2069 6d61 6765 2f77 6562



tcpdump -i vlan28 -X -s0 host 104.20.24.212 and port 80

21:11:50.449491 IP test.york.com.46307 > 104.20.24.212.http: Flags [P.], seq 1184:1774, ack 1446, win 307, options [nop,nop,TS val 2443065 ecr 4551680], length 590: HTTP: GET /Design/graphics/icon/rss.svg HTTP/1.1
0x0000: 4500 0282 eb00 4000 4006 b5df 0a0b 0ca3 E.....@.@.......
0x0010: 6814 18d4 b2a6 0050 dc71 36f8 9de0 4cf5 h......P.q6...L.
0x0020: 8018 0133 305d 0000 0101 080a 0025 4739 ...30].......%G9
0x0030: 0045 7400 4745 5420 2f44 6573 6967 6e2f .Et.GET./Design/
0x0040: 6772 6170 6869 6373 2f69 636f 6e2f 7273 graphics/icon/rs
0x0050: 732e 7376 6720 4854 5450 2f31 2e31 0d0a s.svg.HTTP/1.1..
0x0060: 486f 7374 3a20 7777 772e 7468 6572 6567 Host:.www.thereg
0x0070: 6973 7465 722e 636f 2e75 6b0d 0a43 6f6e ister.co.uk..Con
0x0080: 6e65 6374 696f 6e3a 206b 6565 702d 616c nection:.keep-al
0x0090: 6976 650d 0a43 6163 6865 2d43 6f6e 7472 ive..Cache-Contr
0x00a0: 6f6c 3a20 6d61 782d 6167 653d 300d 0a41 ol:.max-age=0..A
0x00b0: 6363 6570 743a 2069 6d61 6765 2f77 6562 ccept:.image/web

Both of these parameters capture and display only the first 68 Bytes of each IPv4 packet (96B for IPv6) by default unless the -s parameter is used, as is the case here. See the next section for more information on this parameter. This option is not necessary if you are writing the capture to a file; it only applies when using tcpdump to display packets in real time or from a capture file.

Capturing Packet Contents – How Much? tcpdump will capture the number of Bytes of each packet specified with the -s snapshot length parameter (the default is 68 for IPv4 packets and 96 for IPv6 packets):

tcpdump -i interface -s bytes ... Use -s0 to capture the entirety of every packet, regardless of size.
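A hedged example (the VLAN, host and file name are hypothetical) capturing full packets to a file for later analysis in Wireshark:

tcpdump -i vlan12 -s0 -w /var/tmp/full_packets.pcap host 10.1.1.50 and port 80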

Disabling DNS Lookups When the -n parameter is used, tcpdump will not translate host addresses to host names; thus disabling DNS lookup of host IP addresses.

tcpdump -i interface -n ...

Not using this option could potentially result in a huge amount of DNS requests, creating unnecessary load on your DNS servers and the BIG-IP system.

Also Disabling Service Name Lookups Using a second n parameter prevents tcpdump translating port and protocol numbers to service names, (port 80 to http for example,) as well as preventing the translation of host addresses to host names.

579 579


tcpdump -i interface -nn

Reading from a File
To read from a capture file that was previously written using the -w parameter:

tcpdump -r [dir/]file_name

tcpdump will display the entire contents of the file, without pause, so you may want to make use of the commands more or less to control and ‘browse’ the output in an orderly way:

tcpdump -r [dir/]file_name | more

tcpdump Expressions tcpdump expressions (often called filters in F5 documentation) are used to limit what is actually captured (and displayed or written to file). This is clearly advantageous to ensure we can limit the scope of our capture to traffic that we actually want to observe. There are few situations where it is helpful to capture everything. In addition, we can also use commands such as grep to further filter the output. Doing this is sometimes easier than constructing complex expressions. If you’d like to know even more about expressions, further information is always available via the man pcap-filter command, on Linux hosts at least. It’s normally best to construct your expressions to match and therefore capture traffic to and from the host address furthest away from the device where you’re running tcpdump. Doing so should reduce the ‘background noise’ from hosts closest to the device, such as management, monitoring and other traffic that you most likely don’t want to see. An example of this would be running a capture on an F5 device in an attempt to observe HTTPS traffic to a virtual server listening on IP address 10.11.12.99 from a remote host with address 100.111.222.50. You could use this syntax and expression:

tcpdump -i any host 10.11.12.99 This would match your desired traffic, but also any other traffic passing through the virtual server (or others with the same address). This next expression would be far more specific and match packets to or from the remote host only:

tcpdump -i any host 100.111.222.50



Better yet, let’s specify the port we’re interested in (HTTPS: 443):

tcpdump -i any host 100.111.222.50 and port 443 We might still capture unwanted traffic if the remote host is also connecting to other virtual servers on the device using port 443. We can further refine our expression to avoid this using various mechanisms explained in the following sections and we’ll return to and refine this example right at the end.

Logical Operators
Valid logical operators you can use in expressions are:

▪ and (&&)
▪ or (||)
▪ not (!)

In the examples below you’ll note only and, or and not are used. This is because the character alternatives such as && and || are often interpreted as control or special characters by shells such as bash. These can be escaped with a back slash \ but it’s far easier to understand when using natural language. Traffic to or from host 100.111.222.50, where the source or destination port is 443. In other words, the sending or receiving host must be 100.111.222.50 and the source or destination port used by at least one of the two hosts must be 443:

host 100.111.222.50 and port 443 Just to be very clear, these flows would be matched by this expression:

100.111.222.50:56844 > 225.11.85.12:443
225.11.85.12:34991 > 100.111.222.50:443
100.111.222.50:443 > 225.11.85.12:44690

Traffic to or from either 100.111.222.50 or 100.11.12.99 (which would include traffic sent between the two):

host 100.111.222.50 or host 100.11.12.99 All traffic except anything to or from host 100.111.222.50:

not host 100.111.222.50 Traffic to or from any host on the 100.111.222.0/24 network except anything to or from host 100.111.222.50:

net 100.111.222.0/24 and not host 100.111.222.50



It’s slightly counter-intuitive but using logical or usually means the expression is broader and you’ll capture more, while using logical and makes it narrower so you capture less. This expression will capture traffic (if it’s present) to both 100.111.222.50 and 100.11.12.99 from any source and from both those addresses to any destination:

host 100.111.222.50 or host 100.11.12.99 Whereas this expression will capture only traffic between 100.111.222.50 and 100.11.12.99:

host 100.111.222.50 and host 100.11.12.99 Grouping As shown in that last example, where necessary you can combine or group expressions using multiple logical operators. In the following example, traffic to or from either 100.111.222.50 or 100.11.12.99 will be captured only where the source or destination port is 443:

host 100.111.222.50 or host 100.11.12.99 and port 443 You can also modify the precedence of expressions using round brackets (parentheses). Using the previous example with some brackets changes considerably what is captured. Traffic to or from 100.111.222.50 on any port or traffic to or from 100.11.12.99 where the source or destination port is 443 will match:

'host 100.111.222.50 or (host 100.11.12.99 and port 443)' If you do use parentheses you must enclose the entire expression or at least the parts where they are used in ‘single’ or “double quotes”. You can also escape any brackets or other special characters (such as & or |) with a back slash \ but this is cumbersome. Unless you’re entering a very simple expression it’s a good idea to use quotes around the entire expression as a matter of course so you don’t have to consider whether brackets or escaping might be needed. You’ll now find a series of examples covering most common requirements.



Single Host One Way Traffic from 1.1.1.1 only:

src 1.1.1.1 Don’t forget the source could be the local host where you are running the capture, outbound traffic or inbound traffic, depending on where you are doing the capture. Traffic to 1.1.1.1 only:

dst 1.1.1.1 You might have noticed by now that we have avoided using received from or sent to, only from or to. This section, or rather the ability to specify a source or destination is the reason why. The language doesn’t work at this point as ‘received from’ doesn’t make sense if the device the capture is performed on is the source of a packet. To a lesser extent ‘sent to’ if the device is the destination. This is particularly relevant if you’re performing the capture on a load balancer, router or other device which forwards traffic. Two Way Traffic to or from 1.1.1.1 only:

host 1.1.1.1 Multiple Hosts One Way Traffic from 1.1.1.1 or from 1.1.1.2:

src 1.1.1.1 or 1.1.1.2 That could be written in its longer form (with the src direction qualifier used twice) like this for clarity:

src 1.1.1.1 or src 1.1.1.2 Traffic to 1.1.1.1 or to 1.1.1.2:

dst 1.1.1.1 or 1.1.1.2



The following filter is invalid as a single packet can only have one source address:

src 1.1.1.1 and 1.1.1.2
tcpdump: expression rejects all packets

Two Way
Traffic to or from 1.1.1.1 or 1.1.1.2:

host 1.1.1.1 or host 1.1.1.2

The second host type qualifier is unnecessary as it has been specified for the first address and a different type or direction qualifier isn’t used.

Traffic between 1.1.1.1 and 1.1.1.2:

host 1.1.1.1 and 1.1.1.2 The following filter is invalid as there can only be two addresses (one source and one destination) in an IP header (even if multicast means one host can send data to many at once):

host 1.1.1.1 and 1.1.1.2 and 1.1.1.3
tcpdump: expression rejects all packets

Combining Operators
Traffic from 1.1.1.1 destined to 1.1.1.2 or 1.1.1.3 only where the source or destination port is 80:

'src 1.1.1.1 and (dst 1.1.1.2 or host 1.1.1.3) and port 80' The host qualifier is unnecessary here as the 1.1.1.3 address can only be a destination because a source has already been specified and there can only be one source. Traffic from 1.1.1.1 destined to 1.1.1.2 or to 1.1.1.3 only where the source or destination port is not 80:

'src 1.1.1.1 and (dst 1.1.1.2 or 1.1.1.3) and not port 80'



Single Network Usefully network specifications can be shortened based on the prefix; for instance, 192.168.0.0/24 can be specified as 192.168.0/24. However, regardless of which prefix you specify, you must provide the full network address. For example, 192.168/24 will not capture traffic to or from 192.168.0.0/24, neither will a mask of /16 be assumed as you might perhaps expect. In any case, no error will be raised and the expression is accepted. One Way Traffic from hosts with addresses in the 1.1.1.0/24 network only:

src net 1.1.1.0/24 Traffic to hosts with addresses in the 1.1.1.0/24 network only:

dst net 1.1.1.0/24 Two Way Traffic to or from hosts with addresses in the 1.1.1.0/24 network only:

net 1.1.1.0/24 Using the following invalid network mask will give this result:

net 1.1.2.128/24 tcpdump: non-network bits set in "1.1.2.128/24" As the mask used denotes a host address, this shouldn’t work perhaps:

net 1.1.1.0/24 and 1.1.2.128/32 Multiple Networks As noted previously, there’s no need to repeat qualifiers (src and net in this case) where they are the same. One Way Traffic from network 1.1.1.0/24 or from network 2.2.2.0/24:

src net 1.1.1.0/24 or 2.2.2.0/24 Traffic to network 1.1.1.0/24 or to network 2.2.2.0/24:

dst net 1.1.1.0/24 or 2.2.2.0/24



Traffic to network 1.1.1.0/24 or to network 2.2.2.0/24 only where the source or destination port is 80:

dst net 1.1.1.0/24 or 2.2.2.0/24 and port 80 These flows would be matched by this expression:

100.111.222.50:56844 > 1.1.1.99:80
225.11.85.12:34991 > 2.2.2.100:80
1.1.1.99:80 > 2.2.2.100:80

These would not be (note the third octet of the second flow):

100.111.222.50:8080 > 1.1.1.99:34991
1.1.1.99:80 > 2.2.3.100:80

Two Way
Traffic to or from network 1.1.1.0/24 or to or from network 2.2.2.0/24:

net 1.1.1.0/24 or 2.2.2.0/24 Traffic to or from network 1.1.1.0/24 or to or from network 2.2.2.0/24 only where the source or destination port is 80:

net 1.1.1.0/24 or net 2.2.2.0/24 and port 80 These flows would be matched by this expression:

2.2.2.50:56844 > 1.1.1.99:80
225.11.85.12:34991 > 2.2.2.100:80
225.11.85.12:80 > 2.2.2.100:34991
1.1.1.99:80 > 2.2.2.100:34991

These would not be:

2.2.2.50:56844 > 1.1.1.99:81
225.11.85.12:81 > 1.1.1.100:58080

Combining Operators
Traffic to or from network 1.1.1.0/24 only where the source or destination port is 80, or traffic to or from network 2.2.2.0/24 (whatever the port):

'(net 1.1.1.0/24 and port 80) or net 2.2.2.0/24'



The second net type qualifier is necessary as we’ve used a different qualifier (port) after the first net. These flows would be matched by this expression:

1.1.1.50:56844 > 100.111.222.50:80
225.11.85.12:80 > 1.1.1.100:80
100.111.222.50:54011 > 2.2.2.100:34991
1.1.1.99:80 > 2.2.2.100:34991

These would not be:

100.111.222.50:54011 > 2.2.3.100:80
1.1.5.55:80 > 100.111.222.10:34991

Traffic from network 1.1.0.0/24 to either the 1.1.2.0/24 or 1.1.3.0/24 network:

'src net 1.1.0/24 and (dst net 1.1.2/24 or net 1.1.3/24)'

The dst direction qualifier is unnecessary.

Traffic from either the 1.1.2.0/24 or 1.1.3.0/24 network to the 1.15.0.0/24 network:

'(src net 1.1.2/24 or net 1.1.3/24) and net 1.15.0/24'

Specific Protocol, Port(s) & Direction
As you’ve already seen, we can limit captured traffic to a specific port number, but this can be for any protocol in any direction. We can go further and specify these too, as shown in the following examples:

Traffic to or from network 1.1.1.0/24 or to or from network 2.2.2.0/24 only where the source or destination TCP port is 80:

net 1.1.1.0/24 or net 2.2.2.0/24 and tcp port 80 Traffic to or from network 1.1.1.0/24 or to or from network 2.2.2.0/24 only where the source UDP port is 53:

net 1.1.1.0/24 or net 2.2.2.0/24 and udp src port 53

ARP is covered shortly. SCTP also appears to be a supported protocol.



Combining Operators Traffic from host 1.1.1.1 to any destination only where the destination port is either 80 or 443:

src host 1.1.1.1 and dst port 80 or 443 This looks similar but is very different. Traffic from host 1.1.1.1 to any destination only where the destination port is 80 or any traffic where the source or destination port used is 443:

'(src host 1.1.1.1 and dst port 80) or port 443' Traffic from host 1.1.1.1 only where the source port is 5000, to any destination only where the TCP port is 80:

'(src host 1.1.1.1 and src port 5000) and tcp dst port 80' We can shorten this like so:

'(src host 1.1.1.1 and port 5000) and tcp port 80' Port ranges can also be specified:

src host 1.1.1.1 and tcp dst portrange 1024-65535

Address Resolution Protocol (ARP)
To capture Address Resolution Protocol (ARP) packets only, simply specify the protocol arp:

host 1.1.1.1 and arp To capture ARP traffic in a specific direction, you must specify the direction along with the host qualifier, not the protocol:

src host 1.1.1.1 and arp This won’t work:

host 1.1.1.1 and src arp tcpdump: syntax error in filter expression: syntax error



ICMP To capture Internet Control Message Protocol (ICMP) frames only simply specify the protocol icmp:

host 1.1.1.1 and icmp As with ARP, to capture ICMP traffic in a specific direction, you must specify the direction along with the host qualifier, not the protocol:

src host 1.1.1.1 and icmp

dst host 1.1.1.1 and not icmp To capture ICMP and ARP:

'src host 1.1.1.1 and (arp or icmp)'

Refining That First Example Further
Remember our first example of running a capture on an F5 device with a virtual server listening on IP address 10.11.12.99? We wanted to observe HTTPS connection traffic to and from a remote host with address 100.111.222.50. This filter will capture the required traffic:

host 100.111.222.50 and port 443

As noted, we might still capture unwanted traffic if the remote host is also connecting to other virtual servers on port 443, on the same device where we’re running the capture. To avoid this we could use some brackets, so that we only match traffic to or from our virtual server address 10.11.12.99 if the source or destination port (in relation to only that address) is 443 and that traffic is to or from address 100.111.222.50:

'(host 10.11.12.99 and port 443) and host 100.111.222.50' These would match this expression:

10.11.12.99:443 > 100.111.222.50:42817
100.111.222.50:42817 > 10.11.12.99:443



These would not:

10.11.12.99:443 > 200.35.10.187:36925
200.35.10.187:36925 > 10.11.12.99:443
100.111.222.50:42817 > 10.11.12.99:444
100.111.222.50:443 > 10.11.12.99:444

This is probably good enough in most cases; however, we can go much further if necessary. We can specify TCP only to start with:

'(host 10.11.12.99 and tcp port 443) and host 100.111.222.50' We can then finally be extremely specific about both directions of the traffic flow:

'((src host 10.11.12.99 and tcp port 443) and dst host 100.111.222.50) or (src host 100.111.222.50) and (dst host 10.11.12.99 and tcp port 443)' We could also specify a source or destination for the port but this would be unnecessary.

A Common Example You’ll see the following example frequently used in F5 documentation so we thought it would be useful to explain it in great detail to reinforce what we’ve covered so far:

tcpdump -nn -s0 -i 0.0:nnn -w /shared/tmp/Cxxxx_tcpdump_$(date +%d_%b_%H_%M_%S)_$HOSTNAME.pcap host x.x.x.x or host y.y.y.y or host z.z.z.z or arp or icmp

The parameters used are as follows:

Both host name and service name resolution are suppressed by the -nn parameter; you’ll see IP addresses and port numbers only. A single n only suppresses host name resolution.

The -s0 specifies that the full contents of each packet are captured, not just the default 68 (IPv4) or 96 (IPv6) bytes.

An interface is specified with -i, with 0.0 used here. This denotes all available interfaces. You could also specify a VLAN or physical interface name.

The :nnn suffix specifies the output of additional internal TMM information at a high detail level. Low and medium detail levels are specified with :n and :nn respectively.

Writing the capture to a file is specified with the -w parameter. A filename is obviously required following this but the directory name is optional. If not specified, the current directory is used. The .pcap extension isn’t required but is useful, especially if you’ll be examining the file on a Windows host.



The $(date +%d_%b_%H_%M_%S) in the file name specification represents the output of a command which is populated (or evaluated) when the overall tcpdump command is run. In this case it’s the date +%d_%b_%H_%M_%S command, which outputs something like this: 09_Aug_21_04_16.

The $HOSTNAME in the file name specification represents the value of an environment variable and should evaluate to the host name of the device. This environment variable is always available and populated with the appropriate value on an F5 device (or any Linux host for that matter).

The fully evaluated path and file name might look something like this:

/shared/tmp/Cxxxx_tcpdump_09_Aug_21_04_16_f5bigip.pcap The expression consists of; ▪

The IP address of a VIP: x.x.x.x and the IP addresses of two members of the pool assigned to that VIP: y.y.y.y & z.z.z.z.

As the or operator is used, all traffic to or from any of these hosts (on any port, using any protocol) is captured, not just traffic between them. The second and third host qualifiers are not actually required.

Protocols icmp and arp are also specified to provide a wider view of low level network diagnostic information. This may, for instance, highlight use of a duplicate IP address (multiple responses to an ARP request) or errors such as ICMP unreachable messages.
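To make this concrete, here is the same command populated with hypothetical values only (the VIP 10.11.12.99 from the earlier example and two made-up pool member addresses; the Cxxxx case reference is left as a placeholder):

tcpdump -nn -s0 -i 0.0:nnn -w /shared/tmp/Cxxxx_tcpdump_$(date +%d_%b_%H_%M_%S)_$HOSTNAME.pcap host 10.11.12.99 or host 10.20.0.11 or host 10.20.0.12 or arp or icmp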

tcpdump Output

Generic TCP
Here’s a line of output related to an SSH session. Note the -v parameter has been used; without it, the IP header information and some of the TCP information is not displayed.

22:24:18.910372 IP (tos 0x10, ttl 64, id 9792, offset 0, flags [DF], proto TCP (6), length 88) 78.47.105.76.22 > 82.132.219.219.55495: Flags [P.], cksum 0xcb29 (correct), seq 497880562:497880610(48), ack 1593322765, win 379, length 48

There’s a lot of information there so let’s break down the components. The bits and octets listed relate to the size and position of the relevant fields in the IP header.

▪ 22:24:18.910372 - the datagram’s timestamp

▪ IP (tos 0x10, ttl 64, id 9792, offset 0, flags [DF], proto TCP (6), length 88) - the layer three datagram’s header fields and values:
  o tos 0x10 - the IP TOS value (more correctly in the present context, the DS and ECN fields) (8 bits, 2nd octet)
  o ttl 64 - the IP time to live (TTL) value (8 bits, 9th octet)
  o id 9792 - mostly used for identifying the parts of a fragmented datagram; incremented by one with every packet sent (16 bits, 5th and 6th octets)
  o offset 0 - the fragment offset, used with fragmented packets (13 bits of the 7th and 8th octets)
  o flags [DF] - any IP flags set, such as [DF] for Don’t Fragment and [MF] for More Fragments (3 bits of the 7th octet)
  o proto TCP (6) - the higher layer (four) protocol and its number (8 bits, 10th octet)
  o length 88 - the IP packet length in bytes, including all headers (16 bits, 3rd and 4th octets)

▪ 78.47.105.76.22 - the source IP address and port

▪ 82.132.219.219.55495 - the destination IP address and port

▪ Flags [P.] - any TCP flags; a period . indicates an ACK

▪ cksum 0xcb29 (correct) - the packet’s TCP checksum value. You should note on physical appliances that when running tcpdump on a VLAN (including when using 0.0) checksum calculations are performed in software, whereas they are usually performed in hardware. If you suspect a packet corruption issue caused by the BIG-IP, see article K10319: Using the tcpdump utility disables hardware checksum offloading for more information.

▪ seq 497880562:497880610(48) - the TCP packet’s starting and ending sequence numbers; the value in brackets indicates the difference and thus the amount of data carried (in Bytes); this should match the TCP length field

▪ ack 1593322765 - the TCP packet’s acknowledgement number

▪ win 379 - the source host’s TCP window

▪ length 48 - the TCP packet length (in bytes) not including the headers - in other words, the payload or data’s length. This means the IP and TCP headers combined were 40 Bytes long.

So you can see the difference and some more fields, here’s a SYN packet - note the extra options in this one (some only seen because it’s a SYN packet) and the length of 0 as no data can be exchanged yet (TCP Fast Open isn’t in use).

10:26:37.855362 IP (tos 0x0, ttl 64, id 52766, offset 0, flags [DF], proto TCP (6), length 60) 172.20.16.98.53726 > 82.132.219.219.https: Flags [S], cksum 0x694e (correct), seq 2720785584, win 29200, options [mss 1460,sackOK,TS val 18361006 ecr 0,nop,wscale 7], length 0

Let’s take a look at those extra options:



▪ mss 1460 - the maximum segment size (MSS) the host sending the packet supports - only seen in a SYN packet (see RFC 793) - this is TCP option kind 2

▪ sackOK - indicates the host permits the use of Selective Acknowledgement - only seen in a SYN packet (see RFC 2018) - this is TCP option kind 4

▪ TS val 18361006 - the sending host’s timestamp (see RFC 7323) - this is TCP option kind 8

▪ ecr 0 - the echo reply timestamp value (the most recent timestamp this host has received from the other end); it’s 0 because this is a SYN packet (see RFC 7323) - this is TCP option kind 8

▪ nop - used to align option headers to 32-bit word boundaries by padding 1 byte with 00000001; may be used more than once if necessary (see RFC 793) - this is TCP option kind 1

▪ wscale 7 - window scale and value, only seen in a SYN packet (see RFC 7323) - this is TCP option kind 3

One more not shown in the example which you might also see:

▪ eol - the ‘end of option list’ option used to indicate the end of all options if necessary; 1 byte of all zeros (see RFC 793) - this is TCP option kind 0

Generic UDP It’s odd how often people forget they even use UDP but we all do for voice, video, DNS, DHCP, NTP, VXLAN and the like. Here’s some output related to a DNS query (without using -v). As you can see, without it the IP header information and the UDP information is not displayed;

22:54:40.769351 IP 78.47.105.76.6891 > 213.133.100.100.53: 28642+ AAAA? vps.allenz.eu. (31)

Here’s the response with -v used:

22:47:08.352707 IP (tos 0x0, ttl 60, id 1457, offset 0, flags [none], proto UDP (17), length 72) 213.133.99.99.53 > 78.47.105.76.16165: [udp sum ok] 11711 ServFail q: A? 40.1.255.158.bl.tiopan.com. 0/0/0 (44)



Let’s break down the components once again:

▪ 22:47:08.352707 – the datagram’s timestamp

▪ IP (tos 0x0, ttl 60, id 1457, offset 0, flags [none], proto UDP (17), length 72) – the layer three datagram’s header fields and values:
  o tos 0x0 – the IP TOS value (more correctly in the present context, the DS and ECN fields) (8 bits, 2nd octet)
  o ttl 60 – the IP TTL value (8 bits, 9th octet)
  o id 1457 – mostly used for identifying the parts of a fragmented datagram; incremented by one with every packet sent (16 bits, 5th and 6th octets)
  o offset 0 – the fragment offset, used with fragmented packets (13 bits of the 7th and 8th octets)
  o flags [none] – any IP flags set, such as [DF] for Don’t Fragment and [MF] for More Fragments (3 bits of the 7th octet)
  o proto UDP (17) – the higher layer (four) protocol and its number (8 bits, 10th octet)
  o length 72 – the IP packet length in bytes, including all headers (16 bits, 3rd and 4th octets)

▪ 213.133.99.99.53 – the source IP address and port

▪ 78.47.105.76.16165 – the destination IP address and port

▪ [udp sum ok] - the datagram’s checksum status

Everything else relates to the DNS application response itself.

Notes on the Protocol Field
You can find a full list of protocol number assignments at Wikipedia, but we have collected the most common ones in the following list:

▪ ICMP (1)
▪ IGMP (2)
▪ GRE (47)
▪ ESP (50)
▪ AH (51)
▪ EIGRP (88)
▪ OSPF (89)
▪ ETHERIP (97)
▪ VRRP (112)
▪ L2TP (115)
▪ SCTP (132)



Notes on Service Ports
Valid port numbers are 0 through to 65535. Official Internet Assigned Numbers Authority (IANA) assignments are as follows:

▪ 0 - 1023 are reserved for well-known applications
▪ 1024 - 49151 are registered (with IANA) ports
▪ 49152 - 65535 are user and dynamic ports (aka ephemeral or temporary)

Protocol Formatting
tcpdump provides data formatting for the following protocols amongst others:

▪ icmp
▪ isakmp
▪ arp
▪ ntp
▪ dns
▪ stp
▪ hsrp
▪ http
▪ snmp
▪ radius

Here’s a few examples, starting with ARP:

22:45:47.220050 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 78.47.108.52 tell 78.47.108.49, length 46

NTP (using -v):

78.47.105.76.ntp > 213.239.239.166.ntp: NTPv4, length 48 Client, Leap indicator: (0), Stratum 3 (secondary reference), poll 10s, precision -22 Root Delay: 0.020477, Root dispersion: 0.056991, Reference-ID: 83.137.98.96
…

DNS:

213.133.99.99.domain > 78.47.105.76.16165: [udp sum ok] 11711 ServFail q: A? 40.1.255.158.bl.tiopan.com. 0/0/0 (44)

If you see [|proto] at the end of any verbose output, e.g. [|radius], the snap length is too small for the application data to be captured; just increase it (using the -s0 parameter) to see the application data information.

Fragmented Packets
Note that when fragmented packets are encountered:

▪ Port numbers are only shown for the first fragment as this is the only one carrying the layer four information

▪ The data size shown is always for the fragment, not the size that will result when the fragments are recombined

▪ An offset of @0+ indicates this is the first fragment

▪ An offset that doesn’t end with a + indicates this is the last fragment

Using Wireshark
Wireshark, previously called Ethereal, was created in 1998 by Gerald Combs (who is still the lead maintainer), is sponsored by Riverbed and can be downloaded here: https://www.wireshark.org/#download. It’s available for Linux, Windows and macOS systems but do note it requires installation of WinPcap on Windows. Just like tcpdump, Wireshark is free and open source software (FOSS) and, like tcpdump, it can capture traffic. That’s great and useful in many cases, but in a BIG-IP context, its main use is as an analysis tool for captures taken on a device using tcpdump. That’s what we’ll be focusing on here. The main advantage of Wireshark over tcpdump or any other command line tool is its graphical front end. You get the obvious advantage of any GUI tool: colour, easy scrolling and sorting, check-box configuration and more. On top of that Wireshark provides protocol decoding for over 4000 protocols. This makes it very easy to understand the various parameters and operations a protocol uses and performs and it also makes use of various databases to populate other information in its output. Here's a few examples for ARP, HTTPS (TLS) and DNS:



Not only this but Wireshark also provides analysis and statistics that are very useful. Here’s an example each of the Conversations, Packet Length and IO Graph statistics;





Other available features beyond what we’ll cover here include:

▪ Import and export in a wide range of capture formats
▪ Export data only, including HTTP objects
▪ Printing
▪ Packet commenting
▪ SSL/TLS decryption
▪ Bluetooth support
▪ USB support
▪ Capture filters

You can’t run Wireshark directly on a BIG-IP so when you want to use it for analysis you’ll need to use tcpdump with the -w option on the F5 device. You must then use SCP to transfer the capture file to wherever you can run Wireshark. If necessary, refer to the File Transfer section of the BIG-IP Administration chapter for information on how to do so.
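As a hedged example (the management address, user and paths are hypothetical), from a Linux or macOS workstation the transfer might look like this:

scp root@192.0.2.245:/var/tmp/capture.pcap ~/captures/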

Opening Capture Files
First up we need to open the capture file; do so by either pressing [CTRL]+O, clicking File > Open or using the toolbar icon, as shown:

You should then browse to and select the file you want to open, as shown:



Getting Around
We have our capture file open, what now? Let's start with the basics:

The first thing you’ll notice is that certain packets are colored; SYN and FIN packets have a dark grey background color, RST packets have a red background and packets that indicate an error or loss, such as a retransmission, have a black background. You’ll also find that different protocols (such as ARP or DNS) have unique background colors.

On the right you’ll see an overview ‘map’ of a larger portion of the capture than you can see in the main window. This provides a wider context of where you are within the overall list of packets as well as a colored indication (using the same scheme mentioned earlier) of current and following packets. This allows you to quickly browse and identify particular clusters and/or types of packets such as errors, at a high level.

It may seem obvious, but it’s worth mentioning the columns in the main window; these allow you to sort the packet list by number, time, source address, destination address, protocol, length and info (which sorts numerically, then alphabetically). This can often help you quickly identify specific packets or flows you are interested in.

In addition, you can add or remove columns according to your needs. To do so, right click on any column and click Column Preferences. You can then deselect or select any pre-existing default or previously configured column as you wish. If you'd like to add a column so you can display or sort data that isn't available by default or hasn't been configured before, you can click the + button. You should then enter a name (a Title) for the column and double-click on the Type field and select an appropriate data type. This can be anything from source or destination MAC (HW) address to source or destination port number.



Should you wish to inspect a packet more closely and find scrolling up and down in the packet detail window tiresome, simply double-click it for a larger window dedicated to just that packet's detail.

The next very useful feature is the ability to ‘follow’ a TCP, UDP, SSL or HTTP stream. This allows you to select any packet and filter the display to show only the communication or ‘flow’ between two endpoints that that packet is a part of, based upon the protocol in use that you select. This allows you to quickly zoom in on a conversation you may be interested in, based on a single interesting packet or alternatively, discount it and investigate the next. To use this feature, select an appropriate packet, right click, select Follow and then select the protocol to be used to identify the stream. As well as applying a display filter, a dialog box will pop up to display all the data exchanged between the hosts in question, with different color applied based on the direction.

As has already been alluded to, Wireshark has the ability to identify network and transport level and other protocol errors. For TCP it does so by analyzing sequence and acknowledgement numbers and applying appropriate logic. It also calculates checksums and for other protocols, checks for conformance to standards, expected fields, data formats, handshake or initiation sequences and so on.

Do keep in mind that there are limitations to this error checking; false positives can occur particularly at the start of a capture where the start of a conversation or flow has been missed. Any kind of network hardware offload can also cause misreporting of errors, particularly with regard to checksums. See AskF5 article K10319: Using the tcpdump utility disables hardware checksum offloading for more information on how to avoid this issue when using tcpdump to take your captures.

Probably the richest and most useful feature of all is Wireshark’s protocol decoding (often referred to as intelligence or deep packet inspection). Rather than having to spend time looking up protocol flags, fields, parameters, headers and formats and their meaning (or memorising such details) Wireshark does it all for you. This information is then presented in an easy to understand and digest format, allowing you to focus on troubleshooting rather than parsing low level detail. As this decoding generally relies upon standard port numbers, we’ll cover how to apply it when using non-standard ports for higher level protocols in a later section.

We’ll now move on to a closer examination of some of these features and more besides.

The F5 Wireshark Plugin Remember the noise amplitude setting you can use with tcpdump to output extra, BIG-IP specific additional information such as which virtual server a packet is processed by? F5 have kindly written a Wireshark plugin that provides some very useful decoding of that data to help you understand it more easily. Another great example of the power and flexibility of Wireshark and the great work of the DevCentral team. Don't forget you'll need to use the noise amplitude parameter when specifying the interface with -i with tcpdump, for instance -i any:nn.

Installation If you use or can upgrade to Wireshark v2.6.0 (released April 2018) or later you can skip these installation steps as the f5ethtrailer decoder the plugin provides is included by default.



If not, you'll need a DevCentral account in order to obtain the plugin. You can download it from here: https://devcentral.f5.com/d/wireshark-plugin?download=true&vid=275. The download is a .zip file which contains relevant .dll and .so files for all platforms and major Wireshark versions, in both 32bit and 64bit forms where relevant. Here's what the contents look like:

In order to install the plugin, you’ll need to copy the file within the directory most closely resembling your operating system, processor architecture and Wireshark version to your Wireshark user plugin directory. Don’t worry if there’s no match for your particular version of Wireshark, just go for the one closest to it.



To find out where the user plugin directory is on your platform, open Wireshark and click Help > About Wireshark and then click on the Folders tab. Here’s what I see on a 64-bit Fedora 26 installation of Wireshark v2.2.8:

So, in my case, after closing Wireshark, I copied the f5ethtrailer.so file from the Linux64-2.2.0 folder within the downloaded archive to this directory: /home/sjiveson/.config/wireshark/plugins. In my case the plugins directory didn’t exist, so I had to create it first. Once you’ve done that, you should confirm successful installation by opening Wireshark and clicking Help > About Wireshark and then clicking on the Plugins tab. You should see something like this:



Usage You should now be able to configure the plugin by clicking Edit > Preferences or pressing [Ctrl]+[Shift]+p and then double-clicking on Protocols and then scrolling down (or pressing f) to F5ETHTRAILER and selecting it. The options look like this:

And finally, here’s what the decoding looks like with a capture file taken with the appropriate parameters:

But wait, there’s more. The plugin also supports display filtering (which we’ll cover in detail soon) based on the additional fields recorded by the noise amplitude parameter. This means, for instance, that you can filter a capture based on the name of a virtual server, the tmm that processed the packet and much more.
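As a hedged sketch of how such a capture might be taken on the BIG-IP itself (the interface suffix :nnn selects the high noise level; the host and file name are illustrative):

tcpdump -ni 0.0:nnn -s0 -w /var/tmp/vs-debug.pcap host 10.10.10.50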



What’s available will of course depend on whether the low, medium or high noise amplitude setting was specified when the capture was taken. The most useful fields are available via f5ethtrailer. Here’s an example of some of them:

Lastly, and I really mean it this time, the plugin also exposes some additional information that is recorded with all tcpdump captures on a BIG-IP, without the need for a specific parameter. This information appears as the first frame/packet in any capture taken on a BIG-IP and provides the hostname, some platform details and tcpdump syntax used to generate the capture. If you would like to know even more, take a look at this DevCentral article: https://devcentral.f5.com/articles/gettingstarted-with-the-f5-wireshark-plugin-on-windows.

Decodes & Non-standard Ports Most application level Wireshark decoders (officially called dissectors) work on the assumption standard (or very common) port numbers are being used. For example, by default, ports 80, 1900, 2869, 2710, 3128, 3132, 5985, 8080, 8088 and 11371 are recognised as HTTP, with only port 443 being used to identify SSL/TLS. In scenarios where non-standard or uncommon ports are in use you’ll find that no decoding is applied or that Wireshark has attempted to use the wrong protocol for decoding. To resolve this common issue temporarily you’ll need to select a suitable packet, right click, click Decode As… and then select the correct source or destination port value as appropriate. Next, scroll right to reveal the Current column and then use the drop-down menu to select the correct protocol to be used for decoding and click OK. Here’s what that dialog (suitably resized) looks like:



This change applies until you exit the program. To make this change permanent and save yourself time in the future, you can modify Wireshark’s preferences. Click Edit > Preferences or press [Ctrl]+[Shift]+p and then double-click on Protocols and locate and click on the one you wish to modify. Then add or remove the port numbers that should or shouldn’t be used to identify the protocol in question and click OK when you’re done. Here’s an example;

You’ll note there are additional, protocol specific options and parameters that can also be adjusted.

Display Filters As with tcpdump, filtering what’s displayed can be as simple or complex as you need. The Follow feature covered earlier makes filtering a communication flow between two hosts very easy and we would recommend you to use it when that’s what you need and it’s easy to find a suitable packet. If it’s not, there’s a host of parameters and operators to help you find what you’re looking for. Display filters should be entered in the entry bar found below the menus and toolbar, which you can jump to quickly with [Ctrl]+/. A valid filter results in a green background, an invalid filter in a red background. Any filter you specify is not applied until you press [Enter]. You’ll also find there’s a handy syntax helper that offers auto-complete. Finally, be aware when editing filters that you can undo using the standard keyboard shortcut [Ctrl]+Z. Once you’ve applied a filter, use the status bar at the bottom of the main window to give you an indication of how well it’s worked (so far at least) by observing the number of Displayed packets and the percentage of all the packets in the capture they represent.



We’ll start simply with an example of restricting the packets displayed to those to or from a single host:

ip.addr eq 100.111.222.50

Let's restrict things to the TCP protocol:

tcp and ip.addr eq 100.111.222.50

Better yet, let's use a specific TCP port:

ip.addr eq 100.111.222.50 and tcp.port eq 443

Before we go any further, here's the full list of logical and comparison operators:

▪ and (&&)
▪ or (||)
▪ not (!)
▪ eq (==)
▪ !=
▪ gt (>)
▪ lt (<)
▪ >=
▪ <=
▪ contains
▪ matches

Let's use a few and also introduce a network:

ip.addr eq 100.111.222.0/24 and not ip.addr eq 100.111.222.50

You should use any negation as early as possible to avoid issues. For instance, this isn't valid:

ip.addr eq 100.111.222.0/24 and ip.addr not eq 100.111.222.50

You can also specify a direction:

ip.src eq 100.111.222.0/24 and not ip.src eq 100.111.222.50

As with tcpdump, brackets can be used to indicate precedence:

(ip.addr eq 100.111.222.50 and tcp.port eq 80) and (ip.addr eq 100.111.222.50 and tcp.port eq 8080)

That's four eq's - this isn't the most efficient syntax and I much prefer the brevity possible with tcpdump expressions. Let's have a range of destination ports, say 80 through to 2000:

ip.addr eq 100.111.222.0/24 and tcp.dstport >= 80 and tcp.dstport <= 2000

The following filter is wrong because you cannot specify both UDP and TCP ports:

udp and tcp.port >= 80 and tcp.port <= 2000



Let’s repeat some of our earlier tcpdump examples. Traffic from 1.1.1.1 destined to 1.1.1.2 or 1.1.1.3 only where the source or destination TCP port is 80:

ip.src eq 1.1.1.1 and (ip.dst eq 1.1.1.2 or ip.addr eq 1.1.1.3) and tcp.port eq 80

Traffic from either the 1.1.2.0/24 or the 1.1.3.0/24 network to the 1.15.0.0/24 network:

(ip.src eq 1.1.2.0/24 or ip.addr eq 1.1.3.0/24) and ip.dst eq 1.15.0.0/24

This looks similar but is very different. Traffic from host 1.1.1.1 to any destination only where the destination TCP port is 80 or any traffic where the source or destination TCP port used is 443:

(ip.src eq 1.1.1.1 and tcp.dstport eq 80) or tcp.port eq 443

Traffic from host 1.1.1.1 only where the source port is 5000, to any destination only where the TCP port is 80:

(ip.src eq 1.1.1.1 and tcp.srcport eq 5000) and tcp.dstport eq 80

We can shorten this like so:

(ip.src eq 1.1.1.1 and tcp.port eq 5000) and tcp.port eq 80

A particular TCP stream (conversation) - zero indexed:

tcp.stream eq 0

Just ARP traffic:

arp

Just ICMP traffic:

icmp

Just ARP or ICMP:

arp or icmp

A specific TCP flag:

tcp.flags.reset==1

HTTP Requests only:

http.request

A specific host, without the background noise:

ip.addr eq 100.111.222.50 and not (arp or icmp or dns)



Packets Wireshark has marked as retransmissions:

tcp.analysis.retransmission

Red Herrings Wireshark can often report checksum errors in a capture due to a number of factors related to hardware offload and how tcpdump captures packets; the tcpdump syntax used also plays a part. F5 do a very good job of avoiding this by switching to software based calculations wherever possible when tcpdump is run. Even where you do see these errors it's generally safe to ignore them. However, if you suspect a packet corruption issue caused by the BIG-IP, see AskF5 article K10319: Using the tcpdump utility disables hardware checksum offloading.

Further Reading Four official books can be found here: http://www.wiresharkbook.com/. The most popular is this one: http://www.amazon.co.uk/Wireshark-101-Essential-Analysis-Solutions-ebook/.

Other BIG-IP Tools So, that's packet capture and analysis in a fair amount of detail. What more could you possibly want? Well… sometimes a high-level view, in conjunction with a low level one, can be quite useful. Looking at a higher level might highlight trends or patterns or correlate with specific events you observe in a capture. General device performance and resource usage data is always something you should have a look at as part of the early stages of any troubleshooting exercise, even if you don't feel the need to return to it. Whether you do or not will be determined by what you find elsewhere. This information can also be useful in understanding if you've hit any license limits. So, let's cover the other tools available to us that can inform our understanding and help provide context and statistics which might have a bearing on an issue we're trying to solve. Do bear in mind that there are some nuances to consider when interpreting memory and CPU usage data and if you're going to draw significant conclusions and/or if in doubt I'd highly recommend reading these two articles from F5:

▪ K16419: Overview of BIG-IP memory usage
▪ K15468: Overview of BIG-IP TMM CPU usage (10.x - 12.x)

Monitors First off, let’s start with the fairly obvious. The status of your nodes, pool members and virtual servers is significant. You may have one of these objects going offline briefly and continuously or regularly without knowing it so keep an eye out. If you suspect this is an issue but can’t be staring at a screen for too long, you can enable monitor status logging so you have a historical record of any monitor status changes that occur. For more information, refer to the Monitor Status Logging section of the BIG-IP Administration chapter.

The Performance Dashboard As the name suggests, the performance dashboard provides a high-level overview of the overall performance and traffic throughput of the device. This is covered in detail in the Identify and Report Current Device Status chapter. The dashboard is available through the menu path Statistics > Dashboard. Here's what this looks like:



Performance Statistics in the GUI System wide performance data, in the form of RRDtool generated graphs, can be displayed for the last three hours, 24 hours, seven days or 30 days. This data is collected in real time and the GUI display can be manually refreshed or automatically refreshed at 10 to 300 second intervals. All current performance data can also be manually cleared. The available performance graphs are:

▪ Memory Used (System total, used, TMM allocated and TMM used)
▪ CPU Usage (%)
▪ Active Connections
▪ New Connections (Client accepts and server connects per second)
▪ Throughput (bits per second)
▪ HTTP Requests (per second)
▪ RAM Cache Utilisation (hit, Byte and eviction rate)
▪ SSL Transactions (per second)

Further detailed graphs providing per core, per blade, client and server etc. breakdowns can also be displayed by clicking the relevant View Detailed Graph links in the GUI. Performance Statistics are available through the menu path Statistics > Performance. Here's what this looks like:





Performance Statistics at the CLI If you're at the CLI already, ready to perform a capture, it's perhaps easier to gain a view of performance and load in the same 'place'. Overall system resource usage can be displayed at the CLI with this command:

tmsh show sys performance system

Sys::Performance System
-------------------------------------------------------------------------
System CPU Usage(%)          Current   Average   Max(since 09/24/17 22:30:20)
-------------------------------------------------------------------------
Utilization                       13         4        18

-------------------------------------------------------------------------
Memory Used(%)               Current   Average   Max(since 09/24/17 22:30:20)
-------------------------------------------------------------------------
TMM Memory Used                   14        14        14
Other Memory Used                 71        71        71
Swap Used                         26        26        26

Connection levels can be viewed with this:

tmsh show sys performance connections

Sys::Performance Connections
-------------------------------------------------------------------------
Active Connections            Current   Average   Max(since 09/24/17 22:30:48)
-------------------------------------------------------------------------
Connections                         5         5         9

-------------------------------------------------------------------------
Total New Connections(/sec)   Current   Average   Max(since 09/24/17 22:30:48)
-------------------------------------------------------------------------
Client Connections                  1         1         1
Server Connections                  1         1         1

-------------------------------------------------------------------------
HTTP Requests(/sec)           Current   Average   Max(since 09/24/17 22:30:48)
-------------------------------------------------------------------------
HTTP Requests                       0         0         0



Current TMM throughput statistics can be displayed with this:

tmsh show sys performance throughput

Sys::Performance Throughput
-------------------------------------------------------------------------
Throughput(bits)(bits/sec)     Current   Average   Max(since 09/24/17 22:31:23)
-------------------------------------------------------------------------
Service                            961      2.4K     22.4K
In                              113.4K    399.1K      4.7M
Out                                607      1.1K      2.1K

-------------------------------------------------------------------------
SSL Transactions               Current   Average   Max(since 09/24/17 22:31:23)
-------------------------------------------------------------------------
SSL TPS                              0         0         0

-------------------------------------------------------------------------
Throughput(packets)(pkts/sec)  Current   Average   Max(since 09/24/17 22:31:23)
-------------------------------------------------------------------------
Service                              2         3         6
In                                  32        69       530
Out                                  1         1         3

There are other performance areas to explore, including:

▪ Dnsexpress
▪ Dnssec
▪ ramcache

You can view all possible statistics with this command:

tmsh show sys performance all-stats

All of these commands support a number of additional parameters; to view data for the last three hours, 24 hours, seven days or 30 days add the historical parameter to the end of your command. To view current total TMM statistics use the detail parameter instead. Then furthermore add the historical parameter (so detail historical) to also view totals for the last three hours, 24 hours, seven days or 30 days.
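For example, a couple of hedged illustrations of those parameters in use (output omitted; exact behaviour may vary between TMOS versions):

tmsh show sys performance all-stats historical
tmsh show sys performance system detail historical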

AVR As discussed in the Introduction chapter this module (should you have it licensed and installed) provides detailed historical and near-time HTTP and TCP/IP related statistics for Virtual Servers, Pool Members, URLs and even specific countries, allowing for in-depth traffic analysis. The available metrics and counters include transactions per second, server latency, page load time, request and response throughput, sessions, response codes, user agents and HTTP methods.



iHealth BIG-IP iHealth is a free online tool that can be used to check the health, security and configuration of a device and ensure it is running efficiently. The service revolves around Qkview files (uploaded by the user) which can be easily generated on any F5 BIG-IP device using the qkview command. This file contains the device configuration files, logs and other diagnostic command outputs. The iHealth system parses and analyses the contents of the Qkview file and displays any information on identified configuration issues, known issues, common mistakes, software version bugs and best practice guidance, in a friendly, graphical format. Recommended remediation information is also provided along with links to relevant AskF5 articles. The system benefits both F5 and the user; F5 get fewer support calls and users avoid the need for F5 support involvement in basic or commonly occurring issue scenarios. iHealth is updated on a regular basis to take account of new bugs and issues etc. We cover both Qkview files and iHealth in much more detail in the Relevant Information section of the Opening a Support Ticket chapter. You can find the iHealth website here: https://ihealth.f5.com/ and a user guide here: https://support.f5.com/kb/enus/products/big-ip_ltm/manuals/related/bigip_ihealth_user_guide.html.

SNMP Throughput and other statistics are also available via SNMP.
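As a minimal, hedged sketch of polling a device over SNMP from a management host (the community string and address are placeholders, and the OID shown is simply F5's enterprise subtree; in practice you would pick specific statistics OIDs from the F5-BIGIP-SYSTEM-MIB shipped with the product):

snmpwalk -v2c -c public 192.0.2.245 .1.3.6.1.4.1.3375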

Chapter Summary ▪

It can often be very hard to troubleshoot performance issues and identify their root cause using tools that are useful in other scenarios. For this reason, packet capture is typically the only option left to help us diagnose an issue.

A packet capture, if well specified, will provide us with the necessary detail and data for the entire communication flow. This hopefully allows us to pinpoint the problem, or at least where it occurs; allowing for more focused troubleshooting at that point.

It’s always a good idea to understand the standard and expected TCP/IP behaviour you would see in normal operation. You can’t easily know what is abnormal or unusual without establishing that baseline for comparison otherwise.

Wireshark, previously called Ethereal, was created in 1998 by Gerald Combs and, just like tcpdump, it's free and open source software (FOSS) and, like tcpdump, it can capture traffic. The main advantage of Wireshark over tcpdump or any other command line tool is its graphical front end. You get the obvious advantages of any GUI tool: colour, easy scrolling and sorting, check-box configuration and more.

System wide performance data, in the form of RRDtool generated graphs, can be displayed for the last three hours, 24 hours, seven days or 30 days. This data is collected in real time and the GUI display can be manually refreshed or automatically refreshed at 10 to 300 second intervals.



Chapter Review
1. When using tcpdump, what option do you need to use in order to specify an interface?
a. -i
b. -w
c. -n
d. -I

2. When using tcpdump, what interface are you capturing traffic on if you are specifying 0.0?
a. eth0
b. mgmt
c. All interfaces
d. 1.1

3. When using tcpdump, what option do you need to use in order to save the output to a file?
a. -r
b. -w
c. -nn
d. -m

4. Review the log output, what application protocol is being used?

22:47:08.352707 IP (tos 0x0, ttl 60, id 1457, offset 0, flags [none], proto UDP (17), length 72) 213.133.99.99.53 > 78.47.105.76.16165: [udp sum ok] 11711 ServFail q: A? 40.1.255.158.bl.tiopan.com. 0/0/0 (44)

a. HTTP
b. UDP
c. DNS
d. TCP

5. What tmsh command displays the system performance statistics?
a. tmsh list sys performance system
b. tmsh display sys performance system
c. tmsh monitor sys performance system
d. tmsh show sys performance system



Chapter Review: Answers
1. When using tcpdump, what option do you need to use in order to specify an interface?
a. -i
b. -w
c. -n
d. -I

The correct answer is: a

▪ -i – Interface
▪ -w - Write the raw packets to file rather than parsing and printing them out.
▪ -n - Don't convert addresses (i.e., host addresses, port numbers, etc.) to names.
▪ -I - Put the interface in "monitor mode"; this is supported only on IEEE 802.11 Wi-Fi interfaces, and supported only on some operating systems.

2. When using tcpdump, what interface are you capturing traffic on if you are specifying 0.0?
a. eth0
b. mgmt
c. All interfaces
d. 1.1

The correct answer is: c
An interface argument of any or 0.0 is supported; this captures packets from all interfaces. When using this argument, you should always apply a filter, otherwise you might overload the box seeing as you are collecting everything that is passing through the device.

3. When using tcpdump, what option do you need to use in order to save the output to a file?
a. -r
b. -w
c. -nn
d. -m

The correct answer is: b
Use the -w parameter and specify a file name and optionally a path to the file.



4. Review the log output, what application protocol is being used?

22:47:08.352707 IP (tos 0x0, ttl 60, id 1457, offset 0, flags [none], proto UDP (17), length 72) 213.133.99.99.53 > 78.47.105.76.16165: [udp sum ok] 11711 ServFail q: A? 40.1.255.158.bl.tiopan.com. 0/0/0 (44)

a. HTTP
b. UDP
c. DNS
d. TCP

The correct answer is: c
There are multiple signs that the application protocol is DNS.

▪ Even though DNS can be used over TCP, UDP is most often used. We can see that proto UDP is being used for this transaction.
▪ The host is replying from port 53 (213.133.99.99.53), which is the well-known DNS port.
▪ The query shown is for an A record for 40.1.255.158.bl.tiopan.com.

5. What tmsh command displays the system performance statistics?
a. tmsh list sys performance system
b. tmsh display sys performance system
c. tmsh monitor sys performance system
d. tmsh show sys performance system

The correct answer is: d



20. Opening a Support Case with F5 Opening a support case with F5 support is something you will most likely have encountered before. The problems can be related to hardware, bugs, configuration questions, general queries, or even critical incidents where you experience a total outage and are in desperate need of assistance. When creating a support case with the F5 support team, there is certain information that is required and some information that is optional but very helpful for the people working in support. If you're working at a company where you act as an F5 support partner, the requirements are even higher if you are going to fulfill the agreement you have with F5. In this chapter we'll discuss:

▪ How you create a support case with F5 support
▪ What information you should gather
▪ What severity levels a service request can be prioritised as
▪ Some of the different tools you can use to collect the required information

Information Required When Opening a Support Case With F5 When opening a support case with F5, there is quite a lot of information that you should include. Some information will be tougher to gather due to the lack of physical access to the device. However, the more relevant information you provide, the faster the F5 Support Engineer can determine the root cause of the problem.

Full Description of the Issue When creating a support case, you will be asked to provide a full description of the problem. The full description includes the following: ▪

What are the symptoms? - Is it loss of client traffic or is it hardware related?

What part of the configuration is affected? - Is it a particular virtual server? If this is the case, what profiles are the virtual servers using? What pools are assigned to the virtual server? Are you using any iRules?

When does/did the problem occur? - Try and specify the times that the problem has happened. That way you can correlate it with the log files.

Did it happen once or multiple times? - Try to specify the frequency of the problem.

Did you receive any errors when the problem happened? - Did the error present itself on the BIG-IP or when the client was accessing a resource through the BIG-IP?

Is the problem reproducible? - Can you reproduce the problem?

Is this a new implementation? - If this is a completely new implementation then the problem will most likely be caused by a misconfiguration. F5 does not support new implementations if it's not in AskF5.

What steps have you performed to resolve the issue? - This step will differ depending on whether you're a customer with direct support from F5 or an F5 support partner. I myself come from the support department of an F5 support partner and we have very high requirements to perform as much as we can to solve the issue on our own.



If you're creating a case for BIG-IP DNS (formerly GTM) - Specify how many datacenters and devices are affected by the problem and add a network diagram.

As you can see, there are a ton of questions whose answers can be very useful when determining the issue. If these are answered when the case is created, it will save the F5 support engineer the time needed to ask you for this information. Otherwise you will most likely lose valuable support hours where both you and F5 are exchanging information trying to figure out what the problem is.

Severity Levels When creating the support case, you will be asked to set a severity level. The severity level is used to describe the impact of the problem you are having. There are currently four different severity levels which we'll discuss in the following section.

Severity 1 - Site Down This is the highest severity level and this level should be used when you are experiencing a complete outage of your BIG-IP environment, all network traffic has ceased or you're experiencing a huge business impact. With that said, I personally believe that even a single application can cause a severity 1 if the business impact is big enough. For instance, imagine that you are the owner of the largest online shop in the world, which is currently down and you are losing thousands of dollars every second the application is down. Even if this web shop is only a portion of the complete BIG-IP configuration, the impact is so great that it can definitely be prioritised as a severity 1. To name an example: Amazon®, which is highly dependent on their webservices, lost $4.8M after being down for 40 minutes. That's $2,000 every second. The initial response time for a Severity 1 is 1 hour.

Severity 2 - Site At Risk The second highest severity level is a bit milder, but still very serious. As the name implies, you assign this severity when the site is at risk. For instance, imagine you have lost your primary unit in your HA-pair and are currently only running on one BIG-IP system. If that device goes down, you will have a complete outage. In some cases, it could also be that you're experiencing an intermittent bug that causes a complete outage and a big business impact that needs a quick resolution. Again, measurement of money is a great way to base the severity on, because when a service goes down it will affect the revenue of the company whether it is an online shop, email services or call centers being down. The initial response time for a Severity 2 is 2 hours.

When creating Severity 1 and 2 support cases you are not required to call F5 support but it is highly recommended if it's an emergency. Which makes sense if you think about it. If you just create a case through the portal or email you cannot guarantee that someone will read the email straight away and assist you. Therefore, it is better to call them and make sure you get help straight away.



Severity 3 - Performance Degraded This is by far the most common priority that is used. Performance Degraded means that you are partially experiencing problems with your network traffic or low priority applications and the business impact is not that big. An example would be that the memory swap consumption on the BIG-IP device is high (which can be caused by a memory leak) or the BIG-IP system is not logging data as it should. These problems can oftentimes be linked to misconfigurations and known bugs. As in previous examples, the key point is that it has a limited business impact. The initial response time for a Severity 3 is 4 hours.

Severity 4 - General Assistance These support cases have the lowest priority and can be related to configuration questions or used to troubleshoot non-critical issues. They can also be used to request new features in F5 products that are currently not yet implemented. To give you an example, perhaps you need assistance with configuring a virtual server or you have a general question regarding the behavior of the BIG-IP system. The initial response time for a Severity 4 is next business day or 24 hours depending on your support contract.

Coming from a Support Organisation that has been supporting F5 products for some time now, it can be quite difficult to assign the severity of a support case. Quite often, this is performed together with the customer and you mutually agree on the correct severity. Our personal advice is that you should be cautious with setting the severity too high. We have seen high severity cases being created for applications that are not even in production (in the implementation stage) or regarding non-critical applications. F5 support does not have an unlimited supply of engineers to assist in the support cases. Therefore, setting the correct severity is highly important and it is a part of this exam. Picture this; you call F5 support and need immediate assistance because your whole BIG-IP environment is down and you are losing $70,000 a minute. When trying to get in contact with an engineer there is either a huge delay or perhaps not even one support engineer available because they are busy solving a severity 1 case with a customer trying to get his new email services running. Setting the correct severity ensures that everyone will get the correct support at the right time.

QKview A QKview can also be referred to as a tech.out file and is the best starting point for a support case. QKview is an application that is run on the BIG-IP system and that automatically collects diagnostic and configuration information and compresses it into a single archive file (.tgz). The QKview utility can be run on BIG-IP, BIG-IQ, F5 iWorkflow and Enterprise Manager systems. The QKview will, for instance, contain:

▪ Performance Graphs up to 30 days back
▪ Up to 5 MB of log files (beginning in BIG-IP 10.x)
▪ Configuration Objects
▪ Serial Number
▪ Model number
▪ List of provisioned modules
▪ Module specific data
▪ BIG-IP version that is currently installed



As you can see, the QKview file contains a lot of the information that the support engineer will indeed ask for. Also, it is very easy to generate and download.

Generating a QKview file In order to generate and download a QKview file from a BIG-IP system, use the following guide:

WebGUI (BIG-IP 13.0.0 and later)
1. Log in to the WebGUI.
2. Navigate to System > Support.
3. Click on New Support Snapshot.
4. In the Health Utility section, select Generate QKview.
5. Click Start.
6. To download the QKview, simply click Download.

WebGUI (BIG-IP 12.1.2 and earlier)
1. Log in to the WebGUI.
2. Navigate to System > Support and the QKview option should already be selected.
3. Click on Start. This will start the process and it might take a couple of minutes to complete.
4. When it is finished, click Download Snapshot File to download the QKview.

Running a QKview from the command line
1. Log in to the CLI of the BIG-IP system.
2. Run the QKview utility by entering the following command:

qkview

The filename will be displayed in the command line output once the process has finished.

3. The QKview will be saved to the default location, which is /var/tmp/. Copy it to your local host by using scp or sftp.

You can also specify the file size of the QKview by using the command line option -s. For instance, by using qkview -s0 you specify an unlimited file size.
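As a rough illustration of generating and then retrieving the file (the QKview file name, which is normally derived from the device's hostname, and the destination host are placeholders):

qkview -s0
scp /var/tmp/bigip1.example.net.qkview admin@192.0.2.100:/tmp/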

Generating a QKview on a High Load BIG-IP System There are some scenarios where BIG-IP administrators are afraid to run the QKview utility. It is true that there is a slight impact of using the QKview utility since it utilises a large number of commands in order to collect all of the data that it contains. But this warning is merely for systems that are under heavy load. If the CPU and memory of the device is at normal values then running this command will have very little to no impact. That is why the F5 Support actually requires a QKview for systems where remote access cannot be provided.



I have come in contact with material from F5 that states that you should only generate a QKview file if asked to by an F5 Support Engineer. This is an old document and the previous statement stands: if you cannot provide F5 with remote access to the BIG-IP system, then you are required to provide a QKview. In the cases where you are afraid that the QKview utility will overload the box, you can use the following command in order to run the process with the lowest priority:

nice -n 19 qkview

When this command is used on systems under heavy load, the QKview utility might take a very long time to finish.

iHealth Since the QKview file is a compressed archive, you can actually unpack it and view its content manually. But there is a much better way to use the QKview file, and that is iHealth. This is like the holy grail of troubleshooting tools. iHealth is a webpage provided by F5 that you can upload your QKview file to. It unpacks the QKview file and conveniently sorts out all the data to make it browsable and searchable.



Using iHealth you can: ▪

Review the overall status of the device - Here you can view the serial number of the device, the currently installed version, which modules are provisioned, what high-availability status it has, the current license and the hardware specifics.

Review Graphs - These graphs can be based upon CPU usage, memory used, throughput, the number of total new connections, ACL actions and HTTP requests, just to mention a few. The graphs are based upon data collected from the moment the QKview was generated and up to 30 days back. This is a great way to check the overall health of the device but also useful when troubleshooting, for instance, a memory leak.

Review the current configuration - In this tab you can browse the configuration, which includes everything from monitors, nodes, pools and virtual servers, but also unused objects and network objects such as interfaces, NATs, self-IPs and VLANs.

Run commands - This is also a great way to troubleshoot a BIG-IP system that you do not have remote access to. The commands section lets you run commands like you would if you’d had access to the device. You can either select from a pre-compiled list or enter “Shell” and type them yourself.

Run Diagnostics - This section focuses on the current version of TMOS and matches it against known bugs. It also reviews the log files and current configuration to find errors that can be fixed either by correcting the configuration, upgrading or applying a workaround. The issues are categorised as; Critical, High, Medium, or Low.

Review Files - This section will let you open up files collected from different locations on the BIG-IP system. It will gather files from /config/, /var/log/ and /etc/, to mention some. This gives you the possibility to dive into every log file on the system. However, the BIG-IP system only collects 5MB worth of log files and some may be truncated and will have to be manually retrieved from the device.

Module Specific Tabs - iHealth also has module specific tabs where you can get some additional information. For instance, if you have APM provisioned, it will present an Access Policy tab where you can view current access profiles.

Using QKviews and iHealth isn't only good for troubleshooting specific issues; it's also a proactive measure to determine the overall health of a device, detect problems and determine whether you are in need of an upgrade. To use iHealth, you will need to have an F5 account (the same one used for AskF5). This account is free and can be created by anyone. Simply go to ihealth.f5.com, log on or create an account, upload a QKview file and you're up and running. iHealth is also useful when creating a case as you can add the support case number when uploading the QKview. That way, F5 support can easily get hold of it without having to download it.



Log Files As mentioned earlier, the QKview only contains 5MB worth of log files. This means that you sometimes need to extract the log files from the BIG-IP system. F5 recommends that if the issue has existed for more than a day then you should always include a compressed archive of the log files when creating the case. To retrieve the log files, use the following instructions:

1. Log on to the CLI of the BIG-IP system.
2. Create a compressed archive of the log files by entering the following command:

tar -czpf /var/tmp/logfiles.tar.gz /var/log/*

3. Download the logfiles.tar.gz from the BIG-IP system using either SFTP or SCP.

My personal advice is to always include the log files in order to make sure that you do not miss the data. Better to be safe than sorry.

Packet Traces (tcpdump) We discussed how to collect and when you should collect a packet trace in an earlier chapter. Therefore, we’ll not go into any detail here in this chapter. Just remember that an F5 engineer might ask for one.

SSL Dump We have not discussed this troubleshooting tool in this book, since it is considered to be fairly advanced and beyond the scope of the exam. Sometimes you are troubleshooting applications that are communicating through encrypted packet streams. When collecting a tcpdump from these applications the data will be encrypted and you will have no idea how the BIG-IP system is handling the traffic. Using the tool ssldump you can examine, decrypt and decode the SSL packets, revealing the application data. This is something the F5 Support Engineer might ask for when troubleshooting a support case. To learn more about ssldump refer to the following solution article: K10209: Overview of packet tracing with the ssldump utility.
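As a hedged sketch only (the capture and key file paths are illustrative, and decrypting a capture like this generally only works when the SSL profile in question uses a non-forward-secret, RSA key exchange cipher):

ssldump -A -d -n -r /var/tmp/capture.pcap -k /var/tmp/app-example.key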

UCS Archives Up until now we have not discussed what UCS (User Configuration Set) or SCF (Single Configuration File) really are and we are still withholding this as it is covered in the Maintain Configuration chapter. To give you a short description, a UCS is a compressed archive that contains a snapshot of the BIG-IP system. It contains all of the configuration files, the BIG-IP license, user accounts and their passwords. It will also contain the SSL certificates that you have uploaded to the device (including private keys if not selectively excluded). There are times where F5 support requests a UCS archive in order to replicate a certain problem or if there is no possible way to provide the support with remote access to the BIG-IP system. Keep in mind, when generating a UCS archive and uploading it to F5 support, make sure it is encrypted (secured with a password) and that it does not include the SSL private keys.
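A minimal sketch of generating such an archive from tmsh, assuming the passphrase and no-private-key options are available on your TMOS version (the file name and password are placeholders):

tmsh save sys ucs /var/tmp/support-case.ucs passphrase MyS3cret no-private-key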



Core Files When the BIG-IP system experiences a complete crash, it can in some cases generate a core file. This core file is a memory dump that the F5 Support Team can analyze in order to discover the cause of the crash. These files are located under:

▪ BIG-IP 9.0 through 9.2.5 - /var/savecore
▪ BIG-IP 9.3 and later - /var/core

If the BIG-IP system has generated a core dump in relation to your problem (around the same time you experienced the problem), these should always be provided to F5 support, unless F5 support states otherwise. When providing F5 support with core dumps you should always run an md5 checksum on the core dump file. That way, the support can guarantee that it has not been corrupted while being uploaded. You can run an md5 checksum from the CLI by entering the following command in bash: md5sum [coredump_filename]. You should also provide them with a QKview and logs so that they can correlate the core dump with the logs of the BIG-IP system.
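For example (the core file name below is purely hypothetical):

md5sum /var/core/core.tmm.12345.gz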

Assembling an Accurate Problem Description Providing an accurate problem description can sometimes be difficult. Perhaps when you are troubleshooting your BIG-IP system you see log entries that look suspect, which you associate with the problem; but in the end, they did not have anything to do with the problem. In fact, the log entries existed long before the current issue happened. In the following section, we'll discuss some of the different observation techniques and what information is relevant to form an accurate problem description.

Quantitative Vs. Qualitative Observations When you are experiencing an on-going issue, there are two ways of observing a problem. Quantitative observations are based upon precisely measured and accurate data. For instance, your web application takes 5 more seconds to load than usual and when troubleshooting you discover that a particular object is the cause of it. In summary, this observation method is based on facts. Qualitative observations on the other hand are not based upon facts but rather on feelings. Reusing the previous example, instead of stating the exact seconds it took to load the application, we instead state that the "web application seems to be taking longer than usual to load". This observation method is by far the most common one since most people do not measure the amount of time the web application takes to load and you do not have a real baseline of how long it should take. However, as you can imagine, providing a quantitative observation is much more preferable as you base your problem description on facts and not feelings. Therefore, when creating a support case with F5 support, try and provide a quantitative observation rather than a qualitative one.



Relevant Vs. Irrelevant Information This might seem like a very obvious statement. Of course, you should provide information that is relevant to the problem but sometimes it isn’t that easy. Using my previous example, when you are troubleshooting an issue you might see strange log entries, but they are not relevant to the problem you are experiencing. If you provide the F5 support with this information they might waste time investigating the wrong issue. This section of the chapter is very scenario based and it is impossible to draw up all the different ones that you might experience. But use the troubleshooting tools we have provided throughout this part of the book and gather the information which you strongly believe is connected to the problem. Providing too much information can be equally as bad as providing too little. Stay relevant to the problem and try to avoid as many rabbit holes as you can. As a final note, remember that you always need a valid support subscription in order to get assistance from the F5 support. If you have a support agreement with an F5 partner you can still get assistance as long as that support does not have to contact F5. Therefore, always follow up on the support agreement whenever it is time to renew it or you will have to suffer from the consequences whenever an F5 support case is required.

How to Open a Support Case with F5 Support To be able to create a support case with F5 support, you will need to have an account at support.f5.com with the serial numbers of the F5 devices linked to this account. The serial number of the device is used to validate the support contract, which needs to be active for that F5 device. If you do not have this, then talk with your F5 representative or F5 support partner and they will be able to assist you. When you have successfully logged on to the portal, click on Create service request.



This will navigate you to a new page where you will first have to enter some product information which includes:

▪ Problem Statement - This is a brief description of the current issue. This is used as the title of the service request. When you enter a problem statement, the F5 support portal will try and match the text with already existing AskF5 articles. This is because your question/problem may have already been solved.
▪ Product - Select which product the service request is regarding.
▪ Version - Select the version of the operating system that the device is running.

When you have entered this information, move on to the next page by clicking Next. This will load the Problem Details page. On this page, you will further explain the problem and you enter the following information: ▪

Serial Number, Registration Key or Parent System ID - In order for the F5 Support to assist you, you will need to have an active support contract. This information is required in order to create a service request.

Severity - Specify the severity of the service request. What severity you should specify has been discussed previously in this chapter.

Problem Details - This is a longer explanation of the problem.

Was this working before? - If it has never worked before it is most likely caused by a misconfiguration. This is very relevant for the support to know. If it has worked before it will trigger some more questions that you need to answer.

Is the problem related to a virtual server? - This will help the F5 support engineer to quickly identify which virtual server is affected by the problem and which configuration objects are assigned to this virtual server.



You might get prompted with an error when adding your serial number to the service request. The reason for this is most likely that the serial number of the device is not assigned to your account. Call F5 support and they will be able to assist you. When you have entered all of the information that you can, click Next to continue to the Contact Information page. On this page, you enter the contact information that F5 can use to get in contact with you in order to retrieve more information and ask questions. You specify the following information: ▪

Name - Usually greyed out and unchangeable. This is linked to the account that you have logged on with.

Customer internal tracking (if available) - If F5 support calls you, they can refer to the service request number that you use in your own case management system.

Phone - Phone number they can reach you at

Mobile - Alternative phone number.

Email - The email where F5 support can reach you at.

Preferred method of contact - Either Email, Phone or Mobile. Defines how you would like F5 support to contact you.

Your time-zone - Helps F5 support with assigning the service request to an engineer that works in the same time-zone as you.

Is there an alternate contact for this request? - Very useful if you have an on-site contact who is currently assisting with the troubleshooting. That way they can be kept completely informed regarding the progress of the service request.

When you have entered all of the information, create the service request by clicking Submit. After you have created the service request you will receive the service request number, which has the following format: C####### where # is a random number. It can, for instance, look like this: C3412697. You will also receive a confirmation via email, provided you added your email address correctly to the service request. The service request number can be different depending on how you create the case, for instance, if you create the case by calling F5 support and the support agent creates the service request.

Escalation Methods If at any time you believe that a case is not being handled in accordance with the current severity level or if the problem has gotten worse, you can always call F5 support and have them increase the severity level. If you need an F5 Network Support Engineer to work on the case right away and your currently assigned engineer is out of the office, you can always re-assign the case to a new engineer as long as one is available. If you at the same time increase the severity it will increase the likelihood of finding an engineer but remember what we have mentioned before.



Do not set a case to a severity level that is not relevant for the problem/impact that you are experiencing. An environment that is still being designed and configured (which has no clients utilising it) can never be assigned a severity 1 or 2. Working in a support organisation, you have certain regulations and routines that should be followed in order to provide the best support for all customers. This includes setting the correct severity level, requesting the correct information, and escalating the case to the correct teams. If you feel that your case is not being properly addressed or worked on by a Network Support Engineer, please contact F5 Support and request to speak with a Duty Manager. Even though they work under the same routines, explaining the situation to a duty manager can help your case. For instance, you have a support case which has been escalated to the engineering team, which is taking a long time to find the root cause and solution to the problem. The problem itself is intermittent but when it happens it causes severe outages and you are losing money. The Network Support Engineer has done the right thing by escalating the case to the engineering team, but you feel it is taking too long to find the root cause. Even if you feel that the case is taking too long to solve, the Network Support Engineer does not really have any mandate to re-assign engineers or add more resources to the case. The Duty Managers do not have an infinite amount of power, but they have a higher mandate than the support engineers and thus can assist with escalating the case.

Chapter Summary ▪

Opening a support case with F5 support is something you will most likely have encountered before. The problems can be related to hardware, bugs, configuration questions, general queries, or even critical incidents where you experience a total outage and are in desperate need of assistance.

When creating a support case, you will be asked to provide a full description of the problem. This should include the symptoms of the problem, when the problem occurs, what error message are you receiving etc.

A QKview can also be referred to as a tech.out file and is the best starting point for a support case. QKview is an application that is run on the BIG-IP system and automatically collects diagnostic and configuration information and compresses it into a single archive file (.tgz).

The QKview only contains 5MB worth of log files. This means that you sometimes need to extract the log files from the BIG-IP system. F5 recommends that if the issue has existed for more than a day then you should always include a compressed archive of the log files when creating the case.

When the BIG-IP system experiences a complete crash, it can in some cases generate a core file. This core file is a memory dump that the F5 Support Team can analyze in order to discover the cause of the crash.

Quantitative observations are based upon precisely measured and accurate data. For instance, your web application takes 5 more seconds to load than usual and when troubleshooting you discover that a particular object is the cause of it. In summary, this observation method is based on facts.

Qualitative observations on the other hand are not based upon facts but rather on feelings. Reusing the previous example, instead of stating the exact seconds it took to load the application, we instead state that the “web application seems to be taking longer than usual to load”.



Chapter Review 1. You need to open up a support case with the F5 support. You are currently experiencing problems with one of your BIG-IP devices and it is not responding nor booting up. You are currently passing traffic through the other BIG-IP device and client traffic is not affected. What severity level should you assign to the F5 support case? a. b. c. d.

Severity 1 Severity 2 Severity 3 Severity 4

2. You need to open a support case with F5 Support. You would like to upgrade your BIG-IP system to a newer TMOS version but you are unsure of which version you should upgrade to. What severity level should you assign to the F5 support case?

a. Severity 1
b. Severity 2
c. Severity 3
d. Severity 4

3. In BIG-IP 9.3 and later, at what location does the BIG-IP system store core files?

a. /shared/core/
b. /var/core/
c. /var/savecore/
d. /log/savecore/

4. What do you need to have in order to open a support case with F5?

a. The serial number of the BIG-IP system
b. A QKview which you upload to the F5 support
c. An active support contract
d. A UCS archive containing the configuration which you upload to the F5 support





Chapter Review: Answers

1. You need to open a support case with F5 Support. You are currently experiencing problems with one of your BIG-IP devices; it is not responding nor booting up. You are currently passing traffic through the other BIG-IP device and client traffic is not affected. What severity level should you assign to the F5 support case?

a. Severity 1
b. Severity 2
c. Severity 3
d. Severity 4

The correct answer is: b

You assign Severity 2 when the site is at risk. When you lose a BIG-IP system that is part of an HA pair (two systems) you will automatically be assigned at least Severity 2, as the site is immediately at risk. If the other BIG-IP device fails as well, you will have a complete outage. Therefore, you should act quickly and replace the faulty device.

2. You need to open a support case with F5 Support. You would like to upgrade your BIG-IP system to a newer TMOS version but you are unsure of which version you should upgrade to. What severity level should you assign to the F5 support case?

a. Severity 1
b. Severity 2
c. Severity 3
d. Severity 4

The correct answer is: d

Severity 4 support cases have the lowest priority and can be related to configuration questions or troubleshooting non-critical issues. They can also be used to request new features in F5 products that are not yet implemented. Getting upgrade advice from F5 Support is one example.

3. In BIG-IP 9.3 and later, at what location does the BIG-IP system store core files?

a. /shared/core/
b. /var/core/
c. /var/savecore/
d. /log/savecore/

The correct answer is: b

In BIG-IP 9.3 and later, the core files are stored under /var/core/. In previous versions they were stored under /var/savecore/.



4. What do you need to have in order to open a support case with F5?

a. The serial number of the BIG-IP system
b. A QKview which you upload to the F5 support
c. An active support contract
d. A UCS archive containing the configuration which you upload to the F5 support

The correct answer is: c

This is somewhat of a trick question. Yes, you will indeed need a serial number in order to create a case with F5 Support. However, if there is no active support contract assigned to that serial number, you will not be able to open a support case.



21. Identify and Report Current Device Status In this chapter, we discuss the dashboard which displays live performance data for the BIG-IP system and how this can be used to troubleshoot on-going issues. We’ll also present log snippets of common scenarios and explain what is happening. After that we’ll take a look at Analytics (also known as the Application Visibility and Reporting (AVR) module) and how this can be used to gather and review data collected on the BIG-IP system.

The Dashboard The BIG-IP system has a built-in dashboard located in the WebGUI which displays overall system performance and performance for specific modules. It will display the data graphically in line charts or in a throttle ranging from 0 to 100%. The information is gathered from the system every 3 seconds. To use the dashboard, you will need to have Adobe® Flash® Player (version 9 or later) installed on the PC trying to launch it. You access the dashboard by navigating to Statistics > Dashboard. The official statement from F5 states that you need Adobe Flash Player in order to launch the dashboard. While that is true, since Adobe Flash Player contains a lot of security flaws, many organisations/companies do not allow you to install it. Our own recommendation is to install the web browser Google Chrome and allow flash to run for your BIG-IP WebGUI address.

When clicking the table-like icon in the different sections (such as CPU, Memory, and Connections), you can view the data in a table format instead.



For instance, in this table you will be able to see the current CPU usage of each core and the fan speed/state and temperature information. In the memory section, you can view the current memory utilisation broken down into a few different sections: TMM Used, TMM Free, Other Used, and Other Free. You will see the data presented as a throttle, line charts and as blocks and chunks.

Statistics are only stored for 30 days.

Previously in this book we discussed alerts and that these will be printed out to the LCD screen. These alerts will also be displayed in the Alerts section of the dashboard. When clearing the alerts either through the LCD screen or using the CLI, they will also disappear from the dashboard. Depending on which modules you have provisioned, you will be able to select different views in the dashboard and present various types of data. On my BIG-IP system I have provisioned APM which gives me the ability to view Active and New Sessions, Network Access Throughput, Portal Access RamCache utilisation and ACL Actions. When selecting the LTM view, I will be able to view the current availability of my virtual servers, nodes, and pools along with the current connection count towards specific virtual servers.



The Dashboard is a quick and easy way to view the overall health of the BIG-IP system and it can be useful for troubleshooting purposes. Using this tool, you can discover odd behavior such as a sudden increase in new connections, CPU spikes and upcoming memory exhaustion (increased swap usage). However, remember that this data is relative to previous values, and if you do not have a baseline, a sudden increase in connections does not necessarily mean something bad. On the exam, you will be presented with a dashboard containing certain data. It could be an increase in throughput usage along with a CPU spike. The question will be based on a scenario that you will have to understand, and you will have to provide an explanation for the sudden change.

Interpreting Log Files
Troubleshooting is a huge part of the 201 exam and log files are one of the best sources for finding critical data that will either point you in the right direction or explain the issue. However, reading log files is not an easy task, especially if you are new to the IT business. The main reason for this is that the log will most likely contain loads of data, and sorting out the log entries that actually matter might be difficult. This is because the devices also log data that is part of common, day-to-day operations and not related to the problem, which might mislead you. In this section, we'll provide you with a few log examples of common scenarios and explain what happens, along with a probable cause.

All of the log snippets in this chapter originate from the log file /var/log/ltm.
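If you prefer to work from the CLI while troubleshooting, a couple of standard Linux commands are all you need to follow this log in real time or to search it. The search string below is just an example; adjust it to whatever you are looking for:

# tail -f /var/log/ltm
# grep -i "monitor status down" /var/log/ltm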

Health Monitor Failure Health monitor failure is something you will definitely come in contact with (if you haven’t already) as monitors tend to fail from time to time. Let’s review a log snippet of a health monitor failing:

Sep 7 21:16:16 bigip1 notice mcpd[6959]: 01070638:5: Pool /Common/http_pool member /Common/172.16.100.2:80 monitor status down. [ /Common/http: down; last error: /Common/http: Unable to connect; No successful responses received before deadline. @2017/09/07 21:16:16. ] [ was up for 0hr:0min:45sec ]
Sep 7 21:16:18 bigip1 notice mcpd[6959]: 01070638:5: Pool /Common/http_pool member /Common/172.16.100.3:80 monitor status down. [ /Common/http: down; last error: /Common/http: Unable to connect; No successful responses received before deadline. @2017/09/07 21:16:18. ] [ was up for 0hr:0min:46sec ]
Sep 7 21:16:19 bigip1 notice mcpd[6959]: 01070638:5: Pool /Common/http_pool member /Common/172.16.100.1:80 monitor status down. [ /Common/http: down; last error: /Common/http: Unable to connect; No successful responses received before deadline. @2017/09/07 21:16:19. ] [ was up for 0hr:0min:49sec ]



First, we see that the monitor for each particular pool member received the status of “down”. As you might remember from the monitors chapter, this can be caused either by the monitor receiving a reply that does not match the Receive String or by it not receiving a reply before the timeout value has been reached. In the log entry, it will state before deadline instead, which is the same thing as the timeout value. The reason we see three log entries is that we have at least three pool members in our pool, and the monitor assigned to those pool members has marked all of them down.
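When you see monitor failures like these, it helps to compare the log entries with the current status of the objects involved. Using the object names from the log snippet above, the following tmsh commands show the availability of the pool members and the monitor configuration (the exact output format varies slightly between TMOS versions):

# tmsh show /ltm pool http_pool members
# tmsh list /ltm monitor http http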

Sep 7 21:16:19 bigip1 notice mcpd[6959]: 01071682:5: SNMP_TRAP: Virtual /Common/vs_http has become unavailable

In the next section, we can see that the virtual server named vs_http has become unavailable. What this tells you is that this virtual server no longer has any pool members available in the default pool that we have assigned. This does not necessarily mean that all the pool members have monitors marked as down. It could also be that the BIG-IP administrator has marked a pool member as disabled or forced offline while its monitor status is still up.

Sep 7 21:16:19 bigip1 notice mcpd[6959]: 010719e7:5: Virtual Address /Common/10.10.1.100 general status changed from GREEN to RED.
Sep 7 21:16:19 bigip1 notice mcpd[6959]: 010719e8:5: Virtual Address /Common/10.10.1.100 monitor status changed from UP to DOWN.

In this section, you can also see that the Virtual Address receives a change in its status. The general status goes from GREEN to RED and the monitor status changes from UP to DOWN. As we have mentioned previously in this book, whenever you create a virtual server it will create a virtual address that gets assigned to a traffic-group. It is the virtual address that is actually failed over between the BIG-IP systems when they are configured in an HA pair. Keep in mind that when you see this log message together with a virtual server becoming unavailable, it means that there is no other virtual server configured with the same virtual address.

Sep 7 21:16:19 bigip1 err tmm1[10715]: 01010028:3: No members available for pool /Common/http_pool
Sep 7 21:16:19 bigip1 err tmm[10715]: 01010028:3: No members available for pool /Common/http_pool

These two log messages confirm what we already figured out from the previous messages, but they are a good confirmation that there are no longer any available members in http_pool, which is assigned to the virtual server named vs_http. The reason we see two identical log messages is Clustered Multi-Processing (CMP). This technology enables the BIG-IP system to run multiple TMM processes, typically one per CPU core. In my case I'm running two TMM processes. The TMM processes are responsible for handling the application traffic and they have their own separate connection tables. When CMP is enabled, traffic arriving on the BIG-IP system is load balanced between the TMM processes. This means that each of them needs to keep track of its own monitoring, since they are isolated from each other. Therefore, for each TMM process you will see one log message.



Remember that the BIG-IP system runs multiple TMM processes due to CMP and that each TMM process is responsible for its own health monitoring.
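If you are unsure how many TMM processes your own system is running, you can check from the CLI. The following command exists on current TMOS versions and reports statistics per TMM instance; counting the entries tells you how many TMMs are handling traffic:

# tmsh show /sys tmm-info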

High Availability Communication Failure In the following section, we’ll show you a log snippet of a High-Availability pair where the communication channel has failed. Let’s break it down into several sections:

Sep 7 23:26:33 bigip1 warning sod[4629]: 010c0083:4: No failover status messages received for 3.100 seconds, from device /Common/bigip2.f5lab.com (192.168.1.246) (unicast: -> 20.20.20.10).
Sep 7 23:26:33 bigip1 notice sod[4629]: 010c007e:5: Not receiving status updates from peer device /Common/bigip2.f5lab.com (192.168.1.246) (Disconnected).

First the switchover daemon (sod), which is responsible for the failover function between BIG-IP devices, fails to receive a status message from the peer BIG-IP device. This essentially tells the sod process that bigip2.f5lab.com is no longer available to handle client traffic. So far, we are uncertain as to whether there are any traffic-groups assigned to bigip2.f5lab.com.

Sep 7 23:26:33 bigip1 notice sod[4629]: 010c006d:5: Leaving Standby for Active: Next Active, peers agree on config.
Sep 7 23:26:33 bigip1 notice sod[4629]: 010c0053:5: Active for traffic group /Common/traffic-group-1.
Sep 7 23:26:33 bigip1 notice sod[4629]: 010c0019:5: Active
Sep 7 23:26:33 bigip1 notice tmm1[10715]: 01340011:5: HA unit 1 state change: from 0 to 1.
Sep 7 23:26:33 bigip1 notice tmm[10715]: 01340011:5: HA unit 1 state change: from 0 to 1.

At this moment, the sod process will evaluate its failover capability through its high-availability table and determine the next available device to handle client traffic. In the log, we can see that the BIG-IP system we are currently logged on to (bigip1) leaves its standby mode and enters active mode. Now we can draw a fairly accurate conclusion that traffic-group-1 was assigned to bigip2, because bigip1 now becomes active for that traffic-group. But this really depends on the current design of the environment (how many BIG-IP systems there are and how many traffic-groups are configured). We can also see that the TMM processes change their HA state from 0 to 1.

Sep 7 23:26:33 bigip1 notice tmm[10715]: 01340007:5: HA Connection with peer 20.20.20.20:32773 for traffic-group /Common/traffic-group-1 closing.
Sep 7 23:26:33 bigip1 notice tmm1[10715]: 01340007:5: HA Connection with peer 20.20.20.20:32772 for traffic-group /Common/traffic-group-1 closing.

In the High-Availability chapter, we discussed the Centralised Management Infrastructure (CMI) channel, which the tmm processes establish with each other between multiple BIG-IP systems in order to help the local and remote mcpd processes communicate with each other.



One of the reasons is to synchronise the configuration. In these log entries, we can see that each tmm process closes its HA connection with the peer tmm processes on the failed BIG-IP system. Since the CMI channel is broken due to the failure of the peer BIG-IP system the connections fail as well.

Sep 7 23:26:52 bigip1 notice mcpd[6959]: 0107143c:5: Connection to CMI peer 20.20.20.20 has been removed
Sep 7 23:27:55 bigip1 err mcpd[6959]: 0107142f:3: Can't connect to CMI peer 20.20.20.20, port:6699, Transport endpoint is not connected

Continuing the previous statements, since mcpd uses the local tmm processes to establish a connection to the remote mcpd process, when the CMI channel is down there is no way for the mcpd processes to communicate, which leads to the removal of the connections. You can also see that it is indeed using port 6699 to establish a connection, which is sent to the local tmm process and translated to port 4353 when established to the remote tmm process. When mcpd loses its CMI channel it will retry every 5 seconds until the channel is back up again. Reviewing all the log entries, there is actually a lot that happens when the HA communication fails. There can be many reasons for failure; below is a list of just a few examples:

▪ The remote HA peer has shut down for an unforeseen reason.

▪ If the HA traffic is passed through a switch, that switch might experience issues. Either it is completely down or the port where the traffic is sent through is down.

▪ The HA communication might be sent through a firewall that is blocking the traffic.

▪ If the HA pair spans a large distance, the delay between the CMI hello packets might be too long, causing a timeout for the sod process.

If you receive the following messages in conjunction with failed HA communication, then you are in big trouble:

Sep 7 23:28:26 bigip1 warning tmm1[10715]: 01190004:4: Resuming log processing at this invocation; held 15 messages.
Sep 7 23:28:26 bigip1 warning tmm1[10715]: 01190004:4: address conflict detected for 10.10.1.100 (00:0c:29:10:52:de) on vlan 0
Sep 7 23:28:26 bigip1 warning tmm1[10715]: 01190004:4: address conflict detected for 10.10.1.33 (00:0c:29:10:52:de) on vlan 0
Sep 7 23:28:26 bigip1 warning tmm1[10715]: 01190004:4: address conflict detected for 10.10.1.220 (00:0c:29:10:52:de) on vlan 0
Sep 7 23:28:26 bigip1 warning tmm1[10715]: 01190004:4: address conflict detected for 10.10.1.100 (00:0c:29:10:52:de) on vlan 0
Sep 7 23:28:26 bigip1 warning tmm1[10715]: 01190004:4: address conflict detected for 172.16.1.33 (00:0c:29:10:52:e8) on vlan 0

What this essentially means is that both BIG-IP systems are ARPing for the same IP addresses. Ask yourself, at this moment, what is the current status of both BIG-IP systems? If you thought Active/Active then you are completely right.
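A quick way to confirm an active/active situation like this is to check the failover state on each device from the CLI. If both units report Active, the HA communication is broken:

# tmsh show /sys failover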



Right now, the two BIG-IP devices are having a ping pong match, fighting to become the active member, since the devices cannot communicate with each other and each assumes that its peer is disconnected. If this happens you will have a complete outage and no applications will be able to pass through the BIG-IP environment. When the HA communication is restored, it will look like this:

Sep 7 23:28:54 bigip1 warning sod[4629]: 010c0084:4: Failover status message received after 143.800 second gap, from device /Common/bigip2.f5lab.com (192.168.1.246) (unicast: -> 20.20.20.10).
Sep 7 23:28:54 bigip1 notice sod[4629]: 010c007f:5: Receiving status updates from peer device /Common/bigip2.f5lab.com (192.168.1.246) (Online).
Sep 7 23:28:56 bigip1 notice sod[4629]: 010c004a:5: Leaving active in favor of active peer.
Sep 7 23:28:56 bigip1 notice sod[4629]: 010c0052:5: Standby for traffic group /Common/traffic-group-1.
Sep 7 23:28:56 bigip1 notice sod[4629]: 010c0018:5: Standby

After 143 seconds, we finally receive a response from the peer device and we also receive the status update that the device is Online. At this moment, bigip1 leaves the active state in favor of bigip2, since bigip2 was already online and was so before the failover started.

Sep 7 23:28:56 bigip1 notice tmm[10715]: 01340011:5: HA unit 1 state change: from 1 to 0.
Sep 7 23:28:56 bigip1 notice tmm1[10715]: 01340011:5: HA unit 1 state change: from 1 to 0.

The TMM processes change their HA state back to 0 (standby).

Sep 7 23:28:56 bigip1 notice tmm[10715]: 01340001:5: HA Connection with peer 20.20.20.20:32773 for traffic-group /Common/traffic-group-1 established.
Sep 7 23:28:56 bigip1 notice tmm1[10715]: 01340001:5: HA Connection with peer 20.20.20.20:32772 for traffic-group /Common/traffic-group-1 established.
Sep 7 23:28:58 bigip1 notice mcpd[6959]: 01071432:5: CMI peer connection established to 20.20.20.20 port 6699 after 2 retries
Sep 7 23:28:58 bigip1 notice mcpd[6959]: 01071451:5: Received CMI hello from /Common/bigip2.f5lab.com

And the CMI channel is restored, since we have received CMI hello packets from the peer BIG-IP system.



If you review the following log, what caused the failover in this scenario?

Sep 7 22:55:40 bigip1 notice sod[4629]: 010c0044:5: Command: go standby GUI.
Sep 7 22:55:40 bigip1 notice sod[4629]: 010c0052:5: Standby for traffic group /Common/traffic-group-1.
Sep 7 22:55:40 bigip1 notice sod[4629]: 010c0018:5: Standby
Sep 7 22:55:40 bigip1 notice tmm1[10715]: 01340011:5: HA unit 1 state change: from 1 to 0.
Sep 7 22:55:40 bigip1 notice tmm[10715]: 01340011:5: HA unit 1 state change: from 1 to 0.

In this scenario we can see that the failover was actually caused by a BIG-IP administrator sending the go standby command from the WebGUI. Keep this in mind for the exam, in order to determine if the failover was triggered manually or automatically through failsafe mechanisms.

VLAN Failsafe We have previously discussed the VLAN failsafe mechanism, which is where you configure the BIG-IP system to listen on specific VLANs to determine if the VLAN is healthy. If no traffic is being passed it will cause a failover. Here is how this looks in the log files:

Sep 7 23:05:55 bigip1 warning sod[4629]: 01140029:4: HA vlan_fs /Common/external fails action is failover.
Sep 7 23:05:55 bigip1 notice sod[4629]: 010c0052:5: Standby for traffic group /Common/traffic-group-1.
Sep 7 23:05:55 bigip1 notice sod[4629]: 010c0018:5: Standby
Sep 7 23:05:55 bigip1 notice tmm1[10715]: 01340011:5: HA unit 1 state change: from 1 to 0.
Sep 7 23:05:55 bigip1 notice tmm[10715]: 01340011:5: HA unit 1 state change: from 1 to 0.

We can see that the HA vlan_fs mechanism triggers and reports that the VLAN named external has failed and that the action is to fail over. The other log messages are the same as for the other failovers.

Configuration Sync Whenever you synchronise the configuration from one BIG-IP to the other it will write to the LTM log file. There is not much presented so it is very easy to interpret. This is how it looks on the device which synchronises to the group:

Sep 9 15:01:26 bigip1 notice mcpd[7014]: 0107168c:5: Incremental sync complete: This system is updating the configuration on device group /Common/device-group-1 device %cmi-mcpd-peer-/Common/bigip2.f5lab.com from commit id { 5 6463762641387392476 /Common/bigip1.f5lab.com } to commit id { 8 6463762778292649062 /Common/bigip1.f5lab.com }.



You can see that the synchronisation is incremental (only changes) instead of a full sync. You can also see that the configuration is updating the device group with the name device-group-1 of which bigip2.f5lab.com is a member. You can also see that the commit ID is changed from 5 to 8. On the peer device, it looks like the following:

Sep 9 15:01:25 bigip2 notice mcpd[6982]: 010714a0:5: Sync of device group /Common/device-group-1 to commit id 8 6463762778292649062 /Common/bigip1.f5lab.com 0 from device complete.

The information is pretty much the same, except that this time bigip2 receives a sync of the device group, meaning it was not the one triggering it. You can see that it changes its commit ID to 8 and that the configuration is coming from bigip1.f5lab.com. If you look closely you will notice that the long number following the commit ID is the same on both devices, meaning they have the same configuration. If you log on to the CLI of one of the BIG-IP systems and run the command tmsh run /cm watch-devicegroup-device you will be presented with a list of all device groups and devices, along with the commit IDs and the time of the last sync. An example of this is displayed in the following picture:
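As a side note, you can also check and trigger synchronisation from the CLI. The device group name below is the one used in the example above:

# tmsh show /cm sync-status
# tmsh run /cm config-sync to-group device-group-1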

TMM Core Dump Sometimes the BIG-IP system will experience a major failure which causes a core dump. For those of you who are not familiar with the expression, it is the same as experiencing a bluescreen of death in Windows. A core dump is when the operating system dumps out everything that is in the memory, which will consist of the recorded state when the major failure was triggered. The good thing about this is that you will have the ability to determine what caused the failure. When the BIG-IP system experiences a failure of this size, it will usually restart all of the services. When this happens, the Fail-Safe system properties will trigger a failover if the BIG-IP system is configured in an HA pair. Nonetheless, it will cause an interruption of client traffic. Whenever a core dump is generated, the BIG-IP system will write to the log file /var/log/tmm. There is an example of this in the following log output.



Feb 2 16:45:07 local/f5a notice ** SIGSEGV **(SIGFPE , SIGABRT )
Feb 2 16:45:07 local/f5a notice fault addr: 0x512fe2
Feb 2 16:45:07 local/f5a notice fault code: 0x1
Feb 2 16:45:07 local/f5a notice fault time: Wed Feb 02 16:45:07 CET 2011
Feb 2 16:45:07 local/f5a notice version: default TMM Version 10.1.0.3341.0
Feb 2 16:45:07 local/f5a notice ticks since start of poll: 0
Feb 2 16:45:07 local/f5a notice EAX=0 EBX=0x1dff844 E/CX=0x1610a08 EDX=0
Feb 2 16:45:07 local/f5a notice ESI=0 EDI=0x1dff950 EBP=0x1dff838, ESP=0x1dff820
Feb 2 16:45:07 local/f5a notice EIP=0x512fe2

The first line stating SIGSEGV is the first obvious sign of a major failure that has triggered a core dump. If you see this in the logs, check /var/core/ in order to see if a core dump has been created. A SIGSEGV message is short for Segmentation Fault, which is a failure condition raised by hardware with memory protection: the hardware informs the operating system that a process has attempted to access a restricted area of memory. In short, this is a memory access violation. Other important log entries are the fault addr and the fault code. Most likely you are not the first one to trigger this failure, and by providing F5 with the fault addr and fault code they will be able to search through their bug database and locate an existing ticket for it, perhaps even a workaround or solution. Failures like these often occur because of bugs in TMOS. All bugs receive an internal ticket ID at F5 which you can search for in the release notes of each TMOS version. If a bug is solved, it will be listed under Fixed.
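Checking for generated core files from the CLI is as simple as listing the directory (core files can be large, so keep an eye on the available disk space there as well):

# ls -lh /var/core/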

Analytics
Analytics is also known as Application Visibility and Reporting (AVR) and is a module that you can provision on your BIG-IP system. Analytics gives you the ability to gather significant statistics about your applications, which will help you analyze their current performance and identify current issues. Analytics can gather information such as transactions per second (TPS), server and client latency, request and response throughput and sessions. You also have the ability to view metrics for specific entities such as applications, virtual servers, pool members, URLs and specific countries. It can present transaction counters for response codes, user agents, HTTP methods, countries and IP addresses, which can give you a great view of what kind of traffic is passing through your BIG-IP system. It also has the ability to capture traffic which you can examine, and to configure the BIG-IP system to send alerts whenever a particular problem arises. This is great for intermittent problems. Analytics can be configured to log all of its data locally, but it can also be configured to send its collected data to a remote location. This is great for environments that have more than one BIG-IP system, and if you would like to store all of the statistical information on a single syslog server or SIEM device such as Splunk®. Analytics is dependent on Adobe Flash Player, but as with the dashboard, we recommend that you install the Google Chrome web browser and allow Flash to run for your BIG-IP WebGUI address.



Analytics Profiles In order to gather statistical information from an application you will have to create a capture filter which is known as an Analytics Profile. The Analytics profile contains all of the definitions of what circumstances are required in order for the data collection to start. When the Analytics profile is created you will assign this to one or more virtual servers that the application uses. It can also be applied to an iApps application service but each virtual server can only have one Analytics profile active at once.

In the Analytics profile you can configure the following properties:

▪ What statistics to collect
▪ Where to store the data (locally, remotely or both)
▪ If you should capture the traffic
▪ If you should send alarms

The BIG-IP system is delivered with a default Analytics profile called Analytics. As with all default profiles, the default Analytics profile is a minimal profile which only logs application statistics for server latency, throughput, response codes and methods to the local device. You can create your own custom Analytics profiles where you can add/remove the properties that you need for your application. When the data has been locally collected you can view the results under Statistics > Analytics. This screen will present all of the data that Analytics has collected independent of what profile actually gathered the data. This means that if you are running multiple Analytics profiles in your environment you will have to filter out some data in order to view the data that you are actually looking for.



How to Configure Analytics to Collect Data
To configure Analytics to collect data on your BIG-IP system, please use the following instructions. Before you create your Analytics profile, make sure that Application Visibility and Reporting (AVR) is provisioned on your BIG-IP system. If there is no Analytics tab in the Statistics section, then review your provisioning settings. You should also make sure that the virtual servers you would like to assign an Analytics profile to have an HTTP profile assigned and do not already have an Analytics profile running. Remember that you can only assign one Analytics profile per virtual server.

1. In the Navigation Pane go to Local Traffic > Profiles > Analytics.
2. Click Create. This will launch the New Analytics Profile page.
3. In the Profile Name field, enter a name for your Analytics profile. The name can only start with a letter and can only contain letters, numbers and underscores (_).
4. In the Included Objects area, specify which virtual servers Analytics should gather data from.
   a. For the Virtual Servers setting, click Add. This will launch a popup that lists all of the virtual servers to which you can assign your Analytics profile.
   b. From the Select Virtual Server popup list, select the virtual servers to which you would like to assign your Analytics profile.
5. Under the Statistics Logging Type setting, verify that Internal is selected. If it is not, select the checkbox to activate the setting and then select Internal. When you select Internal, the BIG-IP system will store statistics locally and enable you to view the data under Statistics > Analytics.
6. To the right of the Statistics Gathering Configuration area, select the Custom check box. This will enable the area to be modified.
7. Under Collected Metrics, select all of the statistics that you would like to gather. These are summarised in the following list:

▪ Server Latency: Collects the time it takes for the end-server to deliver its data to the BIG-IP system. This is selected by default.
▪ Page Load Time: Collects how long it takes for a client to get a complete response back from the application. This includes completed page processing and network latency.
▪ Throughput: Collects HTTP request and response data. This is selected by default.
▪ User Sessions: Collects the number of unique user sessions.

8. Under Collected Entities, select all of the entities that you would like to gather. These are summarised in the following list:

▪ URLs: Collects the requested URLs.
▪ Methods: Stores the HTTP methods in the request. This is selected by default.
▪ User Agent: Stores information regarding the browsers that made the request.
▪ Response Codes: Stores the response codes that were returned by the end-server. This is selected by default.
▪ Client IP Addresses: Stores the IP address from where the request originated. The address that is saved depends on whether the X-Forwarded-For header is enabled and Trust XFF is selected.
▪ Countries: Stores the name of the country from which the request came.

9. When you are done, click Finished.

As mentioned earlier, it is possible to configure Analytics to send its data to a remote syslog solution or SIEM device. It is also possible to configure it to send alarms depending on the thresholds you have set. However, this is beyond the scope of this book.
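If you prefer the CLI, the same profile can also be created and assigned with tmsh. The following is only a minimal sketch: the profile name and virtual server name are examples of our own, and the individual metric and entity attribute names vary between TMOS versions, so check tmsh help /ltm profile analytics on your system before relying on them:

# tmsh create /ltm profile analytics my_analytics defaults-from analytics
# tmsh modify /ltm virtual vs_http profiles add { my_analytics }
# tmsh list /ltm profile analytics my_analytics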

Reviewing and Examining the Application Statistics
When you have configured your Analytics profile and assigned it to your virtual server, you should let it run for a little while so that it has some time to gather data. Once the Analytics profile has run for a while you can review the data by following the instructions below:

1. In the Navigation Pane go to Statistics > Analytics. This will launch the Analytics screen, which will display pie charts and application statistics.
2. Under the Time Period list, you have the ability to select the time period for which you would like to view the statistics. You can choose from last hour, day, week or month.
3. In order to present the exact data you would like, go to the menu bar and select the statistics you would like. All of the options are described below:

▪ Transactions: Displays the Layer 7 transactions per second (TPS) rate passing through the web application and the number of transactions going to and from the web application.
▪ Latency > Server Latency: Displays how long it takes (in ms) from the initial request arriving at the virtual server until it receives a reply from the end-server.
▪ Latency > Page Load Time: Displays how long it takes (in ms) from the client’s browser sending its request until the page is fully loaded and presented to the client.
▪ Throughput > Request Throughput: Displays HTTP request throughput in bits per second (bps).
▪ Throughput > Response Throughput: Displays HTTP response throughput in bits per second (bps).
▪ Sessions > New Sessions: Displays the number of transactions that open new sessions (per second).
▪ Sessions > Concurrent Sessions: Displays the total number of open and active sessions at a given time, until they time out.

All of these settings depend on how you have configured your Analytics profile.

4. On the tab bar, select the entity (Applications, Virtual Servers, URLs) for which you would like to display statistics.
5. To focus on specific statistics that you would like to review, click on the specific item. You can either click on the chart or on View Details.
6. The BIG-IP system will always display the full path of the view you are currently examining.

Investigating Server Latency
To harness the full potential of Analytics you should learn how to filter out data and search for exactly the type of information you need in order to troubleshoot your application. In our example, we need to investigate the Server Latency for our application. To do so, we must first make sure that our Analytics profile is configured to collect Server Latency and is assigned to the virtual server from which we would like to gather the data. Use the following instructions to filter and review this data:

1. In the Navigation Pane go to Statistics > Analytics.
2. From the Time Period list, select the time period you want to review.
3. Under the Latency menu, select Server Latency. This will present you with a chart displaying the server latency for all virtual servers and applications associated with the Analytics profile.
4. To view the server latency for a particular application, select the application under the Details table.
5. To view the server latency for a particular virtual server, click on the Virtual Servers tab. This will present the charts displaying latency for all virtual servers.
6. In order to filter out one particular virtual server, click on the virtual server you would like to review under the Total Average Server Latency chart.
7. If you need to filter out some more data, you can click on the other tabs on the Analytics screen to view charts displaying latency for other collected entities.



Investigating Page Load Times
As with Server Latency, in order to log statistics regarding Page Load Times, the Analytics profile needs to be configured to log the Page Load Time metric and that Analytics profile needs to be assigned to a virtual server. To view the gathered statistics, please use the following instructions:

1. In the Navigation Pane go to Statistics > Analytics.
2. From the Time Period list, select the time period you want to review.
3. Under the Latency menu, select Page Load Time. This will present you with a chart displaying the page load time (in ms) for all virtual servers and applications associated with the Analytics profile.
4. To view the page load time for a particular application, select the application under the Details table.
5. To view the page load time for a particular virtual server, click on the Virtual Servers tab. This will present the charts displaying page load times for all virtual servers.
6. Filter out one particular virtual server by clicking on the virtual server you would like to review under the Total Average Page Load Time chart.
7. Restrict the time frame by clicking on a time in the left chart and dragging it to the right chart.
8. You can also click on the other tabs on the Analytics screen to view charts that display page load times for other collected objects such as URLs, pool members or client IP addresses.

When enabling Page Load Time as a metric, the BIG-IP system will inject JavaScript into the pages in order to measure this. This can create serious issues when the JavaScript is inserted into a page that does not support it. Therefore, do some testing in a pre-production environment before fully implementing it.

Capturing Traffic using Analytics
As mentioned earlier, Analytics can also be configured to capture the first 1000 transactions. This is very beneficial as you can see the requests and responses, which can help you determine issues with latency, throughput or transactions per second. Analytics can then present you with a chart based on the data collected from this traffic. To see additional statistics you can clear the existing data, and the system will capture and display new transactions. To capture traffic using Analytics, please use the following instructions. Before you implement them, make sure that you have provisioned Application Visibility and Reporting (AVR).

1. In the Navigation Pane go to Local Traffic > Profiles > Analytics.
2. Click Create.
3. Under the Profile Name, enter a name for your Analytics profile.
4. To the right of the General Configuration area, click the Custom check box in order to enable modification of the settings in the area.
5. Under the Traffic Capturing Logging Type, select one of the following:
   a. Internal – This will capture the data and store it locally on the device. This enables you to view the data under the Statistics: Captured Transactions screen. This is the default option.
   b. External – This will capture the data and forward it to a remote logging server. When you select this option you will have to specify the Remote Server IP Address and Server Port number.
6. Under the Included Objects area, specify from which virtual servers you would like to capture traffic.
   a. For the Virtual Servers setting, select Add and select the virtual servers from the popup list from which you would like to capture traffic.
   b. When you have chosen the virtual servers, click Done.
7. Under the Capture Filter area, from the Capture Request and Capture Response settings, select the data that you would like to capture. Choose from the following options:

▪ None: Do not capture any request or response traffic.
▪ Header: Capture request or response traffic, but header data only.
▪ Body: Capture request or response traffic, but body data only.
▪ All: Capture all data for request or response traffic.

8. Depending on what traffic you would like to capture, modify the filter settings to narrow down the portion of traffic. Remember that Analytics will only capture 1000 transactions. It is therefore good to capture only requests or responses, specific status codes or methods, or headers that contain specific strings.
9. When you are done, click Finished.

Reviewing Captured Traffic
To review the captured traffic on the BIG-IP system, you must first have configured the system to store the data internally. Make sure you have done so by selecting Internal under the Traffic Capturing Logging Type setting. Review the captured data by using the following instructions:

If you are using version 11.0.0:

1. In the Navigation Pane go to Statistics > Captured Transactions. On this screen you will be able to see all of the captured data.
2. To limit the output you can add Filter Settings. You do so by using the following instructions:
   a. Click Only.
   b. Click on the adjacent field. This will open up a list of items that can be used to filter the data. Some examples are applications, virtual servers and pool members.
   c. Select the object you would like to examine.
3. Under the Captured Traffic area, click on the transaction you would like to examine. The details of the transaction will be displayed on a screen below.
4. Under General Details you will be able to review important troubleshooting details such as response codes or the size of the response/request.
5. If you need more information you can click on the Request or Response in order to view the actual content of the transaction. You might find some nice details within this area that might pinpoint the issue with the application.
6. If the collected data does not contain the error you are searching for, you can delete all captured transactions by clicking Clear All. This will remove all transactions and the system will start capturing up to 1000 transactions again and display them. It takes around 10 seconds for a captured transaction to be displayed.



If you are using version 11.2.0 and above:

1. In the Navigation Pane go to System > Logs > Captured Transactions. On this screen you will be able to see all of the captured data.
2. Optional: You can filter the number of transactions by choosing the time period.
3. Optional: You can also add an Advanced Filter by selecting filter options such as application, virtual server, URL, response code and many more.
4. In the Captured Traffic area, click any transaction that you want to examine. This will display additional details of the transaction along with the request and response.
5. For more information, click Request or Response to view the contents of the actual transaction. Review the data for anything unexpected, and for other details that will help with troubleshooting the application.
6. On the Captured Transactions screen, you can clear all previously captured data records (including those not displayed on the screen) by clicking Clear All. After that you can start collecting transactions again if needed. The system captures up to 1000 transactions locally and displays them on the screen. Captured transactions are visible a few seconds after they occur.

Chapter Summary

▪ The BIG-IP system has a built-in dashboard located in the WebGUI which displays overall system performance and performance for specific modules. It displays the data graphically in line charts or in a throttle ranging from 0 to 100%. The information is gathered from the system every 3 seconds.

▪ Depending on which modules you have provisioned, you will be able to select different views in the dashboard and present various types of data.

▪ Troubleshooting is a huge part of the 201 exam and log files are one of the best sources for finding critical data that will either point you in the right direction or explain the issue.

▪ Analytics is also known as Application Visibility and Reporting (AVR) and is a module which you can provision on your BIG-IP system. Analytics gives you the ability to gather significant statistics from your applications which will help you analyze their current performance and identify current issues.

▪ Analytics can also be configured to capture the first 1000 transactions. This is very beneficial as you can see the requests and responses, which can help you determine issues with latency, throughput or transactions per second.



Chapter Review

1. Review the following log snippet. What has happened?

Sep 7 21:16:16 bigip1 notice mcpd[6959]: 01070638:5: Pool /Common/http_pool member /Common/172.16.100.2:80 monitor status down. [ /Common/http: down; last error: /Common/http: Unable to connect; No successful responses received before deadline. @2017/09/07 21:16:16. ] [ was up for 0hr:0min:45sec ]

a. High Availability communication failure
b. VLAN Failsafe has triggered, causing a failover to occur
c. Configuration Sync was performed
d. Health Monitor failure

2. Review the following log snippet. What has happened?

Sep 7 23:05:55 bigip1 warning sod[4629]: 01140029:4: HA vlan_fs /Common/external fails action is failover.
Sep 7 23:05:55 bigip1 notice sod[4629]: 010c0052:5: Standby for traffic group /Common/traffic-group-1.
Sep 7 23:05:55 bigip1 notice sod[4629]: 010c0018:5: Standby
Sep 7 23:05:55 bigip1 notice tmm1[10715]: 01340011:5: HA unit 1 state change: from 1 to 0.
Sep 7 23:05:55 bigip1 notice tmm[10715]: 01340011:5: HA unit 1 state change: from 1 to 0.

a. High Availability communication failure
b. VLAN Failsafe has triggered, causing a failover to occur
c. Configuration Sync was performed
d. Health Monitor failure

3. In which log file are core dump messages logged?

a. /var/log/tmm
b. /var/log/ltm
c. /var/log/messages
d. /var/log/pkfilter

4. What profile do you need to assign to a virtual server in order to collect data for the Application Visibility and Reporting (AVR) module?

a. AVR Profile
b. Analytics Profile
c. Packet Filter Profile
d. Capture Profile



Chapter Review: Answers

1. Review the following log snippet. What has happened?

Sep 7 21:16:16 bigip1 notice mcpd[6959]: 01070638:5: Pool /Common/http_pool member /Common/172.16.100.2:80 monitor status down. [ /Common/http: down; last error: /Common/http: Unable to connect; No successful responses received before deadline. @2017/09/07 21:16:16. ] [ was up for 0hr:0min:45sec ]

a. High Availability communication failure
b. VLAN Failsafe has triggered, causing a failover to occur
c. Configuration Sync was performed
d. Health Monitor failure

The correct answer is: d

The monitor for 172.16.100.2:80 has received the status of down because it has not received any successful response before the deadline (timeout).

2. Review the following log snippet. What has happened?

Sep 7 23:05:55 bigip1 warning sod[4629]: 01140029:4: HA vlan_fs /Common/external fails action is failover.
Sep 7 23:05:55 bigip1 notice sod[4629]: 010c0052:5: Standby for traffic group /Common/traffic-group-1.
Sep 7 23:05:55 bigip1 notice sod[4629]: 010c0018:5: Standby
Sep 7 23:05:55 bigip1 notice tmm1[10715]: 01340011:5: HA unit 1 state change: from 1 to 0.
Sep 7 23:05:55 bigip1 notice tmm[10715]: 01340011:5: HA unit 1 state change: from 1 to 0.

a. High Availability communication failure
b. VLAN Failsafe has triggered, causing a failover to occur
c. Configuration Sync was performed
d. Health Monitor failure

The correct answer is: b

Based upon the first part of the log entry, we can see that the HA vlan_fs for the /Common/external VLAN has failed and this has triggered a failover.

3. In which log file are core dump messages logged?

a. /var/log/tmm
b. /var/log/ltm
c. /var/log/messages
d. /var/log/pkfilter

The correct answer is: a

Whenever a core dump is generated, the BIG-IP system will write to the log file /var/log/tmm.



4. What profile do you need to assign to a virtual server in order to collect data for the Application Visibility and Reporting (AVR) module?

a. AVR Profile
b. Analytics Profile
c. Packet Filter Profile
d. Capture Profile

The correct answer is: b

In order to gather statistical information from an application you will have to create a capture filter, which is known as an Analytics profile. The Analytics profile contains all of the definitions of what circumstances are required in order for the data collection to start. When the Analytics profile is created, you assign it to one or more virtual servers that the application uses.



22. Device Maintenance
In this chapter, we discuss some of the necessary maintenance tasks that a BIG-IP administrator needs to perform. This includes configuration backups, configuration restoration and performing software upgrades. We also discuss the tools available to make these maintenance tasks easier, both in terms of time efficiency and workflow, where you can perform these tasks using one single graphical user interface.

Archive Files
There are two files that can be created on the BIG-IP system that can be used to back up the system. They are two completely different files with two specific purposes. We'll cover both of them in great detail in the following sections.

The Single Config File (SCF)
The two main purposes of the Single Config File (SCF) are to replicate the configuration across multiple BIG-IP devices or to migrate the configuration from one device to another. An example of this would be moving from the Virtual Edition to a hardware appliance. The SCF file is a flat text file that contains the output of all of the different tmsh commands that have been used to build the configuration on the BIG-IP system, containing all of their values and attributes. Therefore, it is very efficient when you want an exact copy of a configuration that you can then transfer to another BIG-IP system. Whenever you want to create an SCF file, the BIG-IP system prompts the tmsh utility to gather together all of the tmsh commands, values and attributes that currently exist in the running configuration. Once the tmsh utility is complete, it will save all of the commands to a text file in the /var/local/scf directory with the name you have specified in the command. The file will be saved with the extension .scf. In order to create an SCF file without password protection, issue the following command:

# tmsh save /sys config file [filename] no-passphrase

For instance, issuing the following command would produce the following result:

# tmsh save /sys config file ltm01.scf no-passphrase
Saving running configuration...
  /var/local/scf/ltm01.scf
  /var/local/scf/ltm01.scf.tar

Whenever you install an SCF file, the BIG-IP system will first create a backup of the currently running configuration in /var/local/scf/backup.scf. Once that is complete it will start to load the configuration specified in the SCF file into the running configuration. To use the previous example, if you would like to install ltm01.scf on bigip01 using the command load /sys config file ltm01.scf, the BIG-IP system would first create a new SCF file at /var/local/scf/backup.scf and then load ltm01.scf into the running configuration.
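For completeness, this is what the corresponding load operation looks like when you later install the file on the target system (using the example file name from above):

# tmsh load /sys config file ltm01.scf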



Example of Data Contained in a SCF file

The User Configuration Set (UCS) Archive
A User Configuration Set (UCS) archive saves all BIG-IP configuration files within one single zipped tar file. The UCS files are saved in the directory /var/local/ucs using the extension .ucs. You can create UCS archives using the WebGUI, and when doing so, all of the UCS archives will be saved into the default directory. When using tmsh to save the UCS archive you have the ability to specify where to save the file. The only issue with this is that you will not be able to view the UCS archives in the WebGUI unless they are located in the /var/local/ucs directory. The great thing about the UCS archive is that it does not only contain the configuration; it also contains the BIG-IP license, user accounts with their passwords, DNS zone files and the ZoneRunner configuration. By default it also contains the SSL certificates along with their private keys. However, the private keys can be excluded if needed. You also have the ability to encrypt the UCS archive in order to further enhance its security. To summarise, the UCS archive contains the following files:

▪ All BIG-IP configuration files
▪ The BIG-IP license
▪ User accounts with their passwords
▪ DNS zone files and the ZoneRunner configuration
▪ SSL certificates along with their private keys

Since the UCS archive contains all of these files, it is more suited to work as a backup and restoration mechanism for a replaced BIG-IP system rather than for replication or baseline purposes. The UCS archive can be stored in an off-site location and can be used for disaster recovery if your BIG-IP system completely fails.



In order to secure the UCS archives, they should always be moved to a remote file server. This prevents them from being lost if the BIG-IP system breaks down or if you are unable to access the BIG-IP system. And since they contain sensitive information, make sure the remote location is secure. Whenever you perform any major event on your BIG-IP system, such as upgrading, installing a hotfix or performing a complicated configuration change, you should always generate a UCS archive and back up the system. The default settings when generating a UCS archive will give you everything you need in order to restore your BIG-IP system to the state it was in when you generated the UCS archive. This is a great fallback plan in case you need to restore things to where they were. You can generate a UCS archive either through the WebGUI or tmsh using the following instructions:

Generating a UCS Archive – WebGUI

1. In the Navigation Pane go to System > Archives.
2. Click Create.
3. In the Name field, enter the name of your UCS archive.
4. If you would like to encrypt your UCS archive, choose Enabled under Encryption.
   a. Enter a passphrase for the UCS archive.
5. Under Private Keys you have the option to include the private keys associated with the certificates that are installed on the BIG-IP system.
6. When you have entered all of your values, press Finished to generate the UCS archive.

Loading a UCS Archive – WebGUI

1. In the Navigation Pane go to System > Archives.
2. If the UCS archive is not already present on the BIG-IP system, then upload it by clicking Upload.
   a. Browse to the UCS archive and click Upload.
3. Click on the UCS archive you would like to restore.
4. Click on Restore.



Generating a UCS Archive – tmsh In order to generate a UCS archive through tmsh, please issue the following command:

# tmsh save /sys ucs [directory/filename]

You can add the following parameters to the command:

no-private-key – This will save the UCS archive without the private keys.
passphrase – This encrypts the UCS archive and lets you specify a password that is required to decrypt it.
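A few concrete examples, using file names and a passphrase of our own choosing purely for illustration:

# tmsh save /sys ucs /var/local/ucs/bigip1-pre-upgrade.ucs
# tmsh save /sys ucs bigip1-backup.ucs passphrase MySecretPassphrase
# tmsh save /sys ucs bigip1-backup.ucs no-private-key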

Loading a UCS Archive – tmsh In order to load a UCS archive through tmsh, please issue the following command:

# tmsh load /sys ucs [directory/filename]

F5 recommends that you include the hostname of the BIG-IP system in the UCS archive file name, in order to easily associate the UCS archive with that particular BIG-IP system. If you do decide to store the UCS archives locally, you can use the crontab tool to schedule when a UCS archive should be generated. You can also use the logrotate utility to rotate the UCS archives in order to save disk space. Configuring these tools is beyond the scope of this exam.
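As a rough illustration only (the schedule and file name are entirely our own choice, not an F5-documented example), you could run crontab -e as root and add a line along the following lines to generate a UCS archive every Sunday at 02:00:

# crontab -e
0 2 * * 0 tmsh save /sys ucs /var/local/ucs/weekly-backup.ucs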

Customising What Files Are Included in the UCS Archive
Like we mentioned in the LCD Warnings section, you have the ability to specify which files are included in a UCS archive. The included files are defined in the file /usr/libdata/configsync/cs.dat. In order to add custom files to the UCS archive, please use the following instructions:



1. Log on to the CLI of the BIG-IP system.
2. Back up the existing cs.dat file in order to keep the original, by issuing the following command:

cp /usr/libdata/configsync/cs.dat /usr/libdata/configsync/cs.dat.original

3. By default, the /usr file system is mounted in read-only mode. Before editing the cs.dat file we need to remount /usr as read-write. To do this, issue the following command:

mount -o remount,rw /usr

4. Using a text editor, modify the cs.dat file:

vi /usr/libdata/configsync/cs.dat

5. At the end of the file, add the following entries:

#Custom UCS keys
save.[number].file = [Custom File]
save.[number].file = /usr/libdata/configsync/cs.dat.original

Replace [number] with a higher number than the last key being used. You can add as many entries as you need. Since we have modified the cs.dat file, you should also include the cs.dat.original file in the UCS archive in order to save the original one as well.

6. Save the cs.dat file by pressing the ESC key and then typing the following:

:wq

7. Remount the /usr file system as read-only by issuing the following command:

mount -o remount,ro /usr

8. Now both the custom file and the cs.dat file will be included in the UCS archive.

The Differences Between UCS and SCF
The UCS and SCF are not configuration files like bigip.conf or bigip_base.conf. For example, like we previously mentioned, an SCF file is useful when you would like to transfer the configuration of one BIG-IP system to another. Since the SCF tells the tmsh utility to gather the output from all the commands that have built up the running configuration, it acts as a clone of the configuration which can be transferred to a new BIG-IP system. The SCF file is not device dependent and does not contain any licensing or specific objects that might be unique to a device. UCS archives on the other hand do contain this information, which makes them more suitable as a backup of an existing BIG-IP that can be used for disaster recovery. However, using a UCS archive to restore a faulty BIG-IP that has been replaced is not a completely straightforward task, which we'll cover later in this chapter.



Restoring a BIG-IP System From a UCS Archive
In BIG-IP versions prior to v11.x, when you restore a BIG-IP system using a UCS archive it is very important to give the unit the same hostname as the one contained in the UCS archive. The reason for this is that the earlier versions of BIG-IP would only perform a partial restore if the hostname on the system did not match the UCS archive. The difference between a full and a partial restore is the following:

▪ Full Restore – All configuration settings are restored, which includes self-IP addresses, VLANs etc. It will restore the configuration stored in both the bigip.conf and bigip_base.conf files.

▪ Partial Restore – Will only restore the shared configuration, in other words virtual servers, pools and profiles for instance. It will only restore the configuration stored in the bigip.conf file.

This is important to remember when working with BIG-IP systems running v9.x or v10.x. Starting from BIG-IP v11.x and up you will not have to take this into consideration, as the UCS archive restoration will always be a full restore. Regarding versions, F5 recommends that the system always runs the same version as the UCS archive being restored. It is, however, possible to restore, for instance, a BIG-IP v10.x UCS archive onto a system running BIG-IP v11.x.

Licensing Considerations When Restoring From a UCS Archive
When restoring a UCS archive on a BIG-IP system, it is very important to consider how you handle the licensing. Since the UCS archive also contains the license file, when the UCS archive is restored, by default it will also restore the license file. Like we mentioned in the earlier chapters of this book, the BIG-IP license is associated with the specific hardware on which the dossier is generated. This means that when restoring the UCS archive on a system that has another serial number, the BIG-IP license will not match, which causes problems. In order to successfully install a UCS archive you must perform one of the following actions:

▪ Restore the UCS archive to the same system from which it was first generated.
▪ Relicense the BIG-IP system after restoring the UCS archive.
▪ Install the UCS archive without the license file using the tmsh command: tmsh load /sys ucs [directory/filename] no-license. This command is very useful during an RMA process. You will receive the full configuration but no license file.
▪ Reassociate the license with the new system. Contact F5 Technical Support and they will be able to associate the license with the new serial number.
▪ Save the license file (bigip.license) before you install the UCS archive. When the restore is complete, move the license file back to its original directory. A sketch of this approach is shown after this list.
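A minimal sketch of the last approach, assuming default paths and an illustrative archive name (not a definitive procedure):

# Keep a copy of the current license outside /config
cp /config/bigip.license /var/tmp/bigip.license.saved
# Restore the archive (this overwrites /config/bigip.license)
tmsh load /sys ucs /var/local/ucs/replacement-unit.ucs
# Put the original license back and reload the configuration
cp /var/tmp/bigip.license.saved /config/bigip.license
tmsh load /sys config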

When you restore a UCS archive that contains a different license than the one supposed to run on the system, make sure that the license contained in the UCS archive has the same functionalities and modules licensed (GTM, LTM, ASM etc.). If you need assistance with this, you can always contact F5 Technical Support.

Other Considerations When Restoring From a UCS Archive
In regard to versions and licensing, there are a few specific scenarios that are good to know about when restoring from a UCS archive that was not generated on the system being restored.



When you restore a BIG-IP system licensed with the BIG-IP DNS (GTM) module, the server, DNS and GTM certificates and keys will not be the same. This will cause the encrypted communication used for configuration synchronisation to fail. In order to solve this, the certificate exchange process will have to be run again on the BIG-IP system that is being restored. Also, when you have restored a BIG-IP DNS (GTM) system using a UCS archive, it is a good idea to temporarily turn off synchronisation. The reason for this is that if you restore a BIG-IP DNS system using a UCS archive, it will synchronise its configuration once it is restored. This might overwrite the current configuration with old configuration, which might not be the desired result. In order to temporarily turn off synchronisation on a BIG-IP DNS system, please use the following instructions:

Preventing Synchronisation When Installing a UCS Archive on a BIG-IP DNS (GTM) System

BIG-IP v11.5.x and later
1. Physically disconnect the BIG-IP system's TMM switch port interface from the network. Do not disconnect the system's management interface!
2. Log on to the WebGUI using the management interface.
3. Restore the system using the UCS archive.
4. After the UCS installation is complete, go to DNS > Settings > GSLB > General.
5. Clear the Synchronize check box.
6. Click Update to save the change.
7. Reconnect the BIG-IP system's TMM switch interface to the network.
8. Log on to the CLI of the BIG-IP system.
9. In order to add the BIG-IP DNS (GTM) system to the synchronisation group, please run the following command:

# gtm_add [IP address of a member of the target BIG-IP DNS (GTM) synchronisation group]

BIG-IP v11.0.0 – 11.4.1
1. Physically disconnect the BIG-IP system's TMM switch port interface from the network. Do not disconnect the system's management interface!
2. Log on to the WebGUI using the management interface.
3. Restore the system using the UCS archive.
4. After the UCS installation is complete, go to System > Configuration > Global Traffic > General.
5. Clear the Synchronize check box.
6. Click Update to save the change.
7. Reconnect the BIG-IP system's TMM switch interface to the network.
8. Log on to the CLI of the BIG-IP system.
9. In order to add the BIG-IP DNS (GTM) system to the synchronisation group, please run the following command:

# gtm_add [IP address of a member of the target BIG-IP DNS (GTM) synchronisation group]



Delayed Load on BIG-IP ASM Module
Another BIG-IP module that has a specific exception is the ASM module. When you install a UCS archive containing ASM configuration, you need to be sure that the system installing the archive actually has ASM provisioned. Otherwise the BIG-IP system will postpone the installation of the ASM configuration (delayed load), which causes the BIG-IP system to set the wrong permissions on the ASM configuration files. This ultimately causes the MySQL database to fail.
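As a quick sanity check before loading such an archive (a minimal sketch; the exact output format may differ between versions), you can verify the provisioning state of ASM from tmsh:

# tmsh list /sys provision asm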

vCMP Considerations When Restoring From a UCS Archive
We have not discussed vCMP to a great extent in this book as it is a more stand-alone subject. However, when restoring a UCS archive on a BIG-IP system running vCMP it is good to know the following. On a vCMP system there are two UCS archives available, the vCMP Host UCS archive and the vCMP Guest UCS archive. The vCMP Host UCS archive will only restore the vCMP host with its required configuration. It will not restore any of the vCMP guests' virtual disks. It will only attempt to restore the vCMP guests to a base state by performing the vCMP guest provisioning, installation and deployment. Once the vCMP guest has been restored to a base state you will be able to restore the guest system by installing the vCMP Guest UCS archive, which contains all of the configuration files, along with the other files that you specified during the creation of the archive.

Preventing Service Interruptions When Replacing a BIG-IP System in a Redundant Pair
In order to prevent service from being interrupted when replacing a BIG-IP system in a redundant pair, please use the following instructions. If you are running BIG-IP v9.x – v11.x then, before using the following instructions, turn off Connection Mirroring.

1. Log on to the BIG-IP system using the CLI and the root credentials.
2. Power down the failed BIG-IP system using the following command in the Linux shell: shutdown -h now. This will cause the BIG-IP system to go into halt mode so that you can safely turn off the power without causing any damage to the device.
3. Disconnect all of the cables on the faulty BIG-IP system and remove it.
4. Install the new BIG-IP system.
5. If the BIG-IP systems were using a serial cable for failover, reconnect the serial cable to both BIG-IP systems. Make sure you connect it properly in order to prevent accidental failovers which might cause traffic disruptions.
6. Power on the BIG-IP system. Do not attach the network cables yet!
7. Configure the management interface using either the LCD screen, the serial connection or the management port using either SSH or the WebGUI (Initial Setup using the standard IP address 192.168.1.245).
8. Restore the configuration using the UCS archive from the failed BIG-IP system. Since this is a replacement unit, take the licensing into consideration and use one of the methods mentioned previously in this chapter.
9. When the UCS archive has finished restoring the configuration and you have verified that it has been loaded without errors, depending on your setup, perform one of the following steps:



a. If you are running your redundant pair with serial cable failover, reattach all of the network cables.
b. If you are not running your redundant pair with serial cable failover, power down the replacement unit, connect all of the network cables and then power up the replacement unit. This is a proactive measure in order to lower the risk of IP conflicts until network failover traffic has been restored between the two units.
10. Situational: If you were running BIG-IP version v9.x – v11.x, re-activate connection mirroring.

Managing Software Images and Upgrades

Legacy Version Numbering Schema
In TMOS v11.x (excluding 11.5.x) the Legacy Version Numbering Schema is used. With this software release plan, F5 frequently releases new versions of the BIG-IP software and they are divided into different categories. Depending on the category, the impact of the upgrade may differ, which makes it easier to justify the upgrade.

Major Software Versions
The major software versions are the base versions of the BIG-IP system, for instance v10.0.0, v11.0.0 and v12.0.0. These versions contain major new features which can significantly alter how the BIG-IP system operates. For instance, v11.0.0 introduced Device Service Clustering, iApps and Analytics, among other features. When upgrading to a major version the BIG-IP system might experience issues since it introduces major features and alters how the system operates. But it is worth noting that this varies greatly from system to system and depends on what features you are currently running on your BIG-IP system.

Minor Software Versions
The minor software versions contain all cumulative hotfixes and maintenance versions since the major version. They also introduce minor features and functionalities. For example, version v11.4.0 introduced local traffic policies and flexible resource allocation for vCMP systems, to mention a few. The minor versions use the following format: v11.2.0, v11.3.0 and v11.4.0. Upgrading your BIG-IP system to a minor software version carries slightly less risk than a major software version, but there is still a risk.

Maintenance Software Versions
The maintenance software versions contain all cumulative hotfixes since the previous minor version along with some minor new functionalities. The maintenance software versions use the following format: v11.2.1, v11.2.2 and v11.2.3. Some of the functionalities that have been introduced in a maintenance software version are vCMP support for solid-state drives and support for the VIPRION B2150 blade, which were introduced in version v11.4.1. The further down we go into the different categories, the less risk the upgrade carries. Again, the upgrade might affect the BIG-IP system negatively and cause problems, but it really depends on what features you are running on your BIG-IP system.

Cumulative Hotfixes
The cumulative hotfixes contain software and bug fixes between minor and maintenance software versions. The risk of upgrading a BIG-IP system with a hotfix is very low but again, the risk is always present when upgrading a system since you alter the way it is currently operating. The cumulative hotfixes use the following format: HF1, HF2, HF3. For instance, the system might present the following version, BIG-IP v11.6.0 HF5. This means that the system is running version v11.6.0 with Hotfix 5.



Sometimes F5 also releases engineering hotfixes (similar to patches). However, these are only obtainable through an F5 Support case. These engineering hotfixes are not necessarily added to the cumulative hotfix packages.

The Tick Tock Release Cycle
Since TMOS v12.x (including v11.5.x) F5 has changed their software release plan into what they call the Tick Tock Release Cycle. The Tick Tock Release Cycle includes the following categories:

▪ Tick (x.0.0) – Major architectural changes to TMOS.
▪ Tock (x.1.0) – Enhancements supporting Ticks – starts the Long-Term Stability Releases.
▪ x.1.1 to x.1.n – Maintenance Releases. For instance 12.1.1.

For the Tick Releases (Major Releases) (x.0.0) the Standard Support phase begins on the first customer ship date and extends to the first maintenance release of the subsequent Long-Term Stability release (x.1.1) or after 15 months. Whenever this happens the Major Release reaches its End of Software Development (EoSD) phase. To use one example, the Standard Support phase for BIG-IP 12.0.0 will either end within 15 months or when 12.1.1 is released. For the Tock Releases (x.1.0), each new release marks the start of a Long-Term Stability Release. The initial release (x.1.0) will contain new features, stability improvements and new hardware support. The subsequent releases (x.1.1 to x.1.n) are called maintenance releases, which contain security fixes, diagnostic and supportability improvements, new hardware support and fixes for product defects. Software development activities will only continue on the latest maintenance release of each Long-Term Stability Release as long as it is within the Standard Support phase. The Standard Support phase for the Long-Term Stability Release begins from the first customer ship date of the x.1.0 release and will extend for five years.



Once the Standard Support phase expires the release will enter its End of Software Development (EoSD) which means it will no longer be further developed. However, these releases will still be supported by F5 support until they reach their End of Technical Support (EoTS) date. If F5 has not declared an exception, this date will be announced one year after the End of Software Development (EoSD). Even though the release cycle has changed and F5 focuses on delivering more long-term stability releases, hotfixes will still be released in order to correct bugs in the BIG-IP systems.

Release Notes
For every new software version (no matter what category) F5 will publish release notes. The release notes include the following information:

▪ Assists the BIG-IP administrator in determining if the release is applicable and/or appropriate for their environment.
▪ Links to the user documentation for the release.
▪ Lists all of the new features and functionalities contained in the version. These features can also exist in prior versions which have been rolled into this version as a result of incorporating prior versions.
▪ Lists all issues that this version will correct.
▪ Contains an installation checklist which includes the steps required to upgrade from earlier versions.
▪ Information on how to install the software.
▪ A list of the post-installation tasks that should be executed prior to the upgrade.
▪ Lists all behavior changes and known issues existing in the version.
▪ How to contact F5 support in case you need additional resources or technical support.

It is very important to review the release notes before you upgrade a BIG-IP system. As mentioned above, the release notes contain a list of all the new features and the known issues in the version. This can give you an indication of what problems might arise when the upgrade is complete. Also, the post-installation tasks and upgrade path might be different depending on what version you are planning on upgrading to. For instance, if you are currently running version v.9.0.0 and planning on installing v.11.0.0, you actually have to first upgrade your device to v.10.0.0 before proceeding with upgrading to version v.11.0.0.



There is also another aspect of upgrading your environment which F5 cannot make recommendations on, and that is your internal routines that dictate how an upgrade should be performed. These routines are usually set up by a Change Management team and they can look very different from organisation to organisation. For instance, some organisations require a risk assessment every time you perform an upgrade, which determines the impact of the upgrade and what to do in case it does not go as planned. Internal routines are something that every organisation needs to set up on their own, and the Information Technology Infrastructure Library (ITIL) standard is a good place to start.

Overview of the Disk Management Process
Since BIG-IP v11.x, the system uses the logical volume management (LVM) disk-formatting scheme. This is an enhanced software image manager that deploys logical rather than physical storage. The logical volumes may span multiple hard drives and can be extended without interruptions. One of your objectives as a BIG-IP administrator is to monitor the disk capacity and make sure the system does not get too low on space. Whenever the BIG-IP system runs low on disk space you may experience one or several of the following symptoms:

▪ The WebGUI may be inaccessible.
▪ Upgrades or hotfix installations will fail to complete.
▪ The daemon log messages will indicate failed write operations and problems opening certain files.
▪ The system may be unable to generate and save any new UCS archives.
▪ Performance degradation.

In order to prevent the disk from becoming low on space, F5 has compiled the following guidelines:

▪ Store long-term maintenance files such as UCS archives or SCF files on off-site storage such as a network share.
▪ When creating and generating files such as tcpdumps (packet captures), UCS archives and qkview files, store the files under a full path location rather than the directory you are currently located in. The most optimal location is /shared/tmp.
▪ Verify the current disk space on a periodic basis by utilising the command df -h. This can also be performed remotely using monitoring software and SNMP.
▪ Periodically clean out old files that are no longer of use. This could be old tcpdumps (packet captures), qkviews or software images.
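To illustrate the second and third guidelines (the capture filter, host address and file name are only examples), a packet capture written to /shared/tmp and a quick disk check could look like this:

# Write the capture to an absolute path under /shared/tmp instead of the current directory
tcpdump -ni 0.0 -s0 -w /shared/tmp/example-capture.pcap host 10.1.1.10
# Check how much space is left on the mounted volumes
df -h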

The BIG-IP Hard Disk and Boot Locations
Every BIG-IP system has at least one hard drive installed. The size and number of drives are determined by the model. For instance, the BIG-IP 4200 comes with a 500GB hard drive while the BIG-IP 10000 has two 1TB hard drives installed that are configured in RAID 1. As with other specifications, a bigger box usually means better specifications. The BIG-IP Virtual Editions have virtual hard drives that store data. The hard drives store both the BIG-IP software and the associated files which the software uses. This includes the configuration files, certificates and more.



Software Images
F5 distributes their software images as ISO archive files which have the file extension .iso. By default, software images should be stored in a specific location on the hard drive: /shared/images. Every time you add an image to the BIG-IP system using the WebGUI it will place the image in /shared/images. If you do not add the software images to this location they will not be visible through the WebGUI and tmsh will not be able to find the image when trying to install it. Software images have three different states. These are described in the following list:

▪ Available Images – The available images state indicates that the software image has been successfully imported to the BIG-IP system, either by uploading it through the WebGUI or by transferring it to the BIG-IP system and placing the image in the /shared/images directory. These images are available for installation.

▪ Installed Images – The installed images are the images that have been installed on the BIG-IP system's boot locations on the hard drive. The number of boot locations a BIG-IP system can create mainly depends on how large the hard drive is.

▪ Active Image – The active image is the software image that the BIG-IP system has currently booted up on. Only one software image can be active at a time and the only exception to this is if vCMP is used on the BIG-IP system.

The boot location names have the format HDx.y where x is the hard drive number and y is the boot location number. You can view how much disk space all of the boot locations take up by going to System > Disk Management. The figure below displays the same hard drive as the previous figure containing the Image List.

You can also view all of the boot locations by going to System > Software Management > Boot Locations. This is displayed in the following figure which comes from the same system as the previous examples.



HD1.1 – This boot location is currently inactive and is not the default boot-up volume. It has the software image BIG-IP v11.6.0 (build 0.0.401).

HD1.2 – This boot location is currently active and the default boot-up volume. This is indicated by the Status Active and the Default Yes. This means that when the BIG-IP system is rebooted it will start up boot location HD1.2. It has the software image BIG-IP v12.0.0 (build 0.0.606).

In order to run your BIG-IP system you will only need to have one boot location with an active software image. However, whenever you install a new software image or hotfix, you need to do this from an active boot location and specify a non-active boot location. This means that in order to install new software images on a BIG-IP system you will need to have at least two boot locations. The great thing about this is that you can install the new software image on a non-active boot location and reboot the BIG-IP system into that boot location to test and verify the installation. If something in the new version has caused a major issue with your environment, you can just reboot the system into the old boot location and you are back to the previous state. This is very beneficial. Whenever you install a new software image or hotfix onto a new boot location the configuration and license will be automatically transferred to the new boot location.

How to Install a New Software Image
In the following sections we'll cover all of the steps necessary in order to install a new software image. The complete software installation process is the following:

1. Determine what software image and hotfix you would like to install.
2. Download the software image including the MD5 hash sum and release notes.
3. Go through the release notes and fulfill the appropriate installation pre-requisites.
4. Import the software image to the BIG-IP system using either the WebGUI or by transferring it to /shared/images using a file transfer program.
5. Perform an MD5 checksum on the image file once it has been transferred to the BIG-IP system. This is to verify that the file is intact and not corrupted.
6. Re-activate the license prior to the upgrade – so that the Service Check Date of the bigip.license file is within the BIG-IP version date.
7. Install the software image on an inactive boot location.
8. Activate the new software image's boot location, reboot the device and confirm that everything is working as expected.



Determine the Software Image to Install
This really depends on what features your BIG-IP system is using and what platform you are running. There are numerous articles available at AskF5.com that can assist with this. F5 has published a hardware/software compatibility matrix that displays which versions each hardware platform can use. This is published in the solution article K9476: The F5 hardware/software compatibility matrix. You can also check the latest hotfix version for each BIG-IP release, which is published in the solution article K9502: BIG-IP hotfix and point release matrix. Like we previously mentioned in this chapter, the release notes are a great way to determine if a particular version is a good choice for your BIG-IP system. Another solution article worth reading is K5903: BIG-IP software support policy, which contains the new software release plan from F5. F5's own best practices state that the most stable version (using the Tick Tock Release Cycle) is the latest Long-Term Stability Release with the latest Maintenance Release. This information is contained within this solution article.

Downloading the Software Images/Hotfixes
You can download the Software Images and Hotfixes from https://downloads.f5.com. You will need to have a valid support account in order to log in. It is recommended that you download both the ISO file (.iso) that contains the actual image and the md5 file (.md5). The MD5 file is used to check the integrity of the ISO once it has been uploaded to the BIG-IP system. In the following picture we display how and where you can download the software from F5:



How to Import the Software Images/Hotfixes to the BIG-IP System
There are two ways you can import the Software Image to the BIG-IP system. The first one is through the WebGUI using the Software Management page. The Software Management component will automatically store the image under /shared/images/. The complete procedure is explained in the following instructions:

1. Log on to the BIG-IP system using the WebGUI and navigate to System > Software Management.
2. Choose your scenario:
   a. If you are installing a new software image, click Image List.
   b. If you are installing a hotfix, click Hotfix List.
3. Once the correct page has loaded, click Import and then Browse. Navigate to the *.iso file.
4. Select the *.iso file and then click Import to copy the file to the BIG-IP system.
5. Do the same with the MD5 checksum file.

When importing the image to the BIG-IP system using the WebGUI, do not leave the Import screen during the transfer because this will cause the transfer to stop. Remain at the Import screen until the import is completely finished.

In order to install a hotfix you will need to have the base image of that version available on the system. Otherwise the installation of the hotfix will never start and the installation status will state "waiting for image".

The second way you can import the Software Image is by transferring the file using SFTP (FTP over SSH) or SCP. There are multiple applications available to assist you with this, which are also free of charge. Once you are connected to the BIG-IP system, browse to the folder /shared/images/ and add the image to that location. Once the transfer is complete you should be able to view the image in the WebGUI as well. The csyncd process is responsible for populating the software image table. This process needs to finish its work before the software images appear in the WebGUI or the CLI.
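As a small illustration (the management address shown is the default setup address mentioned earlier; your address and file name will differ), copying an image from an administrative workstation with scp could look like this:

scp BIGIP-12.0.0.0.0.606.iso root@192.168.1.245:/shared/images/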

Checking the MD5 Checksum of an Image File
When F5 releases software images or hotfixes they run the files through an MD5 algorithm in order to get a unique series of letters and numbers. This is a unique ID of that file and, in order to confirm that it is not corrupted after you have uploaded it to your BIG-IP system, you run the following command in the CLI:

# md5sum [filename]

The end result should match the value specified in the MD5 file. This is displayed in the following example:



As you can see, the value in the MD5 file matches the result we received on the BIG-IP system. This means that the file located on the BIG-IP system is not corrupted. You can also upload the md5 file [imagename].md5 to the /shared/images/ location and run the command:

# md5sum --check [imagename].md5

It will prompt the following results:

# md5sum --check BIGIP-12.0.0.0.0.606.iso.md5
BIGIP-12.0.0.0.0.606.iso: OK

If the results are not OK then the file is corrupted and you should re-upload the image to the BIG-IP system.

Re-activate the License Prior to the Upgrade
In the file /config/bigip.license there is a line called the Service Check Date. This date is the same as when you previously licensed your BIG-IP system or when your service contract for the device expires. For example, if you have a service contract which ends November 30 and you licensed your device on August 31, your Service Check Date will be August 31. For specific BIG-IP versions you will need to have a Service Check Date that is either the same as or later than the License Check Date. Some examples are:

Product/Version    License Check Date
BIG-IP 12.0.0      2015-08-03
BIG-IP 11.6.0      2014-08-05
BIG-IP 11.5.4      2013-12-05

Whenever a BIG-IP device boots into a specific version, the Service Check Date is compared to the License Check Date for that particular BIG-IP version. If the Service Check Date is older than the License Check Date the system initialises but the configuration is not loaded. In order to load the configuration the Service Check Date needs to be updated.



F5 does this to ensure that you have an active support contract in order to keep your BIG-IP system running the latest releases. In order to determine what Service Check Date you presently have, use the following instructions:

1. Log on to the BIG-IP system using the CLI.
2. Navigate to the directory /config by typing the command:

cd /config

3. Filter out the Service Check Date from the bigip.license file by typing the following command:

grep "Service check date" bigip.license

The output will look like the following:

Service check date : 20150920

4. Then reference the License Check Date from the AskF5 article K7727: License activation may be required prior to a software upgrade for the BIG-IP or Enterprise Manager system.

In order to prevent any errors, simply re-activate the license using the instructions we presented in the earlier sections of this book before you upgrade your BIG-IP system. Whenever you re-activate the license it will reload the current configuration, meaning that you will experience an outage. When the BIG-IP system is configured in a High-Availability pair this will not trigger a failover event. This means that even though you have configured High-Availability, if you perform this activity on the active device you will experience an outage.

Installing the Software Image
When you have determined the Software Image you want to install and you have verified the MD5 checksum of the file, then you are ready to install it on the BIG-IP system. In order to install the Software Image or the Hotfix, you will need to have a non-active volume. If you do not have a non-active volume, you will have to create one. You also need sufficient disk space on the BIG-IP system for the image in its uncompressed format.

Installation Using the WebGUI
In order to install the Software Image using the WebGUI, please use the following instructions:

1. Log on to the WebGUI and navigate to System > Software Management.
2. Then navigate to either Image List or Hotfix List, depending on whether you are installing a new software image or a hotfix.
3. Select the Software Image/Hotfix you would like to install by clicking the checkbox to the left of the image.
4. Once the Software Image/Hotfix is selected, click Install. This will prompt the Install Software Image window.
5. Select an available hard disk from the Select Disk pull-down menu and then select an available volume. You will only be able to select a non-active volume. If you do not have any available volumes, simply type a non-used volume number and the BIG-IP system will create it.
6. Click Install in order to start the installation.

Once the installation has started, a progress bar will appear in the Image List. The installation may take several minutes to complete and once it is complete it will result in the install status "complete".

You can shortcut the installation of a Hotfix by going directly to the Hotfix installation. It will pick up the base ISO but you will need to manually enter the slot ID as it is not presented in the WebGUI.

Installation Using tmsh
To install a software image or hotfix using tmsh, use the following commands:

When Installing a Software Image

(tmos) # install /sys software image [image_name].iso volume [volume_name]

For example:

(tmos) # install /sys software image BIGIP-12.0.0.0.0.606.iso volume HD1.3

If the volume you would like to install on does not exist yet, add the create-volume parameter after the command:



(tmos) # install /sys software image BIGIP-12.0.0.0.0.606.iso volume HD1.3 create-volume

When Installing a Hotfix

(tmos) # install /sys software hotfix [hotfix_name].iso volume [volume_name]

For example:

(tmos) # install /sys software hotfix Hotfix-BIGIP-12.0.0.2.0.644-HF2.iso volume HD1.3

In order to view the progress of the installation, use the following command:

(tmos) # show /sys software status

Booting the BIG-IP System Into the New Volume
After the installation is done and successful, you are ready to boot into the new volume. In order to do so, use the following instructions:

1. Log on to the WebGUI and navigate to System > Software Management > Boot Locations.
2. In the list, select the Boot Location you would like to activate by clicking on it.
3. On the General Properties page you are presented with a before and after state containing the current and the new boot location, version and build. You also have the option to copy the current configuration into the new boot location.
4. Click Activate in order to activate the partition and reboot the BIG-IP system. The system will reboot onto the new boot location.

If the reboot is successful you will be able to log back into the WebGUI. Do note that this might take several minutes and during this period you will not have any access to the device nor will it pass any client traffic. Therefore, it is always best to do so during a maintenance window if the reboot will affect client traffic.

In order to boot the BIG-IP system into the new volume using CLI, use the following commands:

(tmos) # reboot volume [volume]



For example:

(tmos) # reboot volume HD1.3

You can also change the active boot location through the Linux BASH shell by typing the following command:

switchboot

This will prompt a program within your SSH session where you can select a new default boot location. Once you have selected the new boot location, reboot the device using the following command:

reboot

Rolling Back to a Previous Version
There are times when you need to perform a rollback of a newly installed version. This might be caused by the upgrade breaking a certain feature which is critical to the business. In order to roll back, you just need to activate the previous boot location like we did in the previous instructions.

Handling the Configuration Between Volumes
The BIG-IP system does not sync the configuration between volumes. By default, when you install a new Software Image on the BIG-IP system it will automatically copy the configuration of the active volume into the newly installed volume. In most cases this is the desired behavior but there are some scenarios where you want to keep the configuration separate, for instance in your test environment. You may want to test different scenarios through different configurations and having the configuration files automatically transferred might be a problem.



This can be easily controlled by adjusting the values of two database keys called:

▪ LiveInstall.MoveConfig
▪ LiveInstall.SaveConfig

In the following list you can see the outcome of all combinations of these two keys and how they affect the BIG-IP system:

▪ LiveInstall.MoveConfig = enable, LiveInstall.SaveConfig = enable – This is the default behavior. This will cause the BIG-IP system to install the configuration of the active boot location into the new boot location.
▪ LiveInstall.MoveConfig = disable, LiveInstall.SaveConfig = enable – The configuration of the target boot location (if there is any) will be re-installed and unaffected after the reimaging is complete.
▪ LiveInstall.MoveConfig = enable, LiveInstall.SaveConfig = disable – No configuration will be installed on the target boot location.
▪ LiveInstall.MoveConfig = disable, LiveInstall.SaveConfig = disable – No configuration will be installed on the target boot location.

In order to view the current value of these database keys, use the following commands:

# tmsh list sys db liveinstall.moveconfig
sys db liveinstall.moveconfig {
    value "enable"
}
# tmsh list sys db liveinstall.saveconfig
sys db liveinstall.saveconfig {
    value "enable"
}

In order to modify these values, use the following commands:

# tmsh modify sys db liveinstall.moveconfig value disable|enable
# tmsh modify sys db liveinstall.saveconfig value disable|enable

Modify these values before you install the new software image and remember to save the configuration after modifying the values. Do this using the following command:

# tmsh save /sys config

There is also another option for copying the configuration from one volume to another. This can be done through the CLI using the command cpcfg.



The command utilises the following syntax:

cpcfg [options] [destination_location]

Options:
--source=SLOT : Get configuration from the specified slot (eg: HD1.1)
--verbose : Increase verbose level (cumulative)
--reboot : Immediately switch to target location after transferring configuration

For example:

# cpcfg --source=HD1.2 HD1.3

Please note that the cpcfg command has its limitations, which are:

▪ The version of the target volume must be the same as or later than the version of the source boot location. If you specify a target volume which has an earlier version than the source, you will be presented with the following error message:

info: New version (11.4.0) is not >= originating version (11.4.1); configuration is not compatible. configuration roll-forward desired but not compatible.

▪ You cannot specify the currently active volume as the target volume. If you do so, you will be presented with the following error message:

Copy to active location (HD1.6) is not supported.

When using cpcfg on a VIPRION system, you must run cpcfg from the cluster shell (clsh) on the primary blade. On VIPRION systems you need to use the cpcfg command in order to transfer the configuration between volumes. Remember what the cpcfg command can be used for.

Best Practices When Upgrading a BIG-IP System in a HA-pair
The great thing about configuring your BIG-IP systems in a High-Availability pair is the ability to have maintenance windows with very low or non-existent impact. It really depends on whether you have configured your HA pair with stateful failover or not. When you are upgrading BIG-IP systems in an HA-pair, F5 recommends that you perform the upgrade on the Standby unit first. When the installation is complete, fail over the traffic from the Active unit to the newly installed Standby unit. You do this in order to verify that your applications do not have problems with the new version.



When the newly installed and presently active unit has been verified and everything is working as it should, perform the same procedure on the other, now standby, unit. This makes sure that both systems are upgraded in their standby state and are verified once they are processing client traffic. The complete procedure is summarised in the following list:

▪ Perform the installation on the Standby Unit (Unit A).
▪ Once the installation is complete, verify that the applications still work after the upgrade by failing over the traffic from the Active Unit (Unit B) to the Standby Unit (Unit A).
▪ Let the newly Active Unit (Unit A) process client traffic for a moment in order to verify the applications. Test your most critical applications.
▪ Once the Active Unit (Unit A) has been verified, perform the installation on the Standby Unit (Unit B) which was previously active.
▪ When the installation is complete, perform another failover from the Active Unit (Unit A) to the Standby Unit (Unit B) in order to verify the traffic.
▪ Again, let the client traffic pass through the newly installed Active Unit (Unit B) and test your most critical applications. If everything is working as expected, then the upgrade was successful.

Even though F5 does not include this step in their best practices, putting the BIG-IP device which is being upgraded into Forced Offline mode is highly recommended in order to make sure that the device does not assume the Active role. Therefore, before you start upgrading the Standby unit, activate the Forced Offline feature and perform the upgrade. Once it is complete, make sure that it is functioning correctly. After this is done, release the Forced Offline feature and fail over the traffic to it.

Proceed with caution when using the Forced Offline feature. For instance, when managing your BIG-IP system using the self-IP addresses, for VIPRION systems (running as both vCMP hypervisors and regular BIG-IPs) and vCMP guests, the connection is terminated and new connections are not allowed. For more information see the AskF5 article K15122: Overview of the Force Offline option.
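As a hedged sketch of how this could be done from tmsh (exact behaviour can vary between versions, so verify against the documentation for your release), forcing a unit offline and releasing it again looks roughly like this:

# On the standby unit that is about to be upgraded
(tmos) # run /sys failover offline
# ... perform the installation and verification ...
(tmos) # run /sys failover online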

Potential Problems When Upgrading Your BIG-IP System
There are some potential problems that can arise when upgrading your BIG-IP system. Since booting your BIG-IP system into the new boot location is part of the upgrade, a bigip.license file with an out-of-date Service Check Date is one of the potential problems. If this happens you will just have to re-activate your license, the configuration will be loaded and you will be up and running again. Another problem that might arise is having insufficient disk space. This is more common on BIG-IP VE editions but can also happen on appliances. We covered this in the section entitled Overview of the Disk Management Process. Before you upgrade your BIG-IP system you should make sure you have enough disk space for the new Software Image in its uncompressed state. These issues are fairly easy to avoid by just having a good upgrade procedure and most of them are documented in each release note.



One issue that is not that common but still happens from time to time is the upgrade script validation failing, where loading the UCS file during the installation fails. You will be presented with this error when utilising the command:

tmsh show sys software

failed (UCS application failed; unknown cause.)

In order to determine what caused this error you will have to review the log file /var/log/liveinstall.log. When you are installing a new Software Image or Hotfix on your BIG-IP system, it uses the application called tm_install. tm_install writes all of its operations to the logfile /var/log/liveinstall.log. Therefore when an installation fails, this is the logfile you need to review in order to determine what caused the failure. One of tm_install's default operations is to extract a fresh UCS archive from the active boot location and transfer it over to the new boot location where it is extracted and installed. One problem that can occur during this process is that the configuration from the active boot location is not compatible with the new boot location, causing the configuration installation to fail. This is presented in the following picture:



If you are presented with this error, you can also review the kernel or ltm log and see if you can find any more information from the time-frame when the configuration installation failed. If you cannot find any more information in those log files, check the release notes for that particular Software Image/Hotfix for any known issues and see if your BIG-IP system is affected by them. If you cannot find any related issues, then the next step would be to open up a support ticket with F5 support.
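As a minimal illustration (the search pattern and number of lines are arbitrary examples), the relevant logs can be reviewed from the CLI like this:

# Look for errors logged by tm_install during the installation
grep -i error /var/log/liveinstall.log
# Review the most recent entries in the ltm log
tail -n 100 /var/log/ltm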

Enterprise Manager (EM)
The Enterprise Manager is a product that assists the BIG-IP administrator with handling administrative tasks for multiple F5 devices. These administrative tasks include:

▪ User account management.
▪ Software installation and upgrades.
▪ Configuration archival and restoration.
▪ Certificate monitoring and security policy management.
▪ Software image storage.
▪ Performance monitoring.

Enterprise Manager also gathers and stores information from the devices it is configured to manage and this is accessible from a web-based GUI. As of this book's writing, the latest version of Enterprise Manager is 3.1.1, which supports the following F5 products and versions:

▪ Enterprise Manager version 1.6 to version 1.8
▪ Enterprise Manager version 2.x.x and later
▪ Enterprise Manager Virtual Edition (VE) version 2.2.0 and later
▪ BIG-IP versions 11.x.x
▪ BIG-IP versions 10.0.1 and later in the BIG-IP version 10.x.x software line
▪ BIG-IP Local Traffic Manager Virtual Edition (VE) version 10.2.x and later
▪ BIG-IP version 9.3.1 to BIG-IP version 9.4.x
▪ BIG-IP Secure Access Manager version 8.0.x
▪ WANJet version 5.0.x

Performing Basic Device Management

Adding Devices to Enterprise Manager
Before you can start managing your F5 devices you first need to add them to your Enterprise Manager; this process is called discovery. Once they have been discovered they will be displayed in the Device List screen. The number of devices that you can add to your Enterprise Manager is determined by your license. Neither the Enterprise Manager itself nor its peer counts towards the licensing count.

The Discovery Process
In order to add new devices to the Enterprise Manager you create a Discovery Task. In this discovery task you search using specific IP addresses or by specifying an IP subnet. During the discovery process the Enterprise Manager tries to log on to the available devices by using administrative credentials that you supply when creating the discovery task. If the discovery succeeds it will proceed with adding them to the Device List.



Enterprise Manager attempts to log on to the F5 devices using port 443. In order for the discovery to succeed you need to make sure that the Enterprise Manager can access the devices using this port.

Discovering BIG-IP Devices
In order to discover BIG-IP devices, please use the following instructions:

1. Open up a browser session to the Enterprise Manager and log in using the admin credentials.
2. Navigate to Enterprise Management and click Devices.
3. On the Devices screen, click Discover. This will open up the Discover page.
4. For the Scan Type, choose how you want the Enterprise Manager to discover your devices. You have the following options:
   a. Address List
   b. Subnet
5. When using Address List, perform the following steps for each device:
   a. In the IP address box, type the IP address of the device.
   b. In the User Name and Password boxes, type the username and password for the admin credentials.
   c. When done, click Add.

When discovering VIPRION platforms you must use the floating cluster IP address and not the cluster member IP addresses.

6. When using Subnet (class B or C network), perform the following steps:
   a. In the IP address box, type the IP address of the device.
   b. In the Network Mask box, type the netmask to use when searching the network.
   c. In the User Name and Password boxes, type the username and password for the admin credentials. These need to be the same for all devices.



7. When you have entered the information, click Discover. This will launch the Task Properties screen. The successfully discovered devices will appear in the Properties area and the list keeps refreshing until all addresses in the range have been scanned or until you click Cancel Pending Items.

Discovering non-BIG-IP Devices
When discovering non-BIG-IP devices, including WANJet systems, it is recommended that you first create a CSV file that contains the device IP addresses, user names and passwords. CSV stands for Comma-Separated Values and the text inside a .csv file is specially formatted. In our case each line represents a device and holds three comma-separated values:

[device], [username], [password]

The value [device] represents the IP address of the device, [username] represents the username which the Enterprise Manager should use to log on to the device and [password] represents the password of that user. It could for instance look something like this:

172.16.1.100, admin, Passw0rd
172.16.1.101, admin, Passw0rd
172.16.1.102, admin, Passw0rd
172.16.1.103, admin, Passw0rd
172.16.1.104, admin, Passw0rd
172.16.1.105, admin, Passw0rd



In order to discover devices using a CSV file, please use the following instructions:

1. Open up a browser session to the Enterprise Manager and log in using the admin credentials.
2. Navigate to Enterprise Management and click Devices.
3. On the Devices screen, click Discover. This will open up the Discover page.
4. Click on Import from File…
5. Click on Browse…
6. Locate the CSV file you would like to import and click Open. The name of the file will appear in the File Name box.
7. Click Import. This will launch the import process and once it has completed, a list of the imported IP addresses and user names will appear in the Address List box.
8. Click Start Task to start the discovery task which will add the devices to the Device List.

Performing Basic Tasks on Managed Devices
Once you have finished the Discovery Process and added your devices to the Device List you can start performing basic management tasks, which include verifying and testing device communication, rebooting devices, deleting devices and specifying the device refresh interval. All of these will be explained in the following sections.

Verifying and Testing Device Communication
When the discovery process has been successful and you have added your devices to the Device List, the Enterprise Manager can see the device on that IP address. However, this does not necessarily mean that the device can communicate back to the Enterprise Manager. In order to verify this, you need to first confirm that the managed device has the correct IP address for the Enterprise Manager. When the managed device has been discovered it saves the IP address that the Enterprise Manager used for the discovery. In order to check the EM Address that the managed device uses, please use the following instructions:

Verifying the Enterprise Manager IP Address on a Device
1. Open up a browser session to the Enterprise Manager and log in using the admin credentials.
2. Navigate to Enterprise Management and click Devices.
3. On the Device List screen, click on the device for which you want to verify the Enterprise Manager IP address.
4. Change the Device Properties from Basic to Advanced.
5. Under EM Address, verify that the IP address is correct. This is the IP address that the device uses to communicate with the Enterprise Manager.



In order to verify that the connection works from the device to the Enterprise Manager we’ll have to SSH to the device and from there try to connect to the Enterprise Manager. Note that this will require Bash (CLI) access to the device. In order to verify the connection to the Enterprise Manager please use the following instructions:



Verifying Device Connection to Enterprise Manager
1. Launch a terminal client such as PuTTY and SSH to the BIG-IP device on port 22.
2. Log in using the root account.
3. Type the following command:

config# telnet [EM_address] 443

[EM_address] represents the IP address of the Enterprise Manager and is the same as the one you discovered in the previous instruction.

4. If the command is successful, you should be prompted with the following result:

Trying 172.16.1.41…
Connected to 172.16.1.41 (172.16.1.41).
Escape character is '^]'.

If you receive the message connection refused, then you may need to change the IP address of the Enterprise Manager or verify that there is no firewall between the device and Enterprise Manager that blocks the traffic.

Rebooting Managed Devices
In some scenarios, you may need to reboot devices and this is possible to do from the Enterprise Manager. One example where you need to reboot a device is after you have performed an upgrade. Since the upgrade will be installed on a new partition, in order to complete the upgrade the device needs to be rebooted into that new partition. In order to reboot a device into a new partition please use the following instructions:

To Reboot a Device Into a Different Boot Location
1. Open up a browser session to the Enterprise Manager and log in using the admin credentials.
2. Navigate to Enterprise Management and click Devices.
3. On the Device List screen, click on the device you want to reboot.
4. Hover over the Properties tab and select Boot Locations.
5. Select the new boot location by selecting the Inactive boot location. Once selected, click Reboot.
6. On the pop-up screen click OK.



Managing Licenses
One common task that can be quite time consuming is renewing or adding an initial license. Enterprise Manager can completely automate these tasks. The devices that are in need of a new license will automatically be displayed in the Device List and, using the License Device wizard, you can automatically license or relicense as many devices as you need. The License Device wizard will let you select the devices that you want to update/add the license for, view and accept the End User License Agreement (EULA) and create a task that will update/add the license for the devices that you have selected. Once the task is run, the entire process is automated by the Enterprise Manager. In order to create a device licensing task, please use the following instructions:

Starting a Device Licensing Task
1. Open up a browser session to the Enterprise Manager and log in using the admin credentials.
2. Navigate to Enterprise Management and click Tasks.
3. Once the Task List screen opens, click New Task.
4. In the Devices area, select License Device and click Next.
5. In order to narrow down each device, apply a Device Filter.
6. Check the box next to each device that you would like to license and click Next.



7. In the following steps, the Enterprise Manager will retrieve the License Information from the F5 server. This includes the End User License Agreement (EULA). If the devices are ready to be licensed, the Contact to License Server will state Success.
8. Alternatively, you might have to accept the EULA before licensing the devices. If the EULA is required, then a Review EULA screen opens. If accepting the EULA is not required, then skip to the Configuring Task Options and Running the Task instruction.

Accepting the EULA for Devices
When the Review EULA screen appears it will present you with all available license agreements and you are able to switch between them (if there is more than one). In order to accept the EULA for the devices, please use the following instructions:

1. To accept the EULA for all devices listed in the Applies to Device(s) box, check the box next to Accept all EULAs and continue with the device licensing.
2. Repeat the process if you have more EULAs to accept.
3. Once you are done, click Next to move to the Task Options screen.

Accepting the EULA is generally only done when first licensing the device. However, if there has been a change to the EULA, you might be required to accept it again even if you are only performing a relicensing of the device.

Configuring Task Options and Running the Task
1. On the Task Options screen, you will be presented with different options for how the task should be performed. The options you choose are entirely up to you as an administrator, but F5 recommends that the device is rebooted after the license has been activated/reactivated.
2. Once you have selected each option, click Next.
3. On the Task Review page, you will be presented with the devices that this task will be run on and the Task Options.
4. When you are ready to start the task, click Start Task.
5. When the task is done, you will be presented with the Task Summary area which will display the results.

Collecting Information for F5 Support
When opening a support case with F5 support you are typically required to collect basic system and configuration information along with logs and other useful information. Using the Enterprise Manager, you are able to collect this information centrally rather than logging in to each device individually, which is a real time saver. In order to gather this data, we start the Support Information wizard. To automatically upload the support information to an active case at F5 support you need to have already opened a case so that you can specify the case number. Therefore, before continuing with the following steps you will need to open a case first.



Starting a Support Information Gathering Task
1. Open up a browser session to the Enterprise Manager and log in using the admin credentials.
2. Navigate to Enterprise Management and click Tasks.
3. Once the Task List screen opens, click New Task.
4. In the Support area, select the Gather Support Information option and click Next.
5. Specify the Case Number.
6. If you have any additional information that can be beneficial for the support engineers, add it to the Additional Information section.
7. Click Next to move to the next page.
8. Under the Device Data section, in the upper right corner click Add. This will open the Add Devices screen.
9. Use the filters to locate the device you wish to gather data from.
10. Select it by clicking the checkbox next to the device.
11. Once you have selected all devices, retrieve the data by clicking Retrieve Device Data.
12. When the task is complete, click Finished.
13. Under the File Attachments section, you are able to attach additional files that might be beneficial for the F5 support engineers. To add attachments, click Attach.
14. Click Browse and select the file you wish to add.
15. When done, click Import.
16. When you have gathered Device Data and added File Attachments, click Next to continue to the next page.
17. On the Upload Configuration / Review page you are able to select where the information should be uploaded. In the Destination area, select one of the following options:
   a. F5 Support Site (secure) – This will upload the information to F5 Support using SFTP and link it to the case number.
   b. F5 Support Site – This will upload the information to F5 Support using FTP and link it to the case number.
   c. Custom Location – This allows you to upload the information to a custom FTP server and will prompt you for additional information.
   d. Local Download – This saves the gathered information in a compressed file that you can download to your local client. No additional information is required.
18. If you select F5 Support Site, ensure that the email address specified matches the email address that was used to open the support case.
19. Once you have selected the upload method and added the additional information, click Next to move to the next page.
20. On the next page the task will automatically start.
21. When the task is complete, click Finished.
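For a single BIG-IP, the data this wizard gathers corresponds roughly to the qkview diagnostic bundle you would otherwise create manually on the device. A minimal sketch (the file name and destination are only examples):

# Generate a qkview diagnostic bundle on the BIG-IP itself.
qkview -f /var/tmp/bigip1-C1234567.qkview

# Copy it off-box so it can be attached to the support case,
# for example via iHealth or the case file upload.
scp /var/tmp/bigip1-C1234567.qkview user@workstation.example.com:/tmp/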



Managing UCS Archives
UCS archives are compressed files that contain all the configuration files required to restore a BIG-IP device to the state it was in when the UCS archive was created. Using the Enterprise Manager you can automatically gather UCS archives from all of your managed devices, along with some other features. These features include:
▪ Comparing multiple versions of UCS archives.
▪ Searching for specific configuration elements.
▪ Restoring UCS archives for managed devices.
▪ Modifying and deleting configuration archives.

All of these will be covered in the following sections.

Maintaining Rotating UCS Archives
The Enterprise Manager can be used to create and store UCS archives on demand but also at regular intervals. These are known as Rotating Archives and are used to ensure that there is always a backup available in case you need to revert a configuration change or replace a faulty device. Another advantage of Rotating Archives is that the Enterprise Manager can detect a configuration change and will automatically schedule the creation of a new UCS archive. This ensures that the most recent UCS archive contains the latest changes.

Enterprise Manager works like this: if you schedule the creation of a UCS archive on a daily basis, the Enterprise Manager will only create a UCS archive for each day on which the configuration has actually changed. This is done in order to eliminate the risk of saving duplicate archives that contain exactly the same configuration. Old UCS archives are kept, and as new ones are created the oldest archives are removed. By default, the Enterprise Manager stores up to 10 rotating archives and 10 saved (pinned) archives per device, so you will always have up to 10 rotating archives to choose from. Saved (pinned) archives will only be removed if the BIG-IP administrator manually deletes them. If you try to create more than 10 saved (pinned) archives the system will warn you, and you will have to delete at least one pinned archive in order to create a new one.
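Each rotating or pinned archive that the Enterprise Manager collects is an ordinary UCS file on the managed device. On a single BIG-IP you can create and list the same kind of archive manually; a small sketch (the archive name is just an example):

# Create a UCS archive on the BIG-IP; it is written to /var/local/ucs/.
tmsh save sys ucs before-change-2018-06-01

# List the UCS archives currently stored on the system.
tmsh list sys ucs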

Increasing the Maximum Rotating Archives
If there is a need to store more than 10 rotating archives per device, you can change this limit. Note, however, that a higher value consumes more disk space on the Enterprise Manager. In order to increase the Maximum Rotating Archives, please use the following instructions:

Changing the Default Archive Options
1. Open up a browser session to the Enterprise Manager and log in using the admin credentials.
2. Navigate to Enterprise Management and go to Options > Archives.
3. In the Maximum Rotating Archives option, change the value from 10 to the desired value.
4. Click Save Changes.



When you lower the Maximum Rotating Archives value, the Enterprise Manager automatically deletes the oldest UCS archives until the new limit is reached.

Creating Rotating Archive Schedules
1. Open up a browser session to the Enterprise Manager and log in using the admin credentials.
2. Navigate to Enterprise Management and go to Tasks > Schedules > Archive Collection.
3. In the upper right corner, click Create.
4. In the Check for Changes box, select how frequently you want the Enterprise Manager to check the managed device configuration for changes:
   a. When selecting Now, you do not have to add any additional information.
   b. When selecting Daily, add the time of day you want the archive collection to be run.
   c. When selecting Weekly, add the day of the week and the time that you want the archive collection to be run.
   d. When selecting Monthly, add the day of the month and the time you want the archive collection to be run.
5. In the Archive File Name box, specify the name of the scheduled archive collection. This will appear in the Scheduled Archive Collection list.
6. In the Private Keys list, select whether you want to include or exclude the private keys in the UCS archive.
7. In the Devices box, move the devices on which you wish to run the scheduled archive collection from Available to Assigned.
8. If you have configured Device Lists you can add these as well by moving them from Available to Assigned.
9. When you are done, click Finished.



Modifying Rotating UCS Archive Schedules
If you ever need to modify an already created scheduled archive collection, you can easily do this by navigating to Enterprise Management > Tasks > Schedules > Archive Collection and clicking on the scheduled task. After that you can perform all the necessary changes and save them by clicking Save Changes.

Maintaining Specific Configuration Archives
Rotating Archives give you the benefit of always having an up-to-date archive available. However, there are times when you might want to save specific configuration archives. This is done by creating saved, or pinned, archives. Pinned UCS archives contain the same configuration files as rotating UCS archives but are instead pinned to the Enterprise Manager, meaning that they are saved until manually deleted.



A UCS archive becomes pinned either when you create a pinned UCS archive or when you enable the pinned flag on an already existing UCS archive. One scenario where this feature is useful is software upgrades and hotfix installations: having a pinned UCS archive available ensures that you have a saved configuration from a specific point in time.

Creating a New Pinned Archive
1. Open up a browser session to the Enterprise Manager and log in using the admin credentials.
2. Navigate to Enterprise Management and go to Devices > Device List.
3. Click on the device for which you wish to create a new pinned archive.
4. Click on Archives.
5. In the upper right corner, click Create.
6. In the File Name box, type the name of the UCS archive.
7. In the Private Key list, select whether you want to include or exclude the private keys in the UCS archive.
8. Click Create.
9. When the UCS archive is created, note that the Pinned option is set to Yes.

Pin an Already Existing Archive
1. Open up a browser session to the Enterprise Manager and log in using the admin credentials.
2. Navigate to Enterprise Management and go to Repository > Archive List.
3. Check the box next to the UCS archives you wish to pin.
4. Once you have selected all UCS archives you wish to pin, click Pin Archive.
5. On the pop-up screen, click Pin. Notice that the Pinned option has changed from No to Yes.

Restoring UCS Archives for Managed Devices
The Enterprise Manager can not only create UCS archives but restore them as well. This can save you time whenever you must restore the configuration of your managed devices, as you can do it without having to log into each device individually. When restoring a UCS archive to a managed device, the configuration contained in the UCS archive is restored on the managed device, meaning it overwrites the current configuration.

Performing a UCS Restoration for a Managed Device
1. Open up a browser session to the Enterprise Manager and log in using the admin credentials.
2. Navigate to Enterprise Management and go to Devices > Device List.
3. Click on the device to which you wish to restore a UCS archive.
4. Click on Archives.
5. Click on the name of the archive you wish to restore.
6. In order to restore the UCS archive, click Restore…
7. On the pop-up screen, click Restore.

Enterprise Manager can only restore a UCS archive to the device it was originally created on.
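For reference, the on-box equivalent of such a restore is loading the UCS archive with tmsh. A minimal sketch on the BIG-IP itself (the archive name is an example); the no-license variant, which the chapter review questions also touch on, keeps the currently installed license:

# Restore a UCS archive that already exists in /var/local/ucs/.
tmsh load sys ucs backup-before-upgrade.ucs

# Load the same archive but keep the license that is currently installed.
tmsh load sys ucs backup-before-upgrade.ucs no-license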

Deleting UCS Archives
There are times when you need to delete UCS archives; they might no longer be relevant, or you may need to free up some disk space. In order to delete UCS archives, please use the following instructions:
1. Open up a browser session to the Enterprise Manager and log in using the admin credentials.
2. Navigate to Enterprise Management and go to Repository > Archive List.
3. Check the box next to the UCS archives you wish to delete.
4. Once you have selected all UCS archives you wish to delete, click Delete.
5. On the pop-up screen, click Delete.

Comparing Multiple Versions of UCS Archives
When managing multiple versions of UCS archives, you will most likely encounter situations where you want to compare two UCS archives in order to find configuration changes. You might have an ongoing issue in your environment that could have been triggered by a configuration change, and being able to quickly find this change is crucial in order to mitigate the impact. Using the Compare Device Configurations wizard, the Enterprise Manager enables you to compare either the existing configuration with a UCS archive, or two stored UCS archives.

Creating an Archive Comparison Task
1. Open up a browser session to the Enterprise Manager and log in using the admin credentials.
2. Navigate to Enterprise Management and go to Tasks.
3. In the upper right corner, click New Task…
4. In the Configuration Archives area, check the Compare Archive option.
5. On the First Source Device & Configuration Selection page, from the Device list, select the device that you would like to use as the source device.
6. Select the source you wish to use for the comparison:
   a. To use the current configuration, check the box next to the Current Configuration option.
   b. To use a UCS archive, check the box next to the UCS archive you wish to use as the source archive.
7. When a source has been selected, click Next.
8. On the Second Source Device & Configuration Selection page, from the Device list, select the device that you would like to use as the second source device.
9. Select the UCS archive or the current configuration.
10. When the Second Source has been selected, click Next.



11. If there is no need to compare the private keys, uncheck the Include Private Keys option.
12. To start the task, click Start Task.
13. When the task is complete, the Task Summary will present a summary of the files that differ between the two sources. You can adjust the output by changing the Summary Filter. Under the Comparison column you can also view the differences by clicking View…
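If you ever need to compare two UCS archives without the wizard, remember that a UCS file is simply a compressed tar archive, so a rough manual comparison is possible; a sketch (the archive names are examples, and paths inside the archive can vary slightly between TMOS versions):

# Extract both archives and diff the main configuration file.
mkdir -p /tmp/ucs-a /tmp/ucs-b
tar -xzf monday-backup.ucs -C /tmp/ucs-a
tar -xzf tuesday-backup.ucs -C /tmp/ucs-b
diff -u /tmp/ucs-a/config/bigip.conf /tmp/ucs-b/config/bigip.conf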

Searching for Specific Configuration Elements
One major benefit of the Enterprise Manager is its ability to search for specific configuration elements. This is useful not only for troubleshooting but also when decommissioning applications in your environment. In order to search for specific configuration objects, please use the following instructions:
1. Open up a browser session to the Enterprise Manager and log in using the admin credentials.
2. Navigate to Enterprise Management > Configurations > Search Configurations.
3. In the Keyword box, type the word you want to search for in the configuration files.
4. Click Search.
5. You can filter the result by typing in the Matching Objects box and clicking Filter.
6. Click on an object in the Matching Objects list in order to view its configuration.
7. To clear the search, click Reset.
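On a single BIG-IP, the same kind of lookup is usually done with tmsh and grep; a minimal sketch (the keyword is just an example):

# Search the running LTM configuration for a keyword, e.g. a pool name.
tmsh list ltm | grep -i app1_pool

# Or search the saved configuration files directly.
grep -ri app1_pool /config/bigip.conf /config/partitions/ 2>/dev/null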

Managing Software Images
Upgrading and installing hotfixes on your BIG-IP systems is part of your role as a BIG-IP administrator and is necessary to minimise the number of bugs and known security vulnerabilities.



If you are managing a large environment of BIG-IP systems, these maintenance tasks can be very time consuming. The Enterprise Manager assists with this by acting as a centralised software management platform from which you can push out images and install them on your BIG-IP systems, saving valuable time. Distribution and installation of the images can be separated into different tasks, meaning that you can distribute all images before the maintenance window, which gives you more time to actually install and, if necessary, troubleshoot the installation.

Reviewing Available Software Downloads
Just like on a BIG-IP system, in order to add software images to the Enterprise Manager you will have to go to downloads.f5.com and download the images and hotfixes you would like to install.

Adding and Removing Software Images/Hotfixes on the Enterprise Manager
After you have downloaded the software images/hotfixes from downloads.f5.com, you will have to add them to the Enterprise Manager software repository.

Adding an Image/Hotfix to the Software Repository
When importing a software image/hotfix to the Enterprise Manager, you must leave the web browser on the Import screen until the file has fully transferred. If you navigate to a different page, the upload process will stop. If you need to navigate elsewhere while you upload, open a new browser session.
1. Open up a browser session to the Enterprise Manager and log in using the admin credentials.
2. Navigate to Enterprise Management > Repository > Software Image List or Hotfix Image List.
3. Click Import.
4. In the File Name box, click Browse.
5. Select the image/hotfix and click Open.
6. Click Import. A progress bar will appear. Do not navigate to another page or close your browser until it is complete.
7. You will see the image in the Software Image List once the transfer is complete.

Removing an Image/Hotfix from the Software Repository
When you remove an image/hotfix from the Enterprise Manager it is deleted from its database, so you will have to import the image/hotfix again if you wish to use it for future installations.
1. Open up a browser session to the Enterprise Manager and log in using the admin credentials.
2. Navigate to Enterprise Management > Repository > Software Image List or Hotfix Image List.
3. Click the checkbox next to the images/hotfixes you wish to remove.
4. When done, click Delete.
5. Confirm the deletion by clicking Delete once more.

Copying and Installing Software to Managed Devices
When copying and installing software images to managed devices, you can choose to deploy to a single unit or to an entire group of devices. There are presently two installation wizards used for copying and installing software on managed devices, and the one you use depends on the software version or the system you are installing the software on. The installation wizards are:
▪ Software Image Copy and Installation wizard – This wizard is generally used for BIG-IP systems running a TMOS version later than 10.x, or Enterprise Manager later than 2.x.
▪ Legacy Software Image Installation wizard – This wizard is used for BIG-IP systems running TMOS 9.x, as well as WANJet version 5.0, Secure Access Manager version 8.0 and Enterprise Manager 1.x.

In this book we’ll only cover the newer Software Image Copy and Installation wizard, as most companies are no longer using version 9.x.

Copying Software to Be Installed at a Later Date
Copying the installation software to a managed device can sometimes take a very long time. It is therefore unwise not to copy the installation software over before the actual maintenance window, as you would otherwise waste valuable maintenance time on a file transfer that could have been done beforehand. In order to copy installation software to a managed device, please use the following instructions:
1. Open up a browser session to the Enterprise Manager and log in using the admin credentials.
2. Navigate to Enterprise Management > Tasks.
3. Once the Task List screen opens, click New Task.
4. In the Software Installation area, select the Copy and Install Software and Hotfix Images option and click Next.
5. From the Software Image list, select the image you would like to copy.
6. From the Hotfix Image list, select the hotfix you would like to copy.
7. From the Task Type list, select Copy Image(s) Only.
8. From the Device List list, select the device(s) you wish to distribute the images to.
9. Adjust the Device Filter in order to find the managed devices you wish to copy the software to.
10. Click the checkbox next to the devices you wish to copy the software to.
11. When done, click Next.



12. On the Task Options page, from the Device Error Behavior list, select which action the Enterprise Manager should take in the event that the task fails on a managed device.
13. When done, click Next.
14. On the Task Review page, verify that the task details are correct.
15. When done, click Start Task.
16. The task will start, and when it is done you will be presented with the results.

Installing a Software Image
1. Open up a browser session to the Enterprise Manager and log in using the admin credentials.
2. Navigate to Enterprise Management > Tasks.
3. Once the Task List screen opens, click New Task.
4. In the Software Installation area, select the Copy and Install Software and Hotfix Images option and click Next.
5. From the Software Image list, select the image you would like to install.
6. From the Hotfix Image list, select the hotfix you would like to install.
7. From the Task Type list, select Install Image(s) Only.
8. From the Device List list, select the device(s) you wish to install the software on.
9. Adjust the Device Filter in order to find the managed devices you wish to install the software on.
10. Click the checkbox next to the devices you wish to install the software on.
11. When done, click Next.



12. On the Task Options page, from the Configuration list, select whether or not you want the configuration to be copied into the new partition.
13. From the Post-Install Run Location list, select whether or not you want the managed device to boot into the newly installed partition.
14. From the Configuration Archive list, select whether or not you want the UCS archive to include the private keys.
15. From the Device Error Behavior list, select which action the Enterprise Manager should take in the event that the task fails on a managed device.
16. When done, click Next.
17. On the Task Review page, verify that the task details are correct.
18. When done, click Start Task.
19. The task will start, and when it is done you will be presented with the results.

It is possible to perform both the copy and the install in a single task. To do this, just select the task type Copy and Install Image(s).
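For context, the per-device work these tasks automate maps to copying an ISO to the device and installing it into a boot volume. A minimal sketch on a single BIG-IP (host, image and volume names are examples):

# Ahead of the window: copy the image to the standard image directory.
scp BIGIP-12.1.2.iso admin@bigip1.example.com:/shared/images/

# During the window, on the BIG-IP: install into a spare volume,
# watch the progress, then boot into the new volume when ready.
tmsh install sys software image BIGIP-12.1.2.iso volume HD1.2
tmsh show sys software status
tmsh reboot volume HD1.2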

Monitoring and Alerts
As a BIG-IP administrator, your job does not only include performing certain tasks; you also need to make sure that the tasks complete successfully and that the managed systems are healthy. The Enterprise Manager assists you with both. Using the Task List you can view all tasks that are in progress and those that have finished. You also have the possibility to create alerts for when specific tasks fail or when a device status changes.



Managing the Task List
In order to keep track of all tasks, the Enterprise Manager stores every task, successful or failed, in the Task List. As a BIG-IP administrator you can monitor this list to get a good view of all your tasks. The Task List is located under Enterprise Management > Tasks > Task List. Here you can view all tasks performed, whether they encountered any errors, their current progress and when they were initiated.

All of these tasks remain in the list until you decide to delete them. However, even when a task is deleted from the Task List, the Enterprise Manager still keeps an audit record of when the task was initiated. To remove a task, simply click the checkbox next to the task and click Delete. You can view additional details of a task by clicking on it and then, under Task Summary, clicking Details.

Overview of Alerts
The Enterprise Manager can be configured to send out alerts when certain events happen. The alerts can be applied to individual devices or to an entire group of devices. It is also possible to create alerts for the Enterprise Manager itself. When an event is triggered, the Enterprise Manager can perform several actions, including sending an email, sending an SNMP trap and sending a syslog event to a remote server. The types of alerts that can be configured include:
▪ Statistical data thresholds exceeded – This is used for systems that support statistics collection. You create a statistical data threshold alert that triggers when the data remains out of range for a set number of seconds.
▪ Device status change – This alert triggers when a device enters a certain device status. These include Active Mode, Standby Mode, Offline Mode, Forced Offline Mode, Impaired and Unreachable for a set number of minutes. You can choose one or more status changes.
▪ Certificate expired or near-expiration – This alert lets you monitor all certificates on your managed devices. You can define how many days in advance the Enterprise Manager will send out an alert.



▪ Completed software, hotfix, or attack signature image installations – This alert will trigger once a software, hotfix or signature image installation has successfully completed.

▪ Failed software, hotfix, or attack signature image installations – This alert will trigger once a software, hotfix or signature image installation has failed.

▪ Clock skew between the Enterprise Manager and managed devices – It is very important that the Enterprise Manager and the managed devices have synchronised clocks. When a new device is added, the Enterprise Manager creates a certificate that it uses to authenticate itself to the managed device. If the managed device and the Enterprise Manager do not synchronise every 15 minutes, this certificate can become invalid, which can result in the Enterprise Manager losing its privileges on the device. This alert checks the system clocks every 10 minutes and triggers if the clocks are out of sync (a quick manual check is sketched after this list).

▪ Failed rotating archive creation – This alert triggers whenever a scheduled UCS archive creation fails.
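A quick way to eyeball clock skew between the Enterprise Manager and a managed device from the EM shell, assuming SSH access (the hostname is an example):

# Print both clocks in UTC, one after the other; any large difference
# indicates skew that NTP should be correcting.
date -u
ssh root@bigip1.example.com 'date -u'

# Verify that NTP peers are reachable and the local clock is being disciplined.
ntpq -pn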

Setting Alert Default Options
You will need to configure the Enterprise Manager in order for it to perform the alert actions defined in an alert. These settings are configured under Enterprise Management > Options > Alerts, where you can configure the email recipient and the syslog server address. In order to configure the Alert Default Options, please use the following instructions:
1. Open up a browser session to the Enterprise Manager and log in using the admin credentials.
2. Navigate to Enterprise Management > Options > Alerts.
3. In the Email Recipient box, type the email address you wish the Enterprise Manager to email when an alert is triggered.
4. From the Include EM Host Name in Email list, select whether you want the Enterprise Manager's hostname to be included in the email.
5. In the Syslog Server Address box, type the IP address of the syslog server you want the Enterprise Manager to send alerts to.
6. In the Maximum History Entries box, type the number of history entries that the Enterprise Manager should store in its Alert History list.
7. When done, click Save Changes.

If you need the Enterprise Manager to send to multiple email addresses, use an alias and then configure the multiple addresses on your email server.



Creating Alerts for Enterprise Manager
Once you have configured the Alert Default Options, you are ready to create some alerts. In order to maintain the good health of your Enterprise Manager, you also have the possibility to create system alerts. These alerts trigger whenever the Enterprise Manager's CPU, disk or memory usage exceeds a particular threshold. In order to create a system alert, please use the following instructions:
1. Open up a browser session to the Enterprise Manager and log in using the admin credentials.
2. Navigate to Enterprise Management > Alerts > EM Alerts.
3. For the Conditions setting, check the boxes for the conditions you want to track with alerts.
4. In the threshold boxes, type the thresholds that you wish the alerts to trigger on.
5. Under the EM Alert Actions area, select the actions you wish the Enterprise Manager to take when the alert is triggered.
6. Click Save Changes.

When using EM Alerts you can sometimes receive multiple emails, SNMP traps, syslog events or alert history entries, because CPU and memory usage may spike repeatedly.

Creating, Modifying, and Deleting Alerts for Devices
One of the most powerful benefits of the Enterprise Manager is its ability to monitor managed devices and trigger alerts when something is wrong. In order to create an alert for a managed device, please use the following instructions:

Creating a Device Alert
1. Open up a browser session to the Enterprise Manager and log in using the admin credentials.
2. Navigate to Enterprise Management > Alerts > Device Alert List.
3. Click Create.
4. In the Name box, type the name of the alert.
5. From the Alert Type list, select the alert you wish to create. Depending on the alert type, the page might change and ask for additional details.
6. For the Action setting, check the box next to the actions you would like the Enterprise Manager to take if the alert is triggered.
   a. When choosing email, you can select the default email recipient or use a new one.
   b. When choosing syslog, you can select the default syslog server address or use a new one.
7. Under the Alert Assignments area, select the specific devices or device list that you want the alert to be assigned to.
8. When done, click Finished.



Sometimes you will need to modify or delete an alert, perhaps to adjust the action list or to add or remove devices from the alert assignment. In order to modify or delete an alert, please use the following instructions:

Modifying a Device Alert
1. Open up a browser session to the Enterprise Manager and log in using the admin credentials.
2. Navigate to Enterprise Management > Alerts > Device Alert List.
3. Click on the name of the alert that you wish to modify.
4. Change any of the configuration details.
5. When done, click Save Changes.



Deleting a Device Alert
1. Open up a browser session to the Enterprise Manager and log in using the admin credentials.
2. Navigate to Enterprise Management > Alerts > Device Alert List.
3. Check the box next to the alert you wish to delete.
4. Click Delete.
5. On the Delete Confirm page, confirm the deletion by clicking Delete.

Monitoring Certificates
As we have covered previously in this book, the BIG-IP system is a great tool for centralised management of SSL certificates. On the BIG-IP system there are two types of certificates: traffic certificates and system certificates. Traffic certificates are server certificates that a managed device can use to establish an SSL connection with a client or server. System certificates are the certificates that enable the user to log into the BIG-IP system WebGUI and that are necessary for two BIG-IP systems to communicate with each other. It is therefore very important to keep the system certificates valid. The Enterprise Manager provides a summary of vital certificate information for each managed device that has certificate monitoring enabled. This is enabled by default, but if you want to disable it for a specific device or device list, please use the following instructions:

Disabling Certificate Monitoring
1. Open up a browser session to the Enterprise Manager and log in using the admin credentials.
2. Navigate to Enterprise Management > Options > Certificates > Monitored Certificates.
3. In the Devices or Device List box, click the devices/device lists in the Enabled box that you wish to disable and click the >> button to move them to the Disabled box.
4. When done, click Save Changes.

Enabling Certificate Monitoring
1. Open up a browser session to the Enterprise Manager and log in using the admin credentials.
2. Navigate to Enterprise Management > Options > Certificates > Monitored Certificates.
3. In the Devices or Device List box, click the devices/device lists in the Disabled box that you wish to enable and click the << button to move them to the Enabled box.
4. When done, click Save Changes.



Viewing Certificate Information
The certificate information list is divided into System Certificates and Traffic Certificates and contains the following information:
▪ The status of the certificate
▪ The name of the certificate
▪ The device on which the certificate is configured
▪ The common name of the certificate
▪ The organisation name
▪ The certificate expiration date/time

Accessing the Certificate Screen
1. Open up a browser session to the Enterprise Manager and log in using the admin credentials.
2. Navigate to Enterprise Management > Configurations > Monitored Certificates.
3. By default this opens the System Certificate List. In order to access the Traffic Certificate List, click on the Monitored Certificates bar and click Traffic Certificate List.



The Certificate Status Flag
There are three types of status flags, which are based on the certificate expiration date/time. They are presented in the following table:

Status Flag – Expiration Status
(Red) – This flag indicates that the certificate has expired. If this certificate is used for client traffic, then the user will receive a certificate warning.
(Yellow) – This flag indicates that the certificate expires in 30 days or less. It is still valid, but as a BIG-IP administrator you will need to renew the certificate before it expires.
(Green) – This flag indicates that the certificate is valid for at least 30 more days.

Creating a Device Certificate Alert
As we have previously mentioned, it is possible for the Enterprise Manager to log or send out alerts when a certificate is about to expire. In order to create a device certificate alert, please use the following instructions:
1. Open up a browser session to the Enterprise Manager and log in using the admin credentials.
2. Navigate to Enterprise Management > Alerts > Device Alert List.
3. Click Create.
4. In the Name box, type the name of the alert.
5. From the Alert Type list, select Certificate Expiration.
6. In the Conditions area, select the conditions on which you wish the alert to trigger. You can also enter a customised number of days.
7. In the Action area, check the boxes for the actions you wish the Enterprise Manager to take when the conditions are met.
8. When choosing email or syslog, you can either use the default or enter a new value.
9. In the Alert Assignments area, select the devices or device lists to which you wish to assign the alert.
10. When done, click Finished.

BIG-IQ
Like with all other products, development is essential in order to introduce new features and keep products up to date. BIG-IQ is the new centralised management product that is set to replace Enterprise Manager. Before we dive into the structure of BIG-IQ and its features, we would just like to highlight that this product is still under heavy development. When it was first released it was missing some of the features that Enterprise Manager offered, but with each new version, new features are added. As of this book's writing, the current release of BIG-IQ is 5.1 and the following text is based upon that version.



The BIG-IQ Panels
BIG-IQ offers more than just device, backup and licensing features. Its interface is built upon what are known as panels, and each panel corresponds to a BIG-IQ feature or, more specifically, a BIG-IP module. This means that you will be able to centrally manage not only your devices but also each module running on those devices. Depending on the number of panels and the resolution of your screen, some panels can be collapsed and presented as coloured bars on either side of the screen. In other words, you will be able to view and manage your entire F5 environment through what F5 refers to as a single pane of glass. In the following sections we'll discuss each panel and what it is able to do.

The BIG-IQ Device/System Management Panels
Simply put, the BIG-IQ Device/System Management panels correspond to the same features as the Enterprise Manager. These panels enable the BIG-IP administrator to discover, monitor, upgrade and back up up to 200 physical, virtual or vCMP-based BIG-IP devices. In summary, you will be able to perform the following tasks:
▪ Manage multiple devices, including creating inventory reports.
▪ Back up and restore the configuration of managed devices.
▪ Perform upgrades of managed devices.
▪ Monitor the managed devices, which includes SSL certificates and HA status.

In this chapter we'll only focus on the BIG-IQ Device/System Management panels, as these are the ones that match the features that Enterprise Manager provides.



The BIG-IQ Application Delivery Controller (ADC) Panel
The BIG-IQ ADC panel represents the BIG-IP LTM module and lets you centrally manage all of your LTM devices and their corresponding attributes. These include VIPs, pools, members, nodes and iRules. The ADC panel also lets you monitor your application traffic, so when you are experiencing issues you can quickly pinpoint which servers are affected and solve the problem. You can also evaluate statistics and generate reports. In summary, you will be able to perform the following tasks:
▪ Configure LTM devices.
▪ View and monitor LTM-specific objects.
▪ Manage pools and nodes.
▪ Monitor both physical and virtual LTM devices.
▪ Configure and validate large-scale systems.

The BIG-IQ Web Application Security Panel
Since security is very important, being able to quickly implement new security policies on multiple devices is key to preventing malicious users from attacking your environment. The BIG-IQ Web Application Security panel represents the BIG-IP ASM module and lets you both deploy and import BIG-IP ASM policies. Each configuration change is logged to the audit log, which is beneficial when multiple users are administrating the system. In summary, you will be able to perform the following tasks:
▪ Import BIG-IP ASM policies from files.
▪ Deploy policies to BIG-IP ASM devices.
▪ Export BIG-IP ASM policies to files in XML format.

The BIG-IQ Network Security Panel
The BIG-IQ Network Security panel represents the BIG-IP AFM module and enables the BIG-IP administrator to centrally manage firewall configuration, including discovering, importing, editing and deploying new changes. This simplifies logistics, as you do not need to log on to each BIG-IP AFM device to perform these changes; it can all be done from the same management interface. In summary, you will be able to perform the following tasks:
▪ Management of shared objects (address lists, port lists, rule lists, policies and schedules).
▪ L3/L4 firewall policy support, which includes staged and enforced policies.
▪ Firewall audit logging to track every firewall policy change and event.
▪ Role-based access control.
▪ Multi-user editing through a locking mechanism.
▪ Deploying configurations from snapshots and the ability to preview differences between snapshots.

The BIG-IQ Access Panel
Providing remote access for your employees has become standard for most businesses. However, depending on which department or country a user originates from, the policy might differ. The BIG-IQ Access panel represents the BIG-IP APM module and helps you centrally manage up to 100 BIG-IP APM instances, enabling you to import, compare, edit and update the policies of multiple APM devices.



The BIG-IQ Access panel provides you with a dashboard that gives you a holistic view of network health and helps you to detect trends. This makes it easier for you to gauge your current policies and detect weak points. In summary, you will be able to perform the following tasks:
▪ Push policy updates to BIG-IP APM devices from a central location.
▪ Generate extensive reports.
▪ Compare policies.
▪ Back up and restore images.
▪ Generate logs and reports for BIG-IP APM and Secure Web Gateway (SWG).

BIG-IQ Device and System Management
For the 201 exam you will need to understand how to manage your BIG-IP devices from a centralised management system. In order to make sure you have this knowledge, we'll cover both the Enterprise Manager and the BIG-IQ. Therefore, in the following sections we'll go through how to perform, on the BIG-IQ, the same functions that Enterprise Manager provides.

Installing Required BIG-IQ System Components – Updating the REST Framework
In order for the BIG-IQ to manage your BIG-IP devices, you will need to make sure that the managed devices have an up-to-date REST framework. The REST framework is a set of components used by the BIG-IQ system to communicate with and retrieve data from the managed devices. If a managed device does not have an up-to-date REST framework, device discovery will fail. The framework can be updated manually by using the following instructions.

When running the installation script, the Traffic Management Microkernel (TMM) on each BIG-IP device will restart. This will have an impact on client traffic passing through the BIG-IP device, so please make sure you do this during a planned service window.

1. Launch a terminal client such as PuTTY and SSH to the BIG-IQ on port 22.
2. Log in using the root credentials.
3. Establish an SSH trust between the BIG-IQ and the BIG-IP device by typing the following command:

ssh-copy-id root@[BIG-IP Management IP Address]

This step is optional. However, if you do not run this command you will be forced to enter the root credentials multiple times.

4. Navigate to the folder where the installation script resides by typing the following command:

cd /usr/lib/dco/packages/upd-adc

5. Run the installation script by typing the following command:

./update_bigip.sh -a admin -p [password] [BIG-IP Management IP Address]

Where [password] is the administrator password for the BIG-IP device.

6. When the installation script has successfully completed, revoke the SSH trust by typing the following command:

ssh-keygen -R [BIG-IP Management IP Address]

Device Discovery
As with Enterprise Manager, before you can start managing your devices you will need to add them to the BIG-IQ. In order to add the devices, please use the following instructions:

1. Open up a browser session to the BIG-IQ and log in using the admin credentials.
2. Click on the panel list and click on Device Management.
3. Make sure you are currently in the Inventory view.
4. Click on the BIG-IP Devices tab and then click Add Device.
5. In the IP Address box, enter the IP address of the BIG-IP device.
6. In the User Name box, enter the user name of an account with admin credentials.
7. In the Password box, enter the password of that user account.
8. In order to add the device to a DSC (Device Service Clustering) group, select one of the following:
   a. For an existing DSC group, select Use Existing and, from the list, select the DSC group that you would like to add the device to.
   b. For a new DSC group, select Create New and type a name for the DSC group.
9. When done, click Add.
10. You will be prompted with a pop-up screen stating that you can now manage this device. If you would like to manage the device configuration, you will need to select each licensed service configuration. Select each module you have provisioned on the BIG-IP device and click Discover.



11. When done, the new BIG-IP device will be added to the BIG-IP Devices list. You might be prompted to create a snapshot of the current configuration before importing the service configuration. Do this by clicking Complete import tasks.
12. Under the Device > Services tab, check the Create a snapshot of the current configuration before importing box for each service configuration you have selected and click Import.
13. Head back to the Devices list by clicking the arrow button.

License Management
License Management on the BIG-IQ is a bit different than on the Enterprise Manager. On the BIG-IQ, license management is used to distribute and manage licenses for BIG-IP VE editions, not to handle the licenses for appliances.

As mentioned earlier in this book, the BIG-IQ is still under heavy development and this might change in the future. Using the BIG-IQ, you can revoke a license from a BIG-IP VE device that is no longer needed and assign it to another. This keeps operating costs fixed and allows for some very flexible provisioning options. There are three types of licenses:
▪ Pool Licenses – These licenses are purchased once and you assign them to a number of concurrent BIG-IP VE devices defined by the license. These licenses do not expire.
▪ Utility Licenses – These licenses are purchased as you need them and billed at a specific interval (hourly, daily, monthly or yearly).
▪ Volume Licenses – These licenses are prepaid for a fixed number of concurrent devices for a specific period of time.



BIG-IP System Software Upgrades
Uploading Software Images
Like with the Enterprise Manager, before you are able to install software images or hotfixes on managed devices you will first have to download them from downloads.f5.com and upload them to the BIG-IQ. In order to upload a software image to the BIG-IQ, please use the following instructions:
1. Open up a browser session to the BIG-IQ and log in using the admin credentials.
2. Click on the panel list and click on Device Management.
3. Make sure you are currently in the Operations view.
4. In the navigation pane, go to Software Management > Software Images.
5. On the Software Images page, click Upload Image.
6. Click on Choose File…
7. Browse to the file and click Open.
8. When done, click Upload.
9. When uploading an image to the BIG-IQ you cannot navigate away from the page, so wait for the upload to finish.
10. When the upload is complete you should be presented with the Software Images list.

Performing a Managed Device Install
After you have added your software images to the BIG-IQ, you are ready to perform a Managed Device Install. To do this, please perform the following instructions:
1. Open up a browser session to the BIG-IQ and log in using the admin credentials.
2. Click on the panel list and click on Device Management.
3. Make sure you are currently in the Operations view.
4. In the navigation pane, go to Software Management > Software Installations.
5. On the Software Installations page, click on Managed Device Install.
6. In the Software Image list, select the image you would like to install.
7. In the Name box, enter a name for the software installation in order to identify it.
8. In the Options section you can select the following options:
   a. If you want to just copy the image and install it at a later time, click the Pause after the software image has been copied to devices box.
   b. If you want to copy and install the image but reboot at a different time, click the Pause for reboot confirmation box.
9. Click on the Add/Remove Devices button to add your managed devices. Move the managed devices from the Available pane to the Selected pane. Then click Apply.
10. In order to select the volume on which BIG-IQ should install the software image, choose between the following options:
   a. Select an existing volume – Click on the Select… box and, from the list, select the volume on which you wish to install the software image.
   b. New Volume – In order to install the image on a new volume, click on the New Volume box and type the name of the new volume.
11. When done, you can start the process by clicking Run or save the task by clicking Save. When saving a task you will be able to run it at a later time.



12. When you click Run, the status of the installation is presented in the Device Status pane.
13. When the task enters a paused state, resume it by clicking Continue. To cancel the task, click Request Cancellation.



Rebooting Managed Devices
If you need to manually reboot a managed device into a different partition, please use the following instructions:
1. Open up a browser session to the BIG-IQ and log in using the admin credentials.
2. Click on the panel list and click on Device Management.
3. Make sure you are currently in the Inventory view.
4. In the navigation pane, go to BIG-IP Devices.
5. On the BIG-IP Devices page, click on the device you wish to reboot.
6. Under the Properties of the device, in the Boot Location list, select the partition you would like to boot into and click Reboot.

UCS File Backup and Restoration
Like the Enterprise Manager, the BIG-IQ can create UCS archives at regular intervals or instantly. As previously mentioned, backing up your BIG-IP devices is very important so that you can restore managed devices that have been replaced through an RMA, or restore the configuration to a previous state.

Creating an Instant Backup
In order to create an instant backup of one or more managed devices, please use the following instructions:
1. Open up a browser session to the BIG-IQ and log in using the admin credentials.
2. Click on the panel list and click on Device Management.
3. Make sure you are currently in the Operations view.
4. In the navigation pane, go to Back Up & Restore > Backup Schedules.
5. Click on the Back Up Now button.
6. In the Name box, type a name to identify this backup job.
7. In the Description box, type an optional description of the backup job.
8. In the Private Keys box, select whether you want to include the private keys in the UCS archive by clicking the box.
9. In the Encryption box, select whether you want to encrypt the UCS archive by clicking the box. If you select it, you will be asked to provide a password.
10. Under Local Retention Policy, select how long you want the BIG-IQ to store the UCS archives. You have the following options:
   a. Never Delete – This will save the UCS archive indefinitely. This is the same as pinned archives on the Enterprise Manager.
   b. Delete local backup copy … day after creation – Here you can specify how many days after creation the BIG-IQ should store the UCS archive.
11. Under the Devices area, select the group of devices or the individual devices that you want to perform the backup on. Move the selected devices from the Available pane to the Selected pane.
12. When done, click Start.
13. View your backup under Back Up & Restore > Backup Files.

Creating Scheduled Backups
In order to create scheduled backups, please use the following instructions:
1. Open up a browser session to the BIG-IQ and log in using the admin credentials.
2. Click on the panel list and click on Device Management.
3. Make sure you are currently in the Operations view.
4. In the navigation pane, go to Back Up & Restore > Backup Schedules.
5. Click on the Schedule Backup button.
6. In the Name box, type a name to identify this backup job.
7. In the Description box, type an optional description of the backup job.
8. In the Private Keys box, select whether you want to include the private keys in the UCS archive by clicking the box.
9. In the Encryption box, select whether you want to encrypt the UCS archive by clicking the box. If you select it, you will be asked to provide a password.
10. Under Local Retention Policy, select how long you want the BIG-IQ to store the UCS archives. You have the following options:
   a. Never Delete – This will save the UCS archive indefinitely. This is the same as pinned archives on the Enterprise Manager.
   b. Delete local backup copy … day after creation – Here you can specify how many days after creation the BIG-IQ should store the UCS archive.
11. In the Backup Frequency list, select how often you want to perform the backup. You have the following options:
   a. Daily – Select a start date and time for when you want to run the backup job. The default end date is set to No End Date, but this can be changed to a specific date.
   b. Weekly – Select which days of the week the backup should be run. You also need to select the start and end date, along with the specific time the backup should be run.
   c. Monthly – Specify the day of the month the backup should be run. This backup frequency also requires you to specify a date and time for when the backup job should be run.
12. Under the Devices area, select the group of devices or the individual devices that you want to perform the backup on. Move the selected devices from the Available pane to the Selected pane.
13. When done, click Save.



Restoring a UCS File Backup
Sometimes you need to restore a UCS archive. In order to do this, please use the following instructions:
1. Open up a browser session to the BIG-IQ and log in using the admin credentials.
2. Click on the panel list and click on Device Management.
3. Make sure you are currently in the Operations view.
4. In the navigation pane, go to Back Up & Restore > Backup Files.
5. Check the checkbox of the UCS archive you wish to restore.
6. When you have selected all of the UCS archives you wish to restore, click Restore.
7. You will be prompted with a pop-up window warning you that the restore will overwrite the configuration of the device. Confirm that you understand the consequences by clicking Restore.



Monitoring and Alerts
On the BIG-IQ, the Alerts section is located under the System Management panel rather than the Device Management panel. All of the alerts are collected in a list and, using the Edit button, you can enable or disable an alert and also adjust the threshold for specific checks. When migrating from Enterprise Manager you will discover that you are a bit limited, since you cannot configure which events should send email or SNMP traps as you can with the Enterprise Manager. Again, the BIG-IQ is under heavy development and this chapter is based on version 5.1, which is the most up-to-date version as of this book's writing. F5 is constantly improving BIG-IQ and this feature might be included in future updates.

Configuring BIG-IQ to Work With SNMP
Before the BIG-IQ can handle SNMP alerts, you will need to add the following configuration objects:
▪ Set up an SNMP Agent
▪ Configure SNMP Access
▪ Specify settings for the SNMP Trap



Configuring SNMP Agent for Sending Alerts
In order for your SNMP managers to collect data from the BIG-IQ, you will need to set up an SNMP agent. In order to do this, please use the following instructions:
1. Open up a browser session to the BIG-IQ and log in using the admin credentials.
2. Click on the panel list and click on System Management.
3. Make sure you are currently in the Inventory view.
4. Expand the SNMP Configuration section and click on SNMP Agent.
5. In the top right corner, click Download MIB in order to download the MIB package that can be imported into your SNMP manager (IT surveillance system).
6. Click Edit in order to modify the Contact Information and Machine Location.
7. In the Contact Information box, type the contact information for the person or team that is responsible for the SNMP administration.
8. In the Machine Location box, type the location of the BIG-IQ so that the SNMP administration team knows from which location the BIG-IQ is reporting.
9. When done, click Save to save the configuration.
10. For the SNMP Access – Client Allowed List, click Add.
11. In the Addresses/Networks and Mask boxes, type the IP addresses or networks, and the corresponding netmasks, of the SNMP managers that are allowed access.
12. To add another one, simply click the plus sign ( + ) and type the additional information.
13. When done, click Save to save the configuration.

Now that you have configured an SNMP agent, you can go ahead and create the SNMP access records and SNMP traps. In order to do this, please use the following instructions:

Configuring SNMP Access for Version 1 and 2C
1. Open up a browser session to the BIG-IQ and log in using the admin credentials.
2. Click on the panel list and click on System Management.
3. Make sure you are currently in the Inventory view.
4. Expand the SNMP Configuration section and click on SNMP Access (v1, v2C).
5. Click on the Add button.
6. In the Name box, type the name of this SNMP access record.
7. From the Type list, select the format for the IP address (IPv4 or IPv6).
8. In the Community box, type the community string (password) for access to the MIB.
9. From the Source list, select All, or select Specify and type the source address that has access to the MIB.
10. In the OID box, type the object identifier (OID) that you want to associate with the user.
11. From the Access list, select one of the following options:
   a. Read Only – This user can only view the MIB.
   b. Read/Write – This user can view and modify the MIB.
12. When done, click Save to save the configuration.

Configuring SNMP Access for Version 3
1. Open up a browser session to the BIG-IQ and log in using the admin credentials.
2. Click on the panel list and click on System Management.
3. Make sure you are currently in the Inventory view.
4. Expand the SNMP Configuration section and click on SNMP Access (v3).
5. Click on the Add button.
6. In the Name box, type the name of this SNMP access record.
7. In the User Name box, type the user name of the SNMP manager.
8. From the Type list, select which authentication protocol you would like to use. You can select one of the following options:
   a. MD5
   b. SHA
   c. None
9. In the Password and Confirm Password boxes, type the password that is used to access the MIB.
10. If you want to encrypt the SNMP traffic, select an encryption standard from the Protocol list. You can select one of the following options:
   a. AES
   b. DES
   c. None
11. In the OID box, type the object identifier (OID) that you want to associate with the user.
12. From the Access list, select one of the following options:
   a. Read Only – This user can only view the MIB.
   b. Read/Write – This user can view and modify the MIB.
13. When done, click Save to save the configuration.

Configuring SNMP Traps
1. Open up a browser session to the BIG-IQ and log in using the admin credentials.
2. Click on the panel list and click on System Management.
3. Make sure you are currently in the Inventory view.
4. Expand the SNMP Configuration section and click on SNMP Traps.
5. Click on the Add button.
6. In the Name box, enter the name of the SNMP trap configuration.
7. From the Version list, select the version of SNMP that you would like to use. You can select one of the following options:
   a. V1
   b. V2C
   c. V3 – This option will prompt for additional information.
8. In the Community box, type the community string (password) for access to the MIB.
9. In the Destination box, type the IP address of the SNMP trap destination.
10. In the Port box, type the port of the SNMP trap destination.
11. When done, click Save.
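Once the agent, access record and trap destination are in place, you can sanity-check the SNMP configuration from the management station with a standard Net-SNMP query; a small sketch (the community string and address are examples):

# Walk the system subtree of the BIG-IQ to confirm v2c access works.
snmpwalk -v 2c -c public 192.0.2.50 system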

SSL Certificate Monitoring
As with the Enterprise Manager, the BIG-IQ can also be used to monitor SSL certificates. This is enabled by default, and you can adjust the threshold of the alert by going to System Management > Inventory > Alerts > Edit. The default value is 30 days. Whenever a certificate has passed the 30-day threshold it is shown with a yellow status icon, and when it has expired it is shown with a red status icon. In order to view certificate status, go to Device Management > Operations > Certificate Management.
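If you want to spot-check a single certificate's expiry outside of the BIG-IQ, openssl can do it either against a certificate file on the BIG-IP or against a live virtual server; a small sketch (host names and file paths are examples):

# Check the expiry date of a certificate file stored on a BIG-IP.
openssl x509 -noout -enddate -in /config/ssl/ssl.crt/default.crt

# Check the certificate actually being served by a virtual server.
echo | openssl s_client -connect vip.example.com:443 2>/dev/null | openssl x509 -noout -enddate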



Status Icon – Expiration Status

(Red) – This icon indicates that the certificate has expired. If this certificate is used for client traffic, the user will receive a certificate warning.

(Yellow) – This icon indicates that the certificate expires in 30 days or less. It is still valid, but as a BIG-IP administrator you will need to renew the certificate before it expires.

(Green) – This icon indicates that the certificate is valid for at least 30 more days.

Chapter Summary

▪ The SCF file is a flat text file that contains the output of all the different tmsh commands used on the BIG-IP system, with all of their values and attributes.

▪ A User Configuration Set (UCS) archive saves all BIG-IP configuration files within one single zipped tar file. UCS files are saved to the directory /var/local/ucs using the extension .ucs (see the example commands after this summary).

▪ Since TMOS v12.x (and also v11.5.x), F5 has changed its software release plan into what it calls the Tick Tock Release Cycle.

▪ F5 distributes its software images as ISO archive files with the extension *.iso. By default, software images should be stored in a specific location on the hard drive: /shared/images.

▪ The Enterprise Manager is a product that assists the BIG-IP administrator with handling administrative tasks across multiple F5 devices.

▪ BIG-IQ is the newer centralised management product that is set to replace Enterprise Manager.
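As a quick reference for the two archive types above, the tmsh commands below show how they are typically created; the file names are examples only.

# Save a full UCS archive (stored in /var/local/ucs by default)
tmsh save /sys ucs backup_before_upgrade.ucs
# Save the running configuration to a Single Config File (SCF)
tmsh save /sys config file backup_before_upgrade.scf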



Chapter Review

1. What is one of the main purposes of using an SCF file?
a. Replicate the configuration across multiple BIG-IP devices
b. Restoration of a replaced BIG-IP system
c. Scheduled backups
d. Synchronise the configuration between multiple BIG-IP devices

2. What will happen if you restore a BIG-IP system using a UCS archive from a different device?
a. The certificate key pairs will not work
b. The configuration will fail if you have not provisioned the system before adding the UCS archive
c. The license will fail
d. You will need to validate the configuration before installation

3. What command can be used in order to load a UCS archive without the license?
a. tmsh load /sys ucs post-install
b. tar -xvf /var/local/ucs/[ucs file]
c. tmsh run /sys ucs no-license
d. tmsh load /sys ucs no-license

4. Is it possible to install a new TMOS version on the currently running partition?
a. Yes
b. No

5. What are you sometimes required to do prior to an upgrade?
a. Create a UCS archive
b. Re-activate the license
c. Copy the configuration to the new partition
d. Re-build the Device Trust



Chapter Review: Answers

1. What is one of the main purposes of using an SCF file?
a. Replicate the configuration across multiple BIG-IP devices
b. Restoration of a replaced BIG-IP system
c. Scheduled backups
d. Synchronise the configuration between multiple BIG-IP devices

The correct answer is: a

The two main purposes of the Single Config File (SCF) are to replicate the configuration across multiple BIG-IP devices or to migrate the configuration from one device to another.

2. What will happen if you restore a BIG-IP system using a UCS archive from a different device?
a. The certificate key pairs will not work
b. The configuration will fail if you have not provisioned the system before adding the UCS archive
c. The license will fail
d. You will need to validate the configuration before installation

The correct answer is: c

When restoring a UCS archive on a BIG-IP system, it is very important to consider how you handle the licensing. Since the UCS archive also contains the license file, restoring the archive will by default also restore that license file. The BIG-IP license is associated with the specific hardware on which the dossier was generated. This means that when the UCS archive is restored on a system with another serial number, the BIG-IP license will not match, causing problems.

3. What command can be used in order to load a UCS archive without the license?
a. tmsh load /sys ucs post-install
b. tar -xvf /var/local/ucs/[ucs file]
c. tmsh run /sys ucs no-license
d. tmsh load /sys ucs no-license

The correct answer is: d

Install a UCS archive without the license file using the tmsh command tmsh load /sys ucs no-license. This command is very useful during an RMA process: you restore the full configuration but not the license file.
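As a minimal sketch of that RMA-style restore (the archive name below is an example; in practice the command also takes the archive file name):

# Restore the configuration from a UCS archive without overwriting the local license
tmsh load /sys ucs /var/local/ucs/old_unit_backup.ucs no-license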



4. Is it possible to install a new TMOS version on the currently running partition?
a. Yes
b. No

The correct answer is: b

Whenever you install a new software image or hotfix, you need to do this from the active boot location and specify a non-active boot location as the target. This means that in order to install new software images on a BIG-IP system you will need at least two boot locations.

5. What are you sometimes required to do prior to an upgrade?
a. Create a UCS archive
b. Re-activate the license
c. Copy the configuration to the new partition
d. Re-build the Device Trust

The correct answer is: b

In the file /config/bigip.license there is a line called the Service Check Date. This date corresponds to when you last licensed your BIG-IP system or when the service contract for the device expires. Whenever a BIG-IP device boots into a specific version, the Service Check Date is compared to the License Check Date for that particular BIG-IP version. If the Service Check Date is older than the License Check Date, the system initialises but the configuration is not loaded. To load the configuration, the Service Check Date needs to be updated, which is done by re-activating the license. Doing this prior to an upgrade will save you the headache of this problem and make sure everything goes smoothly.
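A quick way to check the current Service Check Date before an upgrade is to read it straight from the license file; the exact wording of the line may vary slightly between versions.

# Display the service check date from the license file
grep -i "service check date" /config/bigip.license
# The licensing details can also be viewed with tmsh
tmsh show /sys license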



Index

A Acceleration Policy Editor, 433 Access Control List (ACL), 528 Access Policy Manager (APM) Module, 61 active boot location, 671 Active Monitoring, 202 Active-Active Redundancy, 374 Activity LED Indicator, 485 Address Check Monitor, 205 Administrative Partitions, 428 Administrator, 433 Advanced Firewall Manager (AFM) Module, 62 Advanced Shell, 415 Affinity, 260 Alarm LED Indicator, 486 alert.conf, 486 Alias Address, 209 Always On Management (AOM), 45, 47, 455 Always Send Cookie, 266 Amazon Web Services (AWS), 59 Analytics, 645 Analytics Profile, 241 Ansible, 72 AOM Command Menu, 457 Appliances, 49 Application (Services) Profiles, 240 Application Acceleration Manager (AAM) Core Module, 62 Application Acceleration Manager (AAM) Full Module, 63 Application Check Monitors, 206 Application Editor, 433 Application Security Editor, 433 Application Security Manager (ASM) Module, 63 Application Services Proxy (ASP), 72 Application Visibility and Reporting (AVR), 64, 645 Archive Files, 658 AskF5, 81 asymmetric routing, 332 Auditor, 434 auto-complete, 419 Auto-Failback, 373 Automatic License Activation, 113 awk, 439


B base registration key, 113 Baseboard Management Controller (BMC), 44 baud rate, 110, 455 bi-directional, 322 BIG-IP Hardware Platforms, 48 BIG-IP Virtual Edition (VE), 59 bigip.conf, 423, 425 bigip_base.conf, 423, 426 bigip_gtm.conf, 426 bigip_user.conf, 426 BIG-IQ, 708 BIG-IQ Centralised Management Product, 65 BIG-IQ Cloud & Orchestration Product, 66 bigstart restart, 436 Bigsuds, 72 binary configuration, 423 BIND, 536

C Carrier Grade NAT (CGNAT) Module, 66 Centralised Management Infrastructure (CMI), 361 Certificate Authority (CA), 295, 307 Certificate Manager, 433 Certificate Signing Authority (CSA), 355 Certificate Signing Request (CSR), 295 Change (Port Translation), 345 Client SSL Profile, 303 Cloud - AWS, 73 Cloud - Azure, 73 Cloud - GCP, 73 Cloud Foundry, 72 cluster, 372 Clustered Multi-Processing(CMP), 46 CodeShare, 465 Command Completion Feature, 418


Command History Feature, 421 command line interface (CLI), 414 commit ID, 361, 362 Common Partition, 430 components, 416 Connection Mirroring, 385 Connection Reaping, 437 Connection Table, 17, 437 Containers, 72 Content Check Monitors, 207 Context-Sensitive Help, 419 Control Plane Functions, 549 Cookie Hash, 269 Cookie Insert, 264 Cookie Passive Method, 268 Cookie Persistence, 264 Cookie Rewrite Method, 266 Core Files, 627 cpcfg, 680 cs-client-addr port, 438 cs-server-addr port, 438 Cumulative Hotfixes, 666 cURL, 520

D Dashboard, 636 Data Groups Lists, 462 DDoS Hybrid Defender (Herculon), 71 Default Monitor, 198 Destination Address Persistence, 271 DevCentral, 82 Device Group Communication, 383 Device Groups, 358 Device Identity, 357 Device Service Clustering (DSC), 356 Device Trust, 355 Device Utilisation Score, 366 DHCP Relay Virtual Server, 150 dig, 536 Disabled (pool member/node), 561


Discovery Process, 683 Disk Management Process, 669 DNS (formerly Global Traffic Manager (GTM)) Module, 67 Dynamic Load-Balancing, 163 Dynamic Ratio, 171

E Edge Gateway Product, 67 Egress Drops, 511 End of Software Development (EoSD), 668 End of Technical Support (EoTS), 668 End User Diagnostics (EUD), 47, 481 Enterprise Manager, 67, 683 entry ID, 421 Escalation Methods, 630 Event Declarations, 459 Exam Blueprints, 83

F F5 Professional Certification Program (F5-PCP), 78 F5 Silverline DDoS Protection, 70 F5 Silverline Web Application Firewall, 70 F5 University, 83 F5 Wireshark Plugin, 603 Facilities, 492 Failover Options, 380 Failure Interval, 203 Failures, 203 FallBack Host, 177 Fast HTTP Profile, 143 Fastest, 164 FastL4, 141 Firewall Manager, 433 floating Self-IP address, 359 Flow Control, 502 Force to Standby Mode, 363 Forced Offline (pool member/node), 562 forceload, 427 Forwarding IP Virtual Server, 147 Forwarding Layer 2 Virtual Server, 148 Fraud Protection Manager, 433


Full Application Proxy, 73 Full Restore, 663

G Gateway Failsafe, 383 Gateway ICMP Monitor, 205 Gateway Pool, 383 Guest, 434

H HA Capacity, 366 HA Groups, 372 HA Load Factor, 366 HA Order, 371 HA Table, 380 Hard Disk and Boot Locations, 669 Hardware Failover, 383 Hash Persistence, 271 Health Monitors, 195 Herculon, 59 Hide-NAT, 322 Host Management Subsystem (HMS), 45, 47 Host Monitoring, 73 httpd, 521

K Key Based Authentication, 452 Kubernetes, 72

I iApps, 465 iControl REST Software Development Kit (F5-SDK), 72 Ingress Drops, 510 Initial Setup, 108 Intelligent Platform Management Interface (IPMI), 45 Interface Naming Convention, 500 intermediate CA, 308 Internal Virtual Server, 151 Internet Content Adaption Protocol (ICAP), 151 Interpreting Log Files, 638 Intervals & Timeouts, 195 IP Intelligence Service, 68 iQuery, 384 iQuery Ports, 526


iRule Components, 458 iRule Editor, 464 iRule Events, 460 iRule Manager, 433 iRule Wiki, 465 iRules, 457

L LCD panel, 109, 484 lcdwarn, 488 Least Connections, 163 Least Sessions, 165 Legacy Version Numbering Schema, 666 Link Aggregation Control Protocol (LACP), 507 Link Controller Product (& Module), 69 Link Layer Discovery Protocol (LLDP), 499 Listener Processing Order, 550 Load Aware Failover, 366 Load Balancing, 73 Local Traffic Objects Dependencies, 137 Local Traffic Summary, 222 Log Files, 490 Long-Term Stability Release, 667

M MAC Masquerading, 360 Maintenance Operating System (MOS), 45, 47 Maintenance Software Versions, 666 Major Software Versions, 666 Management Interface, 499 management port, 108 Manager, 433 Managing Pool Members, 557 Managing Virtual Servers, 552 Manual License Activation, 114 Manual Resume, 212


Marathon, 72 Master Control Program (MCP), 361 Match Across Pools, 280 Match Across Services, 278 Match Across Virtual Servers, 280 mcpd, 361 mcpd.bin, 423, 426 mcpd.info, 423, 426 MD5 Checksum, 30, 673 Member Specific Monitor, 201 Member vs. Node, 158 Message Routing Virtual Server, 153 Minor Software Versions, 666 Mirroring Ports, 526 MobileSafe Product & Service, 69 modules, 416 Monitor Reverse Option, 212 Monitoring Certificates, 706 Monitoring Port Exhaustion, 347

N NAT Address, 325 Network Address Translation – NAT, 322 Network Address Translation and Port Address Translation (NAPT), 329 Network Components Hierarchy, 497 Network Failover, 384 Network Map, 223 Network Time Protocol (NTP), 542 No Access, 434 Node Default Monitor, 201 Node Specific Monitor, 201 Nodes, 132 non-active boot location, 671 non-floating Self-IP address, 359 nslookup, 534 ntpq, 544

O Object State, 216 Object Status, 215


Object Status Hierarchy, 217 Observed, 171 OneConnect, 76, 564 Opening a support case with F5 support, 620 OpenStack, 72 Operator, 434 Operators, 459 Origin Address, 325

P Packet Based Proxy, 75 Packet Captures, 572 Packet Filters, 528 Packet Processing Order, 549 Parent ID, 469 Partial Restore, 663 Partition Access, 429 Passive Monitoring, 202 Path Check Monitors, 208 Peer Authorities, 355 Performance Check Monitors, 208 Performance HTTP Virtual Server, 143 Performance Layer 4 Virtual Server, 141 Performance Monitors, 195 Performance Statistics, 27, 612, 614 Performing a Failover, 496 Persistence, 260 Persistence Mirroring, 385 Persistence Profiles, 240 Personal Information Exchange Syntax #12 (PKCS#12), 296 Ping, 518 Pinned UCS Archives, 694 Policy Enforcement Manager (PEM) Module, 69 Pool Members, 132 Pool Monitor, 201 Pools, 132 Port Exhaustion, 344 Port Lockdown, 523 Power LED Indicator, 485 Predictive, 171 Preserve (Port Translation), 345 Preserve Strict (Port Translation), 345


Priority Group Activation, 172 Profile Types, 239 Protocol Profiles, 240 Public Key Infrastructure (PKI), 308

Q QKview, 622 Qualitative observations, 627 Quantitative observations, 627

R Ratio, 160 Ratio Least Connections, 167 Ratio Sessions, 165 Receive String, 207 regular expression (regex), 419 Reject Virtual Server, 150 Release Notes, 668 Remote Authentication, 23, 540 Remote Server Authentication Profiles, 241 Request Adapt profile, 151 Resource Administrator, 433 Resource Provisioning, 115 Response Adapt profile, 151 Response Time, 203 REST Framework, 711 Retry Time, 203 Role Based Access Control (RBAC), 428 root CA, 308 Rotating Archives, 692 Round Robin, 160 Route Domain, 19, 468 Route Domain ID, 469 Rule Commands, 460 running configuration, 423

S SCP, 449 Secure Network Address Translation, 329


Secure Web Gateway (SWG) Module & Websense Cloudbased Service, 70 Send ICMP Error on Packet Reject, 531 Send String, 207 Server SSL Profile, 304 Service Check Date, 671, 674 Service Check Monitors, 210 Session Initiation Protocol (SIP), 153 Sessions, 259 Severity Levels, 621 SFTP, 449 Shutting Down and Restarting the BIG-IP System, 436 Silverline Cloud-based Service, 70 Simple Monitoring, 202 Single Config File (SCF), 658 Single Node Persistence, 272 Slow Ramp Time, 211 SNAT Auto Map, 342 SNAT listener object, 339 SNAT Mirroring, 385 SNAT Pool, 340 SNAT Translation List, 339 snmp_dca, 171 snmp_dca_base, 171 Socket Pairs, 346 Software Images, 670 Source Address Persistence, 260 Source Network Address Translation – SNAT, 329 Spanning Tree Protocol (STP), 508 ss-client-addr port, 438 SSL Bridging, 304 SSL Dump, 626 SSL Offloading, 301 SSL Orchestrator (Herculon), 71 SSL Passthrough, 306 SSL Profiles, 240 ss-server-addr port, 438 Standard Virtual Server, 139 Stateful Communication, 11, 259 Stateful Failover, 384 Stateless Virtual Server, 151


Static Load-Balancing, 159 Status LED Indicator, 485 Stickiness, 260 Strict Isolation, 470 Strict Updates, 467 Subordinate Non-Authority (SNA), 356 switch over daemon (sod), 374 Sync-Failover Device Group, 359 Sync-Only Device Group, 359 System Interfaces, 499

T Taking Exams, 81 tcpdump Expressions, 582 tcpdump Output, 593 Telnet, 519 Terminal Access, 415 text configuration, 423 The Lab Architecture, 86 Tick Tock Release Cycle, 667 TMM Packet Drops, 511 TMM Switch Interfaces, 499 tmos, 416 TMOS Planes, 48 tmsh Keyboard Map Feature, 421 tomcat, 521 Traceroute, 519 Tracert, 519 Traffic Groups, 364 Traffic Management Microkernel (TMM), 45, 46 Traffic Management Operating System (TMOS), 44


traffic management shell (tmsh), 414 traffic-group-1, 365 traffic-group-local-only, 365 Transparent Monitors, 208 trunk, 372, 506 TurboFlex™, 43

U UCS Archives, 626, 659 Unhandled Packet Action, 530 unidirectional, 329 Universal Persistence, 272 User Manager, 433 User Roles, 432 Using Advanced Shell (bash), 436

V, W VIPRION, 56 virtual MAC address, 360 Virtual Servers, 133 VLAN Failsafe, 381 VLAN Groups, 504 VLANs, 502 Web Application Security Administrator, 433 WebSafe Service & Module, 70 Weighted Least Connections, 170 What is BIG-IP?, 43 Why Become Certified?, 79 Why We Need SNAT, 329 Wireshark, 598

