Tech Spark Issue 1 - Cloud Computing in Finance


TECHNOLOGY SPARK, H1 2015

CLOUD COMPUTING IN FINANCE – THE AGE OF MATURITY

INSIDE THIS NEWSLETTER: CLOUD SPECIAL

Are the banks ready for Market Data in the Cloud?
Pricing as a Service in the Cloud: the models
Microsoft Azure for compute grids: a true competitor to Amazon AWS
What about software packages for Risk Management?
Beyond Cloud, how will the landscape continue to evolve?
What does one of our partners think about Cloud? A Time of Unprecedented Change in Data Management
Key trends from the 2014 High Performance Computing maturity benchmark
This quarter's new technology: Terracotta
Tech events Excelian has taken part in: ng-europe, the European AngularJS conference

EDITORIAL: THE START-UP MODEL

Excelian has its headquarters next to London's Old Street, dubbed the "Silicon Roundabout" due to the huge number of technology start-ups that have settled in EC1V in the last three years (over 15,720 in 2013 alone). This has enabled us to rub shoulders with this new generation of start-ups. It is interesting to note that they have to operate under very similar constraints to most of our clients within the Financial Services industry: low cost, short time to market and extensive competition. From a methodology and technology standpoint, however, start-ups have been very quick to embrace Agile development and Cloud-based infrastructures; this gives them the ability to scale quickly when required, continuously deliver to production, and customise features on their platforms based on factors like the time of day.

Could banks follow the same route? Financial Services clients contacted me within the same week to discuss Cloud-based projects; what's more, these projects were funded and had strong business sponsorship. In the past, many of our clients used to argue that Cloud adoption would remain low in Finance, stating that "the regulators would never allow it", "we have sensitive data", etc., but these arguments have gradually all been answered, and today there are multiple forward-thinking players who have started using Cloud-based solutions in Financial Services. We would like to share with you some of our experiences in this space and some crazy ideas (which are actually pretty sensible), and challenge the "normal" thinking to see how banks could leverage the "start-up way" of doing things.

We will be talking about Market Data in the Cloud, Risk Management in the Cloud and how the industry is moving in this space, and look forward to what is next in terms of Cloud. We will also share with you the results of a benchmark we've recently run around High Performance Computing, as well as some content from a conference we recently attended. A second edition will be with you next quarter, focusing on Big Data and NoSQL.

Andre Nedelcoux, Partner, Head of Technical Consulting



Are the banks ready for Market Data in the Cloud?

Almost exactly two years ago, Excelian was involved in a webinar discussing Reference Data in the Cloud. The general sentiment from the panel and the contributors was that, whilst it was an interesting concept and value could be seen in the proposition, the more general concerns around the use of a public Cloud - security, resilience and, ultimately, control and ownership - would prevent anything from happening. Compliance and Information Security functions, in particular, appeared to be the big blockers for moving forward with public Cloud and the storage of Market and Reference Data in the Cloud. A lot can happen in two years - especially in the Investment Banking world. So has that much changed in the thinking around Market and Reference Data in the Cloud?

Firstly, it is worth considering the perception shift that has happened in the use of public Cloud within the industry. As both the Cloud service offerings and the understanding of Cloud within certain parts of the IB organisations have matured, the general acceptance of Cloud and the exploration of suitable uses for it have grown and evolved, particularly around burst capabilities in the Risk Calculation space. With increased acceptance, organisations and providers are starting to explore the potential of storing and hosting such data management products in the Cloud.

Secondly, we can look at how the vendors are positioning themselves to meet the evolution of thinking around the Cloud. Two competitors in the Market and Reference Data world have approached the challenge from slightly different angles. Bloomberg PolarLake offer two solutions that, although not Cloud-based, do provide for the management and cleansing of off-premise data and will deliver either a single, consolidated view of multi-vendor data, fully cleansed by their teams, or a more 'raw' format for cleansing within an organisation.

Xenomorph has taken a different approach and, in partnership with Microsoft, now offer their TimeScape product in the Windows Azure Cloud. Data Management and Provisioning is all done off-premise, which will ultimately allow an organisation to run its Data Management function closer to the consuming systems that also run in the Cloud. Both organisations are leading the way as the Investment Banking sector begins to embrace the changes of both Cloud and Data as a Service. It is a logical next step for organisations making use of the PolarLake Fully Managed Service to take the cleansed data directly into a private or public Cloud, making it available to the rest of the organisation in this way. Similarly, with the Xenomorph solution, it would make sense to move all organisational data into a Cloud solution alongside the Market and Reference Data.

THE OBVIOUS BENEFIT OF HOLDING ALL YOUR DATA IN THE CLOUD IS THAT RISK CALCULATIONS MAKING USE OF THIS DATA CAN ALSO RUN IN CLOUD-BASED GRIDS, REDUCING THE AMOUNT OF NETWORK TRAFFIC AND CONSEQUENTLY INCREASING PERFORMANCE.

From a distribution perspective, it also becomes possible to distribute the data to multiple trading venues (London, New York, Tokyo) using Cloud-based replication mechanisms. And, by the way, why not mix "traditional" Market Data with social data available freely on the internet, to determine new trading strategies, for instance?

Finally, what are the banks going to do next? There is currently a consortium of Tier I banks looking collaboratively at how to approach the management of Market and Reference Data. With increasing demands for data and the realisation that the IP in the data is relatively low, the drive to reduce costs by implementing an outsourced model is becoming more urgent. Excelian's view is that the outsourcing of Data Management (and consequently the data) is imminent and that Cloud solutions will play a big part in how such data is presented to these organisations and how it is subsequently used within them in the future. The only real question is which of the big players will leap first?


Ian King, Senior Principal Consultant, Head of Market and Reference Data



Pricing as a Service in the Cloud: the models

Product valuation workloads for Pricing and Risk Management are, in general, compute-intensive workloads which can be run in parallel using a Compute Grid middleware: they can be split into individual tasks, each needing only a well-defined subset of data. These tasks will, in general, not involve any client-sensitive data (as most internal client IDs are meaningless outside the organisation) but will require large amounts of Compute, which makes them ideal candidates for Cloud Computing resources. There are various possible deployment scenarios in which Cloud can be used, and Excelian has experience using all of these models, having worked with several clients in Financial Services: this is not science fiction, it is reality and it is happening now.
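To make the task-decomposition idea concrete, here is a minimal, hypothetical Python sketch. It is not Excelian's or any vendor's Grid middleware API: the portfolio, the price_trade pricing function and the use of a local process pool as a stand-in for Compute Grid scheduling are all illustrative assumptions.

# Hypothetical illustration: splitting a valuation batch into independent tasks.
# A real deployment would submit these tasks to a Compute Grid middleware
# (on-premise or in the Cloud) rather than a local process pool.
from concurrent.futures import ProcessPoolExecutor

def price_trade(trade):
    """Placeholder pricing model: each task needs only its own small data set."""
    notional, rate, years = trade["notional"], trade["rate"], trade["years"]
    return trade["id"], notional * (1 + rate) ** years

def run_valuation_batch(trades, max_workers=8):
    # Each trade is a self-contained task: no client-sensitive data is required,
    # only anonymous identifiers and market inputs.
    with ProcessPoolExecutor(max_workers=max_workers) as pool:
        return dict(pool.map(price_trade, trades))

if __name__ == "__main__":
    portfolio = [{"id": f"T{i}", "notional": 1_000_000, "rate": 0.02, "years": 5}
                 for i in range(100)]
    results = run_valuation_batch(portfolio)
    print(len(results), "trades valued")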

Each model below is illustrated by an architecture diagram showing the ON PREMISE / CLOUD split of the trading and risk systems, workstations, Compute Grid middleware, analytics, data management (cache, filesystem), storage for inputs/outputs, servers and IaaS VMs or PaaS instances.

CLOUD BURSTING

In this scenario, additional Cloud-based instances are added to the on-premise Compute Grid to add flexibility, the main purpose being to size the on-premise environment for average load instead of peak load. These additional resources can be added on a scheduled basis (e.g. every day at 8pm prior to the kick-off of the batch) or on a dynamic basis (e.g. when Compute demand goes beyond a threshold), the latter depending mainly on the capability of the Compute Grid middleware being used.

With one particular client Excelian has worked with, the main benefit has been to optimise the TCO of their Grid footprint and to avoid increasing the pressure on the in-house datacentres, which were almost running at capacity.

OFF-PREMISE COMPUTE

In this scenario, only the controlling components of the Grid (head nodes, broker nodes, data grids etc.) are kept on premise and all the Compute happens off-premise in the Cloud. This approach offers increased dynamicity and flexibility but is dependent on the workload (e.g. data-intensive workloads will not always support this deployment model). Cloud providers now offer direct connectivity to their infrastructure through dedicated lines, which increases the versatility of this approach. The main benefit we have seen for our client using this approach has been to closely correlate the Compute demand curve with the cost curve and to break the traditional three months' lead time for new compute / two years' depreciation cycle that most Financial Institutions have to face.

GRID IN THE CLOUD

In this scenario, all the Compute Grid elements are deployed in the Cloud, offering full flexibility. Data-intensive workloads can be executed in the Cloud, with data being kept in the Cloud and potential data aggregation phases happening off-premise in order to reduce the traffic between on and off-premise. Excelian has implemented a version of this architecture leveraging Windows Azure as a Cloud platform: the scheduling happens using Microsoft HPC Server, packaged and deployed in the Cloud and accessible through a set of simplified REST APIs from on-premise. Excelian has also implemented this model on Amazon AWS using Hadoop Elastic MapReduce as a Compute Grid middleware. With one client Excelian has worked with, the main benefit has been the use of simple turn-key solutions, avoiding a complicated upfront sizing exercise: only the required Compute was made available in the Cloud, based on actual demand, which was dependent on the uptake of a new business. For another client, this approach was leveraged to run back-testing of models at full production scale, which was impossible to do in-house without affecting production workloads.
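As an illustration of the scheduled and dynamic Cloud bursting decisions described above, here is a small, hypothetical Python sketch. The threshold, the pending_tasks/idle_cores metrics and the provision_cloud_nodes/release_cloud_nodes calls are assumptions standing in for whatever the chosen Compute Grid middleware and Cloud provider actually expose; it is a sketch of the control loop, not a particular product's API.

# Hypothetical control loop for Cloud bursting: keep the on-premise grid
# sized for average load and add Cloud nodes only when demand spikes.
import time
from datetime import datetime

BURST_THRESHOLD = 4.0      # pending tasks per idle core before bursting
BATCH_HOUR = 20            # scheduled burst ahead of the 8pm overnight batch
MAX_CLOUD_NODES = 200

def should_burst(pending_tasks: int, idle_cores: int, now: datetime) -> bool:
    # Scheduled burst: always add capacity just before the overnight batch.
    if now.hour == BATCH_HOUR:
        return True
    # Dynamic burst: demand per idle core exceeds the configured threshold.
    return pending_tasks > BURST_THRESHOLD * max(idle_cores, 1)

def autoscale(grid, cloud, poll_seconds: int = 60) -> None:
    """grid and cloud are placeholders for middleware/provider clients."""
    while True:
        pending = grid.pending_tasks()          # assumed monitoring call
        idle = grid.idle_cores()                # assumed monitoring call
        if should_burst(pending, idle, datetime.now()):
            cloud.provision_cloud_nodes(count=max(1, min(pending // 8, MAX_CLOUD_NODES)))
        elif pending == 0:
            cloud.release_cloud_nodes()         # scale back when the queue drains
        time.sleep(poll_seconds)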


Andre Nedelcoux, Partner, Head of Technical Consulting



Microsoft catching up with Amazon in the Cloud space: analysing key new features of Microsoft Azure for compute grids

Microsoft Azure has recently enhanced its service capabilities with features relevant to Financial Services use cases, and we thought it would be interesting to look at them in more detail. These features were highly anticipated by the community, and Azure Cloud Services can now be considered a strong contender for most off-premise Compute Grid deployments. Microsoft announced Virtual Networking enhancements and the general availability of ExpressRoute connections at TechEd North America in May 2014. In particular, the Multiple Site-to-Site connection feature allows multiple on-premise locations to connect to a single Azure Virtual Network, while VNet-to-VNet connectivity allows communication between Azure Virtual Networks. In addition, thanks to ExpressRoute, these multiple organisational units can be incorporated into organisations' WANs in a secure fashion. This opens up a broader spectrum of Grid architecture designs that were not possible or feasible in the past.

THE FLEXIBILITY AND SCALABILITY OF THE AZURE CLOUD, TOGETHER WITH THE SIMPLICITY OF PROVISIONING AND DEPROVISIONING GRID NODES, ENABLE THE AUTOMATION OF GRID RESOURCE DEPLOYMENT.

The number, location and performance capabilities of Grid Computing resources can be set and modified according to real-time requirements. In theory, nothing stands in the way of automating these changes, so the Grid architecture could actually be modified on the fly, for instance 'moving' Compute nodes between the EMEA and US regions once peak time in the EMEA region has ended.
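A hypothetical sketch of such a 'follow-the-sun' policy is shown below. The region names, peak windows and the target_region helper are illustrative assumptions rather than an Azure API; the actual node moves would be performed through whatever provisioning tooling the Grid uses.

# Hypothetical follow-the-sun policy: decide which region should host the bulk
# of the Compute nodes based on the time of day (UTC).
from datetime import datetime, timezone

PEAK_WINDOWS = {
    "EMEA": range(7, 17),   # e.g. an Azure region such as West Europe
    "US": range(13, 22),    # e.g. an Azure region such as East US
}

def target_region(now: datetime) -> str:
    """Return the region whose assumed peak window covers the current hour."""
    hour = now.astimezone(timezone.utc).hour
    for region, window in PEAK_WINDOWS.items():
        if hour in window:
            return region
    # Outside both peaks, park capacity wherever the overnight batch runs.
    return "EMEA"

if __name__ == "__main__":
    print("Compute nodes should currently favour:",
          target_region(datetime.now(timezone.utc)))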

DATA TRANSFERS AND CONNECTIVITY

For Grid Computing purposes, the most adequate connectivity solution is ExpressRoute private site-to-site connectivity over one of the exchange providers supported by Microsoft. The solution enables an organisation to utilise a private connection to the Azure Cloud at an Exchange Provider location. ExpressRoute is an optimal solution due to its reliability, speed (up to 10 Gbps) and low latency.

THE HIGHEST POTENTIAL SECURITY LEVEL IS ASSURED BY ISOLATING THE TRAFFIC BETWEEN THE CUSTOMER'S PREMISES AND THE AZURE INFRASTRUCTURE. USERS ALSO BENEFIT FROM AVOIDING THE RISKS RELATED TO EXPOSURE TO THE INTERNET AND PUBLIC INFRASTRUCTURE.

This, combined with the default features of Azure Virtual Networks (i.e. logical isolation with control over the network given to the user) and subnets with private addresses, may be considered an adequate level of security and isolation for running Grid jobs. The IP addresses of Azure components are configured by the customer and must be unique across the organisation and other Azure Virtual Networks. This requirement integrates the Azure components into the organisation's WAN and reduces potential issues with the Grid Management middleware platform and communication between its Grid components. With a guaranteed data throughput of 10 Gbps, ExpressRoute is also adequate to support most of the Grid's potential data transfer requirements. It is worth mentioning that throughput can only be guaranteed for the connection to Azure: at the time of writing, Microsoft cannot guarantee throughput for data transfers within and between the Azure regions.



Depending on Grid requirements, this could be a serious design constraint, so the issue should be revisited with Microsoft once the exact data throughput requirements are known.

SECURITY

Security and regulatory constraints are potentially the most delicate aspects of integrating Azure-based components with on-premise Grids. Undoubtedly, Microsoft takes all possible measures to position itself as a trustworthy organisation with a "top priority on security" mind-set. Generally speaking, the combination of Virtual Networks and ExpressRoute provides good security measures, and the isolation of environments that comes with the solution is strong.

IN ADDITION, ACCESS TO THE ENDPOINTS CAN BE SECURED WITH ADDITIONAL RULES, FOR INSTANCE IP ACCESS CONTROL LISTS, AND ANY VM ACCESS PORT CAN BE CLOSED WHEN REQUIRED.

It is also worth mentioning that a VM cannot send or receive any Layer-2 traffic and cannot snoop traffic that is not destined for it.

A WORD ON UTILISATION AND THE COST BREAKEVEN POINT

In environments characterised by high utilisation levels, Azure could actually become significantly more expensive than running the Grid in-house and could potentially undermine any business case for migration to the Cloud. The cost benefits become visible for Grids under roughly 60% utilisation and increase sharply for Grids with lower utilisation. Obviously, the breakeven numbers will vary for specific environments and depend on the size of the Grid. It is worth noting that for large, critical Grid installations, Microsoft has various pricing levels that could bring the predicted costs down considerably, depending on the number of required resources, usage patterns, high availability and disaster recovery requirements. With regard to the cost split within Azure itself, we have noticed that for large installations the majority of the Cloud deployment cost (up to 95% for grids over 15,000 nodes) was related to compute power. This is worth bearing in mind when thinking about costs versus scale, because for smaller Grids the costs of secure connectivity, storage and other related resources can have a considerable impact on the overall cost picture.
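To make the utilisation argument concrete, here is a small, hypothetical Python calculation. All the prices and core counts are illustrative placeholders, not Azure's or any bank's actual rates; the point is simply that a pay-per-use Cloud Grid wins below a certain average utilisation and loses above it.

# Hypothetical break-even calculation: on-premise cores carry a fixed cost
# whether they are busy or idle, whereas Cloud cores are paid per hour used.
HOURS_PER_MONTH = 730

def monthly_cost_on_prem(cores: int, fixed_cost_per_core_month: float = 35.0) -> float:
    # Fixed cost: hardware depreciation, datacentre space, power, support.
    return cores * fixed_cost_per_core_month

def monthly_cost_cloud(cores: int, utilisation: float, price_per_core_hour: float = 0.08) -> float:
    # Pay only for the hours the cores are actually busy.
    return cores * utilisation * HOURS_PER_MONTH * price_per_core_hour

if __name__ == "__main__":
    cores = 10_000
    for utilisation in (0.3, 0.6, 0.9):
        on_prem = monthly_cost_on_prem(cores)
        cloud = monthly_cost_cloud(cores, utilisation)
        cheaper = "cloud" if cloud < on_prem else "on-premise"
        print(f"utilisation {utilisation:.0%}: on-prem ${on_prem:,.0f}, "
              f"cloud ${cloud:,.0f} -> {cheaper} cheaper")

With these placeholder rates the break-even sits at around 60% average utilisation, which is consistent with the figure quoted above; with different rates the crossover point moves, but the shape of the argument stays the same.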

Greg Walczak, Senior Consultant



What about software packages for Risk Management?

Excelian conducted a Cloud feasibility study for the Counterparty Credit Risk (CCR) system of an APAC Investment Bank whose Risk was calculated using a software package with a Compute Grid component. Amazon had been selected as the bank-wide Cloud partner, so EC2 was the logical choice of service within AWS, whilst HPC Server was kept as the distribution middleware for consistency with the on-premise configuration.

Firstly, Excelian created and paired together an Elastic IP and a fixed network interface. A static IP was required as a consistent communication target, and the MAC address was required in order to provision a license for the third-party CCR vendor application. Excelian then provisioned a Windows VM and attached the IP/MAC address pair. After that, an Active Directory domain was set up in the Cloud, as this is required for HPC Server to function. It was decided to create a standalone domain, rather than bridge the bank's existing domain into the Cloud; this was a simpler approach that could be implemented faster and did not change the submission policies in effect on the on-premise Grid.
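For illustration, here is a minimal sketch of that provisioning sequence using the modern boto3 SDK (the POC itself predates boto3, and the AMI, subnet, region and instance type shown are placeholders): allocate an Elastic IP, create a fixed network interface whose MAC address can be used for licensing, launch a Windows instance on that interface, and associate the address.

# Hypothetical provisioning sketch with boto3; IDs and sizes are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="ap-southeast-1")

# 1. Fixed network interface: its MAC address stays stable, which is what the
#    third-party CCR vendor license was bound to in the POC.
eni = ec2.create_network_interface(SubnetId="subnet-0123456789abcdef0",
                                   Description="CCR grid management node")
eni_id = eni["NetworkInterface"]["NetworkInterfaceId"]
print("MAC for licensing:", eni["NetworkInterface"]["MacAddress"])

# 2. Elastic IP: a consistent communication target for the on-premise side.
eip = ec2.allocate_address(Domain="vpc")

# 3. Windows VM attached to the fixed interface (placeholder Windows AMI).
run = ec2.run_instances(ImageId="ami-0abcdef1234567890",
                        InstanceType="m4.xlarge",
                        MinCount=1, MaxCount=1,
                        NetworkInterfaces=[{"DeviceIndex": 0,
                                            "NetworkInterfaceId": eni_id}])
instance_id = run["Instances"][0]["InstanceId"]

# 4. Pair the Elastic IP with the fixed interface.
ec2.associate_address(AllocationId=eip["AllocationId"],
                      NetworkInterfaceId=eni_id)
print("Management node", instance_id, "reachable at", eip["PublicIp"])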

UNLIKE THE ON-PREMISE GRID, WHERE A SEPARATE FILE SERVER AND HEAD NODE ARE PROVISIONED, IT WAS DECIDED TO COMBINE THESE IN THE CLOUD.

This was, again, partially for simplicity but also to help keep costs down. The on-premise servers were deemed to have an acceptably low load, so no performance issues were expected from combining them. This meant installing the third-party vendor application license, HPC Server and the internally built integration layer onto this one machine. All relevant static configuration files were updated and uploaded to the Cloud as well.

The internally built integration layer is a web service which simply presents an API back to other internal applications. This actually helped significantly with the Cloud POC, as it enabled the same web API to be exposed back to the bank network once the firewall was correctly configured. The result was an HTTPS connection for sending jobs and an integration layer interfaced with the Grid in the Cloud. Excelian created AMI reference snapshots of the completed management node in the Cloud, so that it could be torn down and reprovisioned when necessary, including changing the "size" of the provisioned VM depending on expected load. Excelian also created a Compute node AMI, making it easy to provision as many Compute resources in the Cloud as required.

EXCELIAN WAS NOW READY TO RUN A SUITE OF TESTS, INCLUDING A PERFORMANCE COMPARISON BETWEEN THE CLOUD AND THE ON-PREMISE GRID. INITIALLY, THIS WAS DONE BY STAGING THE RELEVANT FILES IN THE CLOUD AND SUBMITTING ENTIRELY WITHIN THE CLOUD.

A full test submitting from a machine hosted on-premise into the Cloud was then performed, with everything working as anticipated. A ~30% slowdown was noted in single-threaded performance between the on-premise hardware and the Cloud VMs; it was assumed this was caused by the forced use of hyperthreading on Amazon EC2 Windows instances. Overall, there were very few issues to overcome, and most of these were internal policy and security concerns rather than technical issues - for example, securing the connectivity between the bank network and the Cloud to ensure data was obfuscated and contained no sensitive information.

The POC proved to be a success, with all of the goals being met. Excelian was able to ascertain with reasonable certainty that the largest of the overnight CCR batch processes could be completed at a Cloud cost of ~$40-60, which compared reasonably favourably with the cost of maintaining the on-premise hardware currently in place.

Mark Perkins, Senior Consultant




Beyond Cloud, how will the landscape continue to change?

The way we work with infrastructure is changing at an unprecedented rate. Anyone who is keenly interested in DevOps will understand agility in practice, but now it's becoming even easier. Virtualisation technology is now mature and used at scale, but as companies bleed every last drop of performance out of it, the question is raised: can anything else be done? Meet Docker.

DOCKER

Docker arrived as the 'poster child' of the containerisation movement. It is a container-based approach which disposes of the OS overhead associated with VMs: it is fast, lightweight and puts only a thin wrapper over applications. Wrap up a Cassandra node, a SQL database or an NGINX webserver into a container and you can then push, pull and commit the container images using the familiar concepts of source control (think Git, Mercurial). The key difference between Docker and virtualisation is performance: with Docker there is no need to emulate an entire OS. Instead it uses the resource isolation features of the Linux kernel (LXC). This agility means Docker works well with DevOps, helping to enable flexibility and portability in where the application can run, whether on-premise, public Cloud, private Cloud, bare metal, etc.

HOW TO SCALE DOCKER: KUBERNETES?

While Docker gives us a lightweight and agile means of containerisation, there may still be hundreds or thousands of containers to manage. Kubernetes, designed by Google and leveraging their extensive experience, is a solution that helps revolutionise the common thinking about deploying and running apps at scale.

KUBERNETES IS A SYSTEM FOR MANAGING CONTAINERISED CLUSTER APPLICATIONS ACROSS MULTIPLE HOSTS, PROVIDING BASIC MECHANISMS FOR DEPLOYMENT, MAINTENANCE, AND SCALING OF APPLICATIONS.

Kubernetes can be used to push containers into a company datacentre or onto public Clouds - AWS, Azure, Rackspace etc. - and then scale them up and down across its 'minion' (worker) nodes.

IN PRACTICE

Some organisations have really latched onto DevOps. Here in our own lab environments, we enable everyone to become a full stack developer. Being able to share containers, work with agility and start feeding data into "Dockerised" services means we have full control over every aspect of our runtime.

WITH THE WAVE OF OPEN SOURCE, BIG DATA AND INFRASTRUCTURE TECHNOLOGY THAT'S EXPLODED IN THE LAST 12 MONTHS, AGILITY IS KEY. IT'S BECOMING EASIER AND EASIER OR LESS DIFFICULT (!) TO WORK AT SCALE AND WITH CONFIDENCE.

It's more common for us to be talking about Cassandra, Spark and Terabytes of data over many servers than the traditional SpringCluster with Oracle RAC. Cloud is now commonplace and our working methodologies are changing. I've always been amazed at the amount of 'rinse and repeat' in managing and deploying applications; it is about time these tools arrived. And I'm not the only one breathing a sigh of relief - just look at Google Trends.
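As a small illustration of the push/pull/run workflow described above, here is a hedged sketch using the Docker SDK for Python (a later convenience than the docker CLI of the article's era); the image name, container name and port mapping are arbitrary examples.

# Hypothetical sketch of the Docker workflow: pull an image, run a container,
# and inspect it - the same operations the CLI exposes as docker pull/run/ps.
import docker

client = docker.from_env()  # talk to the local Docker daemon

# Pull a public image (analogous to 'docker pull nginx:alpine').
image = client.images.pull("nginx:alpine")

# Run it as a lightweight, isolated container with a port mapping.
container = client.containers.run("nginx:alpine", detach=True,
                                   ports={"80/tcp": 8080},
                                   name="demo-nginx")

print(container.name, container.status)
for c in client.containers.list():
    print("running:", c.name, c.image.tags)

# Tear it down; the image remains cached locally for the next run.
container.stop()
container.remove()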


Neil Avery, Technical Consulting, Chief Technology Officer


What does one of our partners think about Cloud? A Time of Unprecedented Change in Data Management

The prominence of data within Financial Markets has never been greater. Regulation, cost pressures and the creation of new business and operating models since the 2008-09 Financial Crisis have pushed the issue of data to the top of the business agenda for many financial institutions. I believe this will be good for the industry yet potentially very disruptive, due to the confluence of regulatory overload for existing players and the advent of new technologies such as Cloud and NoSQL databases, which is making it easier for new entrants to bring new services to market faster than ever before.

CONTROLLING THE FLOW

I think it remains the case that data within many Financial Market institutions is analogous to water:

Everyone needs it
Everyone knows where to get it
Nobody likes to share it
Nobody is really certain of its source
Nobody is quite sure where it goes to
Nobody knows its true cost
Nobody knows how much is wasted
Everyone assumes it is of high quality
You only ever know it has gone bad after you have drunk it

In response to the situation alluded to above, many institutions have now integrated data governance as a key function of the business. In doing so, new positions such as Chief Data Officer have been created, so that somebody has executive responsibility for data and its usage throughout the organisation.

Data governance is still a relatively immature function within Financial Markets; however, industry bodies such as the EDM Council are offering help through initiatives such as the Data Maturity Model (DMM) for objectively assessing an institution's Data Management capability. Much remains to be done in my view, across all areas, but particularly in data quality and in understanding where data flows to within the business. Put a different way, not many large organisations yet have a firm handle on the "What happens if we switch this data source off?" question - something that deserves more attention if you are trying to control costs and understand the risks of Data Management.

REGULATORY EFFECTS

Regulators are both contributing to the complexity of Data Management and helping the industry to address its challenges, both directly and indirectly. Obviously, appropriate data standards would be of great benefit to the industry - something the industry itself has been appalling at implementing under its own auspices. The regulatory-led Legal Entity Identifier (LEI) is now with us, although its original aim of easier Counterparty Risk aggregation remains as yet unfulfilled.




The LEI and other initiatives, such as the BCBS 239 standards for Risk Data Aggregation and Reporting from the Basel Committee, are driving and enforcing change and, I think, show the way in which regulation can improve the efficiency of the industry. Just as important as prescriptive regulation about data are the indirect effects, for example those resulting from requirements such as Credit Valuation Adjustment (CVA), which needs up-to-date exposure information as a necessary part of the trading process.

SOCIALISING DATA

Whilst many CDOs and CTOs are understandably very focused on the internal day-to-day pressures of operational and regulatory reporting, there is a great opportunity at the moment to be entrepreneurial and creative with data and how it is used. Looking at things internally first, technologies such as Enterprise Social Networks and processes such as crowd-sourcing can really change some of the internal economics of how the aims of Data Management are achieved.

GETTING EVERYONE INVOLVED IN CONTRIBUTING TO THE DATA QUALITY PROCESS CAN REDUCE COSTS, REDUCE TIME TO MARKET AND INCREASE EVERYONE’S ENGAGEMENT IN THE PROCESS.

In my experience, many Data Management projects are instigated out of one particular department, where the needs and indeed the expertise of downstream users are not at first properly considered. Let's get more people involved and make data more social within the organisation!

EXTERNALISING DATA

Another question to ask is whether any of the data you use internally might have value externally. Presented with the option, I have been surprised by how many companies, both large and small, have said that they have data they would like to be able to sell to others. The Financial Markets have numerous examples of data re-cycling where, in the worst cases, organisations are actually paying a supplier for some of the data that they generated themselves. With the advent of Cloud and related NoSQL database technology, the technology barriers to direct data publishing, distribution, monetisation and partnering are lower than ever before. Obviously not all data used internally has value or is appropriate for external use, but the more entrepreneurial aspects of Data Management are certainly worth considering, to see how best you can "sweat the asset" of the data you already own.

So, to conclude: in my view, now is the time to look at your data and your Data Management practices to see what should be turned inside-out, upside-down and any which way it can be, to increase efficiency, reduce costs and potentially even make some money!

Brian Sentance, Founding Director and CEO, Xenomorph




Key trends from the 2014 High Performance Computing maturity benchmark

As the premier High Performance Computing (HPC) consultancy in the Financial Services sector, Excelian conducts a technology benchmark on the adoption of HPC technologies in the market every year. The survey is free and conducted anonymously, with the full and detailed results shared exclusively with the 17 Tier I and Tier II investment banks that took part this year.

KEY HIGH-LEVEL TRENDS EMERGING FROM THE SURVEY ARE AS FOLLOWS:

The market share for Compute Grid middleware continues to shift significantly, with IBM Platform Symphony becoming the prevalent solution for large-scale installations: the solution is now deployed on more than 50% of the cores on the market. This trend will probably accelerate in the next 18 months, specifically following the uncertainty around the future of TIBCO, which has been acquired by an investment fund. There is definitely space for a new player in this market, which could either be Microsoft HPC Server, currently used by a few players on the market, or maybe an Open Source competitor - the likes of Apache Storm, for instance, offering interesting possibilities when it comes to distributing calculations.

25% of the banks surveyed are now leveraging multiple types of resources as part of their Compute stack, whether scavenged nodes (workstations/servers), Cloud-based nodes or GPU accelerator nodes. The constant push towards higher cost efficiency is the main driver here, and making sure that expensive resources are only used for SLA-critical workloads has become a key approach to optimising compute cost.

GPU adoption continues to progress, with 37% of the market using the technology as part of their stack (compared with 25% of the market 18 months ago). Xeon Phi hasn't managed to seriously challenge NVIDIA's GPUs: the initial performance challenges and the extended time to market have been relatively disappointing and probably explain the low level of adoption so far.

NoSQL solutions are starting to be adopted across the board in Investment Banking, with 29% of the banks surveyed using MongoDB, Cassandra or HDFS/Hadoop as part of their Compute stack. DataGrid solutions are still in use to optimise data distribution in HPC environments, with NoSQL solutions being used mostly as inbound/outbound data stores for the core compute engines. However, it is likely that they will progressively start to cannibalise the DataGrid market, mainly for cost reasons but also because it is becoming increasingly complex to maintain two different technologies dedicated to data management.

At a high level, High Performance Computing solutions have entered a new era. The patterns and architecture for standard compute grid solutions are well known and understood, and Excelian described this Commodity model in the previous Grid maturity benchmark edition: a vendor-based middleware deployed across large Compute farms, shared at the enterprise level between applications which leverage pools of commoditised Compute.

COMPUTE REMAINS A KEY ASSET FOR MOST BANKS, AND A SIGNIFICANT NUMBER OF BANKS WILL SEE THEIR COMPUTE ESTATE INCREASE BY 20 TO 30% AS REGULATORY REQUIREMENTS - MORE SPECIFICALLY CREDIT VALUATION ADJUSTMENT AND COUNTERPARTY CREDIT RISK-RELATED REQUIREMENTS - CONTINUE TO FUEL THIS INCREASE.

The cost pressure that the industry is going through is, on the other hand, pushing technology innovation and stretching the model of the on-premise utility Grid: as described above, new hardware platforms are more and more prevalent, Cloud is being used more widely, and alternative Open Source solutions are being adopted more and more aggressively. If you wish to participate in the survey, measure your Grid maturity against the rest of the market and get access to the full report of the study, please get in touch at marketing@excelian.com.



Andre Nedelcoux, Partner, Head of Technical Consulting



This quarter's new technology: Terracotta

Software AG's Terracotta Universal Messaging (TUM) is a data exchange middleware with a highly optimised Java messaging server backend, capable of supporting many thousands of concurrent users while ensuring high message throughput and low latency in any of the standard communication modes: Unicast, Multicast and IPC. The major standards of JMS, MQTT and HTML5 are all supported too, as are the NHP, REST and SOAP interfaces, and a wide variety of delivery modes including Enterprise, Web, Mobile and Embedded are available. High availability through clustering is supported and is easily configurable using the Enterprise Manager GUI; security configuration, server-side monitoring and log monitoring are also made simple and transparent with the GUI.

The possible data exchange options are Channels - pub/sub to multiple consumers; Queues - delivery to a single consumer; and DataGroups - whereby membership of one or more groups determines the messages received, and producers are actively aware of and in control of group membership. Transactional semantics are possible for both channels and queues, ensuring guaranteed delivery. The main development APIs are in C#, C++, Java and Python, and a consistent API across the different publication modes makes it possible to develop applications which can seamlessly be switched between the various modes. An admin API is also available for developing bespoke admin-type applications.

Excelian recently used TUM at one of its clients as part of a broader FX e-trading platform build-out. The Java client-side API was used to develop a Java Spring server-side application which publishes valid forward trade settlement dates for allowable tenors, and IMM dates, to all end consumers via channels. Data messaging is done using lightweight Google Protocol Buffers. An internal Reference Data technology group publishes holidays to a REPLY channel in response to a request for holidays per currency published to a REQUEST channel. The application consumes the holidays and first persists them to a so-called 'back-up' queue (depth one, to ensure only the last written is stored) for use when the holidays service is not available; it then proceeds to calculate valid forward dates and IMM dates per currency, and finally publishes the output to a channel for consumption by pricing servers and user GUIs. Three-server clustering has been configured in the production environment to ensure high availability of the service. The Terracotta Quartz Maven plugin is built into the application and used for precise scheduling of the trade date roll at 5pm New York time.

Another TUM application used the Java admin API. The service provided is an on-demand switch which, when run, modifies the currency-pair-based filter applied to joins between source and destination FX pricing channels. The join filters are critical for determining the route that FX pricing query messages published per currency pair take: whether it be the legacy Reuters TRM route or an in-house pricing server route.
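The distinction between Channels, Queues and DataGroups can be illustrated with a deliberately simplified, library-agnostic Python sketch. It mimics the delivery semantics described above and is not the Universal Messaging API; the class and method names are purely illustrative.

# Conceptual illustration of the three TUM data exchange options; this is a
# toy in-memory model of the delivery semantics, not the Universal Messaging API.
from collections import deque

class Channel:
    """Pub/sub: every subscriber receives every published message."""
    def __init__(self):
        self.subscribers = []
    def subscribe(self, callback):
        self.subscribers.append(callback)
    def publish(self, message):
        for callback in self.subscribers:
            callback(message)

class Queue:
    """Point-to-point: each message is delivered to a single consumer."""
    def __init__(self):
        self.messages = deque()
    def publish(self, message):
        self.messages.append(message)
    def consume(self):
        return self.messages.popleft() if self.messages else None

class DataGroup:
    """Membership-based routing: the producer controls who belongs to the group."""
    def __init__(self):
        self.members = {}
    def add_member(self, name, callback):   # producer manages membership
        self.members[name] = callback
    def publish(self, message):
        for callback in self.members.values():
            callback(message)

if __name__ == "__main__":
    prices = Channel()
    prices.subscribe(lambda m: print("pricing server got:", m))
    prices.subscribe(lambda m: print("trader GUI got:", m))
    prices.publish({"pair": "EURUSD", "tenor": "1M", "date": "2015-07-16"})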

IN ADDITION TO THESE TWO SAMPLE APPLICATIONS, TUM HAS BEEN USED THROUGHOUT THE FX E-TRADING PLATFORM AS THE MESSAGING MIDDLEWARE OF CHOICE FOR LIVE PRICING.

In conclusion, TUM is a reliable, low-latency messaging middleware with an easy-to-use API allowing for rapid development. Java-style documentation and sample code covering every aspect take the guesswork out of API usage and, in our recent experience, contributed significantly to the speedy delivery of robust applications. One pain point, though, was that technical support in the Australia office is a little thin on the ground, and the team had to wait several days for a response to a query. The response, when it arrived, was detailed enough to resolve the issue without need for further follow-up.


Conrad Mellin, Principal Consultant



Tech events Excelian has taken part in: ng-europe, the European AngularJS conference

Excelian recently attended the ng-europe conference in Paris. The conference was extremely popular, selling out weeks before the event took place. This attests to the huge popularity of and interest in the Angular framework, with large organisations such as Barclays, Goldman Sachs, Morgan Stanley, Virgin, HBO and Netflix buying in, as well as over 160 Google applications using it. For those who haven't taken a look at this framework before, it's definitely time to take a look! AngularJS is a powerful framework for dynamic web apps - it allows the user to extend HTML's syntax to express the application's components in a declarative way. Data binding allows changes in the user's logic to be reflected in their view, providing a clear separation of concerns. Angular also has a baked-in dependency injection mechanism that makes testing an app easy. For those who are already using Angular, here is the low-down on the revelations at ng-europe:

ANGULAR 1.3

IE8 is no longer supported - in their words: "Yes, it's a feature". The team at Google feel that it is halting progression: the longer they support IE8, the longer large corporations will refuse to upgrade.

Huge performance increases - Angular have made several macro performance improvements which drastically improve performance.

One-way binding is finally here! - This type of binding gives the opportunity for huge performance gains when data is not very changeable. There are also new features allowing greater control when managing and evaluating the watch list.

ARIA support - The new ng-aria module provides support for adding ARIA attributes that convey state or semantic information about the application, in order to allow assistive technologies to convey appropriate information to people with disabilities.

ANGULAR 2.0

AngularJS 2.0 will be geared towards evergreen browsers, with little support for legacy browsers. The reason for this is that the foundation is built on ECMAScript 6; of course, Traceur will still be used to back-port to ECMAScript 5. The main new features of ECMAScript 6 are classes and improved inheritance declaration - this finally brings proper classes to JavaScript! Angular 2.0 builds on top of ECMAScript 6, introducing a new component in the pipeline called AtScript, which is similar to TypeScript. This will give Angular 2.0 the ability to use Types, Annotation and Introspection.

So, this is the part where things get a little crazy... The following have been removed from the Angular framework:

Modules (replaced by ES6 modules)
JqLite
Ng-model
$scope
Controllers

Seriously? Yes, seriously - Angular is moving to more of a component-based model, so Controller logic will be moved inside the Directives.

TECHNOLOGY AND FRAMEWORKS TO LOOK OUT FOR

The Ionic mobile framework is gaining traction in the market. This framework for Android and iOS apps builds on top of Angular, providing the developer with Directives and themes to give apps a native look. The Firebase framework provides a real-time backend as a service which allows apps to save, store and update data in real time directly from the browser. The Google Material specification was recently released, suggesting best practices in UX for material design; this is currently being implemented by Google in an Angular module called "Angular Material".


Tushara Fernando, Consultant


