Independent Benchmark: SQL-on-Hadoop Performance. Presto, Impala, Hive and Spark SQL on Cloudera and Hortonworks. Q2 2016
Table of Contents

Benchmark Approach
Benchmark Summary
Benchmark Results
SQL Compatibility Index
Benchmark Analysis
Notable Considerations
Configuration
Closing

About Radiant Advisors

Radiant Advisors is a leading strategic research and advisory firm that delivers innovative and relevant research and thought leadership to transform today's organizations into tomorrow's data-driven industry leaders. To learn more, visit www.radiantadvisors.com.
Benchmark Approach

With the rapidly evolving landscape and abundance of choices among SQL-on-Hadoop engines, available in both open source and proprietary vendor options, it has become increasingly difficult to track current capabilities and determine what is viable and optimal for use in Hadoop environments. Vendors publish their internal benchmark numbers in blogs and tech briefs, but this data is appropriately met with skepticism regarding both methodology and objectivity. As an independent research and advisory firm, Radiant Advisors continually tracks and analyzes many of these reports. When vendors and organizations within our executive and advisory networks ask for our point of view on such reports, we often dismiss them as incomparable simply due to differences in hardware and cluster sizes. The beauty of Hadoop is that everything runs faster with more nodes, an easy way to paper over performance deficiencies. This underscores the importance and value of an independent benchmark.

The analysis and findings in this report represent an independently executed benchmark, based on the industry-standard TPC-DS scenario, performed by the Radiant Advisors engineering team. As the sponsor for this benchmark, Teradata provided the engineers remote access to a Teradata Hadoop Appliance at Teradata Labs with full, strict control of the servers. Together, Radiant Advisors and Teradata agreed on a scope of testing covering the latest versions of Presto, Impala, Hive and Spark SQL and the file formats most relevant to companies, data teams and analysts at the time of testing.
We focused on analyzing SQL performance response times as well as SQL compatibility and execution variability. We took this approach for relevancy based on industry needs observed from companies in our executive network. These companies not only seek the fastest SQL response times; they also need to balance performance against the amount of effort required to rewrite their existing SQL statements in reports and applications (SQL compatibility) and the completeness with which all of their SQL statements can be executed. SQL variability describes the consistency with which response times were observed, a proxy for the end user experience. Among the many interesting insights in our testing and analysis, we found that none of the SQL-on-Hadoop engines tested were able to execute all 99 queries. Accordingly, we measured, aggregated, analyzed and categorized the strengths of the engines by the types of queries run.

The purpose of this benchmark is to highlight what companies can expect when evaluating and selecting SQL engines for their Hadoop environments, not to obscure such insights by being drawn into debate over how to write a given SQL statement. As such, we elected not to publish how every SQL statement was written or the individual performance times for each query in each configuration. This benchmark aggregates and analyzes over 4,000 data points into meaningful, consistent analysis for the readers of this report.
Benchmark Summary
4,321 data points collected · 78 query streams run · 9 SQL engine configurations · 5 SQL-on-Hadoop engines tested · 2 Hadoop distributions

The TPC-DS benchmark was selected as the basis due to its industry-wide acceptance, strict execution standards and familiarity in evaluating decision support workloads. The TPC.org website provides all of the specifications, instructions and seed data needed for anyone to execute this test. This independent benchmark followed the TPC-DS instructions for the schema definition, the data generator program, the 99 ANSI SQL statements and the SQL execution streams (query order). The provided data generator program was used to create 100GB and 1TB data sets, each loaded into HDFS and then converted into the ORC and Parquet formats for the SQL engine being tested. For some SQL engines, table statistics were gathered and the queries rerun as an independent test. None of the tables were partitioned in any of the tests, and execution of multiple query streams ensured no data was cached. The query execution program ran all 99 SQL queries in the order specified by the query stream definitions, locally on the cluster, with statistics captured in log files for later analysis. If an original ANSI SQL statement could not be run, an alternate/modified SQL statement was used in its place. Development of alternate SQL statements was performed solely by the Radiant Advisors engineering team.

Data collection and calculations. Between three and six streams were run to collect each query's response times per SQL engine configuration. Of the observed values for each query, the highest value (worst performing) was removed from the average and standard deviation calculations. A minimum of three observed values was required for an average calculation, and when there were exactly three values, the standard deviation divided by the average was required to be less than 10% for the data set to be included in the results.

Benchmark testing scope. The focus of this benchmark was to gather and analyze data that would assist companies in choosing the best SQL engine(s) for their Hadoop environment based on response time performance, ANSI SQL language compatibility (the amount of SQL modification needed) and performance variability affecting the end user experience. Items out of scope for this benchmark included concurrency performance, linear performance scalability across nodes and data loading response times.
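To make the data reduction rule described above concrete, the sketch below expresses the trimmed-average calculation as a single SQL query. The run_log table and its columns are hypothetical stand-ins for the benchmark's log files, not the team's actual tooling; STDDEV is assumed to be available, as it is in Presto, Hive and Spark SQL.

    -- Hypothetical raw observations: one row per (engine configuration, query, stream).
    -- run_log(engine_config VARCHAR, query_id INTEGER, response_secs DOUBLE)
    WITH ranked AS (
      SELECT
        engine_config,
        query_id,
        response_secs,
        ROW_NUMBER() OVER (
          PARTITION BY engine_config, query_id
          ORDER BY response_secs DESC
        ) AS worst_rank
      FROM run_log
    )
    SELECT
      engine_config,
      query_id,
      AVG(response_secs)    AS avg_secs,
      STDDEV(response_secs) AS stddev_secs
    FROM ranked
    WHERE worst_rank > 1                         -- drop the single worst observation
    GROUP BY engine_config, query_id
    HAVING COUNT(*) >= 3                         -- require at least three remaining values
       AND (COUNT(*) > 3                         -- with exactly three, apply the 10% rule
            OR STDDEV(response_secs) / AVG(response_secs) < 0.10);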
SQL Compatibility Index Overview

Utilizing the 100GB benchmark for overall SQL query executions, we sought to determine the amount of effort required to modify SQL to run on each SQL-on-Hadoop engine. This comparison pairs the original ANSI SQL statements with the SQL actually used in each engine. The SQL compatibility index is a 1-5 scale, with 5 the best and 1 the worst. A score of 5 represents 100% compatibility: no modifications were required to execute. A score of 1 represents either no ability to execute the SQL function/purpose or a rewrite requiring very high, expert-level capability, a skillset rarely available. That leaves 2, 3 and 4 to represent high (2), medium (3) and low (4) degrees of modification. In this benchmark, queries were assigned only a 1, 3 or 5 rating, meaning not able to execute (1), modified (3), or executed in original form (5). An engine's index is the weighted sum of its per-query scores divided by the 99 queries. During the benchmark we classified a query as not able to execute under any of three conditions:

1. A SQL function or operation specified by the ANSI SQL query was not available in the SQL engine.
2. The level of effort of the SQL modification or rewrite was beyond the expertise typically found in large enterprises.
3. The execution times were orders of magnitude worse than nominal times and the query was therefore cancelled.

Example of the Presto SQL Compatibility calculation:
1. 60 Presto queries that ran as-is scored 5: (60 x 5) = 300
2. 18 Presto queries modified to run scored 3: (18 x 3) = 54
3. 21 Presto queries that did not run scored 1: (21 x 1) = 21
4. Weighted total = 300 + 54 + 21 = 375
5. SQL Compatibility Index = 375 / 99 queries = 3.79
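The same weighted calculation can be expressed compactly in SQL. The query_scores table below is a hypothetical structure for illustration, with one row per engine and query and a score of 1, 3 or 5 as defined above.

    -- Hypothetical scoring table: query_scores(engine VARCHAR, query_id INTEGER, score INTEGER)
    SELECT
      engine,
      CAST(SUM(score) AS DOUBLE) / 99 AS compatibility_index  -- 99 TPC-DS queries
    FROM query_scores
    GROUP BY engine
    ORDER BY compatibility_index DESC;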
Benchmark Results
Performance Highlights: 100GB Benchmark

2,897 data points collected · 46 query streams run · 9 SQL engine configurations · 5 SQL-on-Hadoop engines tested · 2 Hadoop distributions

Count of fastest average times across all 99 queries in the 100GB benchmark: Impala 58, Presto 25, Hive 6, Spark SQL 0, and 10 queries executed by no engine.

Impala had the best average response time for 58 queries, with an average of 2.52 seconds improvement when compared with the next best times from Presto. Gathering statistics for the Impala SQL engine improved performance in only 26 of its 62 queries.

Presto had the best average response time for 25 queries, with an average of 21.36 seconds improvement when compared with the next best times from Hive/Tez. Presto with ORC files performed 65 of its 78 queries better on HDP 2.4 than on CDH 5.6, with an average of 2.04 seconds improvement.

Hive executed 6 queries that no other SQL engine could. Hive with Tez (HDP 2.4) had an average response time 4 times faster than Hive (CDH 5.6) in 36 of the 38 queries executed.

Spark SQL did not have the best average response time for any of its 30 queries but still outperformed Hive (without Tez) in 29 queries. Spark SQL on Cloudera 5.6 with Parquet files executed 30 queries that were all faster than Hive on Cloudera with ORC files (both with and without statistics) but not faster than Impala, Presto or Hive with Tez on Hortonworks.
Performance Highlights: 1TB Benchmark

1,424 data points collected · 32 query streams run · 6 SQL engine configurations · 4 SQL-on-Hadoop engines tested · 2 Hadoop distributions

Count of fastest average times across all 99 queries in the 1TB benchmark: Impala 58, Presto 21, Hive 0, Spark SQL 0, and 20 queries executed by no engine.

Impala with statistics had the best average response time for 58 queries, with an average of 405 seconds improvement when compared with the next best times from Presto in 44 queries.

Presto had the best average response time for 21 queries, and Presto outperformed Impala (without statistics) for an additional 12 queries by an average improvement of 305 seconds.

Hive results are reported based on testing only the 10 best queries from the 100GB benchmark, yet none of them yielded the best overall average response times in the 1TB benchmark.

79 of the 99 queries were able to be executed by the SQL engines: 75 by Presto, 62 by Impala, and 10 selected for Hive. Spark SQL queries were not run in the 1TB benchmark.
SQL Execution Count, of all 99 queries in the 1TB benchmark: Presto 75, Impala 62, Hive (selected) 10.

79 of 99 queries were able to be executed by the SQL engines: 75 by Presto and 62 by Impala. Only 10 queries were selected to run on Hive at 1TB, based on the best average response times from the 100GB benchmark results; other Hive query results would have been inconsequential.
Impala's performance improved with statistics on 43 of its 62 queries, or 69% of the time, with an average response time improvement of 228.6 seconds in those 43 queries.

Presto on HDP 2.4 ran slightly faster than Presto on CDH 5.6 in 60 of the 65 average response times with ORC files (92% of Presto queries faster on HDP 2.4 than CDH 5.6).

Hive with Tez (HDP 2.4) had an average response time 5 times faster than Hive (CDH 5.6) in all 10 of its selected queries.*

Hive with Tez in HDP 2.4 was 2.1 times faster than Presto at 1TB in 9 of the 10 fastest Hive queries selected to run from the 100GB benchmark. This contrasts with Presto being 11.5 times faster than Hive with Tez for the same 10 queries at 100GB.*

* For the 10 fastest Hive queries selected to run from the 100GB benchmark.
SQL Compatibility Highlights: 100GB Benchmark

SQL Compatibility Index (1 to 5 scale): Presto 3.79, Impala 3.36, Hive/Tez 3.06, Hive 2.84, Spark SQL 2.21.

SQL Execution Count, of all 99 queries in the 100GB benchmark: Presto 78, Hive/Tez 63, Impala 62, Hive 55, Spark SQL 30.

89 of the 99 ANSI queries could be executed overall: 78 by Presto, 63 by Hive/Tez, 62 by Impala, 55 by Hive/HiveServer2, and 30 by Spark SQL.

Presto had the best SQL compatibility index at 3.79 (a measure of query compatibility and the customization required to execute), followed by Impala at 3.36 and Hive/Tez at 3.06.

Presto was also able to execute those same 78 queries with Parquet files on CDH 5.6 with similar response times.

See the SQL Compatibility Index section for further details on the index calculation and each SQL engine.
SQL Compatibility Details: 100GB Benchmark

Engine       As is   Modified   Not run   SQL Compatibility Index
Presto         60       18         21              3.79
Impala         55        7         37              3.36
Hive/Tez       38       26         35              3.06
Hive           36       19         44              2.84
Spark SQL      30        0         69              2.21

Presto was able to execute 78 of the 99 queries. Of those, 60 queries did not require any modification, 18 queries were modified and 21 queries were not able to execute. The resulting SQL Compatibility Index for Presto is 3.79.

Impala was able to execute 62 of the 99 queries. Of those, 55 queries did not require any modification, 7 queries were modified and 37 were not able to execute. The SQL Compatibility Index for Impala is 3.36.

Hive/Tez was able to execute 63 of the 99 queries. Of those, 38 queries did not require any modification, 26 queries were modified and 35 were not able to execute. The SQL Compatibility Index for Hive/Tez on Hortonworks is 3.06.

Hive on Cloudera was able to execute 55 of the 99 queries. Of those, 36 queries did not require any modification, 19 queries were modified and 44 queries did not execute. The SQL Compatibility Index for Hive on Cloudera is 2.84.

Spark SQL/Parquet on Cloudera was able to execute 30 of the 99 queries, none of which required modification; 69 queries did not execute. The SQL Compatibility Index for Spark SQL on Cloudera is 2.21.
Benchmark Analysis
Presto Analysis

Presto version 0.141t (released in April 2016) was evaluated in 5 configurations on Cloudera CDH 5.6 and Hortonworks HDP 2.4, utilizing both ORC and Parquet data files at 100GB and 1TB data set sizes. We observed the greatest breadth of SQL compatibility and ease of query modification, making Presto the easiest engine to work with. The 78 of 99 queries executed ran with little modification and represent the bulk of decision support queries found in enterprise BI systems. The small group of queries that could not execute involved INTERSECT, ROLLUP, EXISTS, EXCEPT or grouping by multiple sets of columns. Presto performed best with ORC data files on Hortonworks, rather than with ORC or Parquet data files on Cloudera. In the 100GB runs, 61 of the 78 queries that Presto executed were consistently under 4 seconds. Presto also led with its combination of SQL compatibility and close second-place performance. However, as the volume increased to 1TB, we observed that statistics collection for Impala and Hive/Tez resulted in a significant performance improvement. Presto does not currently utilize a cost-based optimizer, and results from the engines that have one show its benefit for large data sets.
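To illustrate the kind of modification involved, an INTERSECT (one of the constructs that could not run) can often be rewritten as a join over deduplicated sets. The sketch below uses hypothetical table and column names, not the actual TPC-DS rewrites from the benchmark; note that INTERSECT treats NULLs as equal, so a fully faithful rewrite may need extra NULL handling.

    -- Original ANSI form: customers present in both sales channels.
    --   SELECT customer_id FROM store_channel
    --   INTERSECT
    --   SELECT customer_id FROM web_channel;

    -- Equivalent rewrite without INTERSECT:
    SELECT DISTINCT s.customer_id
    FROM store_channel s
    JOIN web_channel w
      ON s.customer_id = w.customer_id;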
For Cloudera customers, Parquet files are the recommended standard for SQL access with Impala. Presto was also able to run its 78 queries against the same Parquet files, with respectable response times only slightly slower than Presto on ORC files. As data volumes start high and continue to grow over time, locking into a single data file format such as Parquet becomes a risk to future flexibility; Presto's ability to execute on Parquet gives Cloudera customers two long-term SQL options.

For Hortonworks customers, ORC data files were tested for both Hive/Tez and Presto. In the 100GB benchmark Presto performed better in nearly every query, but when the volume increased to 1TB, Hive/Tez outperformed Presto for the 10 highest-performing Hive queries chosen to be tested, likely due to its cost-based optimizer. The other Hive queries were not tested because they would not execute in a reasonable time frame. At this configuration with lower data volumes, we expect performance between Presto and Impala to be similar, while Hive and Hive with Tez are noticeably slower. As volumes increase to 1TB, a SQL engine with a cost-based optimizer will have the performance advantage.

Overall, Presto is an important player in the SQL-on-Hadoop landscape, with its SQL completeness and its openness to run on both the Cloudera and Hortonworks platforms and data file standards. Note: the upcoming Presto version 0.148t, due in July 2016, adds SQL capabilities for INTERSECT, ROLLUP, CUBE and grouping sets that would increase its ANSI SQL coverage.
Impala Analysis

Impala version 2.4 was run only on the Cloudera CDH 5.6 cluster, at 100GB and 1TB volumes, with and without gathering statistics. The Impala queries had good ANSI SQL compatibility and were reasonable to modify, allowing 62 of the 99 queries to run. Typical challenges were encountered with INTERSECT, ROLLUP, EXISTS and EXCEPT: while the intent of a query could usually be rewritten, we found an increased amount of modification work necessary. Impala had the majority of best average performance times at both the 100GB and 1TB data sets. In the 100GB runs, all 62 queries that Impala could execute were consistently under 4 seconds.
For Cloudera customers, Impala's solid performance, backed by its cost-based optimizer and the Parquet file standard, makes it the clear choice to date, but it also requires a well-managed table statistics gathering process. Hive with HiveServer2 has dramatically slower performance, but it does bring stability to long-running queries and extends SQL coverage beyond Impala where required. This calls for a degree of planning over which data sets will reside in ORC or Parquet formats for each SQL engine. Presto demonstrated shared support for Parquet files, which opens up another option for SQL access to Impala data sets.
At the higher 1TB data volume, Impala was still fastest in the same 58 queries, but the cost-based optimizer leveraging statistics improved performance in 43 of the 58 queries. Without statistics being gathered, Presto's non-cost-based optimizer would have claimed the best times for another 10 queries, averaging 305 seconds of improvement. It is important to note that this would not have helped Hive's cost-based optimizer, which was still significantly slower than even Impala without statistics.
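For reference, statistics gathering in Impala is a single statement per table. A minimal sketch, using one of the TPC-DS table names as an example:

    -- Gather table and column statistics so Impala's cost-based optimizer
    -- can choose better join orders and execution strategies.
    COMPUTE STATS store_sales;

    -- Inspect what the optimizer will see.
    SHOW TABLE STATS store_sales;
    SHOW COLUMN STATS store_sales;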
Hive Analysis

Hive 1.1.0 with HiveServer2 and ORC files on Cloudera 5.6, and Hive 1.2.1 with Tez and ORC files on Hortonworks 2.4, were tested with and without statistics. Hive and HiveQL scored poorly in SQL compatibility, due to the challenges and effort involved in the required SQL modifications. Hive with Tez was able to run 63 of the 99 queries, while Hive with HiveServer2 on Cloudera was only able to execute 55. Non-executing queries included those that were not SQL-compatible as well as queries outside a reasonable performance threshold: for the purposes of this performance-oriented benchmark, a Hive query's execution became inconsequential if its response time was hundreds or thousands of times greater than Impala's or Presto's.

For Hortonworks customers, Hive 1.2.1 with Tez utilizing ORC files and table statistics offers the best performance as data volumes scale up. At lower volumes or with very selective predicates, Presto will be significantly faster, without the need to collect statistics and with greater SQL ease of use. Once again, leveraging both SQL engines is a viable option when considering data set volume and operational performance needs. For Cloudera customers, the Parquet file standard for SQL access with Impala has likely already been adopted; however, if you do have ORC files in a Cloudera cluster that require SQL access, you are better off leveraging Presto than Hive 1.1 in Cloudera 5.6.

Notably, Hive could execute 6 queries that neither Impala nor Presto were able to run; these involved ROLLUP, INTERSECT and a comparative sub-query with GROUPING. Across all Hive queries executed and data sets gathered, Hive had the highest performance consistency in response times, never showing a standard deviation higher than 5% of the average response time. This predictability is crucial for large-scale queries that become operationalized and batch-oriented. Hive 1.2.1 with Tez on Hortonworks 2.4, leveraging statistics, improved performance by an average of 5x compared to Hive 1.1 with HiveServer2 on Cloudera 5.6 for the 10 fastest queries from the 100GB benchmark.
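Statistics collection in Hive is analogous, issued per table and optionally per column set. A minimal sketch with the same TPC-DS table:

    -- Table-level statistics (row counts, sizes) for the cost-based optimizer.
    ANALYZE TABLE store_sales COMPUTE STATISTICS;

    -- Column-level statistics (distinct values, min/max) used in join planning.
    ANALYZE TABLE store_sales COMPUTE STATISTICS FOR COLUMNS;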
Spark SQL Analysis

Spark SQL was tested on Cloudera 5.6 with Parquet files and demonstrated the lowest SQL compatibility index, at 2.21. This is most likely due to the maturity of its SQL processing, and to interactive SQL for decision support not being a primary requirement of its design. Spark intends processing and manipulation of data to happen programmatically, leveraging its RDDs, rather than within the SQL access itself; this design aligns well with the purpose and intentions of Spark. The Spark performance results were still very respectable overall, outperforming Hive with HiveServer2 and trailing Hive with Tez by an average factor of 1.78x for the 30 queries that Spark SQL was able to execute.
For both Hortonworks and Cloudera customers, leveraging Spark will continue to provide tremendous value for data science projects. These typically begin by fetching various data sets from Hadoop or other data stores into RDDs and the subsequent RDDs derived from them. Programmatic data preparation and cleansing routines, rather than SQL transformations, are applied before calling Spark's machine learning and graph processing routines. While Spark's in-memory processing capabilities are well-documented, it is not intended for non-programmatic iterative and interactive queries that support data analysis and decision support applications.
Closing
Notable Considerations

SQL Performance versus SQL Compatibility

In the 100GB benchmark, Impala had the best performance time in 51 queries over Presto. However, Impala's average response time advantage over Presto was only 2.55 seconds. This small difference may be considered a tradeoff against the 16 additional queries that Presto could run over Impala. At the 1TB benchmark volume, the average response time difference was 404 seconds. Evaluators will need to decide which factor matters more in their environment, and consider that both facets will improve over time in each SQL engine.

Spark SQL Performance

With the growing awareness and adoption of Spark in Hadoop environments, most notably for its in-memory performance, some may find its low SQL performance and compatibility surprising. While Spark SQL was not the highest-performing SQL engine, it is intended for increasing analytic job performance rather than interactive query performance. Keep in mind that this SQL performance benchmark measures the response time to read data from HDFS each time. Spark's resilient distributed dataset (RDD) is designed to let Spark programmers hold data sets returned from executed SQL statements in memory for further processing. This capability increases performance for analytic jobs with repeated, iterative processing, but only for the job able to access the RDD.
Statistics Collection Process

There was a clear distinction in the importance of collecting statistics for cost-based optimizers such as Impala's. In a changing environment where data is added or deleted frequently, SQL performance will steadily degrade over time without regular statistics collection.

Data File Format Strategy

The two predominant file format standards used by these SQL engines are ORC and Parquet. The importance of this consideration usually relates to accessing the same data from other Hadoop and YARN applications without having to duplicate it in Hadoop.
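As a concrete illustration of what committing to a format looks like, the sketch below shows Hive-style DDL for materializing a staging table into each format; the staging table name is a hypothetical stand-in for the benchmark's text-format load tables, and compression property handling can vary by Hive version.

    -- ORC copy (used here by Hive/Tez and by Presto on HDP).
    CREATE TABLE store_sales_orc
    STORED AS ORC
    AS SELECT * FROM store_sales_text;

    -- Parquet copy with SNAPPY compression (used here by Impala and Spark SQL on CDH).
    CREATE TABLE store_sales_parquet
    STORED AS PARQUET
    TBLPROPERTIES ('parquet.compression'='SNAPPY')
    AS SELECT * FROM store_sales_text;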
Configuration

The Teradata Appliance for Hadoop is delivered as a fully integrated system that is ready to plug in and use. It is purpose-built for big data storage, refinement, exploratory analysis and management of multi-structured data. The appliance can hold up to 576TB of uncompressed data per cabinet, with the entire system scaling up to 46PB. The appliance is networked by Teradata's fabric-based computing, a high-throughput BYNET™ V5 on dual 40GB/s InfiniBand interconnects, for fast data exchange between Hadoop nodes on the appliance as well as with Teradata and Teradata Aster appliances. The appliance features Teradata-supported Hadoop software from either Hortonworks or Cloudera on a proven Teradata hardware platform with the latest Intel® processors and enterprise-class storage, all preinstalled into a power-efficient unit. Performance nodes are configured with dual 12-core CPUs, 24 x 1.2TB of storage and 256-512GB of RAM. The performance configuration is optimized for computation, with faster CPUs and higher memory for IO-intensive workloads, and is ideally suited for streaming applications running Spark and Storm and for SQL-on-Hadoop tools such as Presto and Impala.

Dual Intel twelve-core Xeon® processors @ 2.5GHz per node (Hadoop Master, Edge, Balanced Data and Performance Data nodes)
High-throughput BYNET™ V5 on a dual 40GB/s InfiniBand interconnect
Performance Data nodes for Hadoop with 24 x 1.2TB drives: 9.6TB user space
Hortonworks HDP 2.4: ORC non-partitioned files; Parquet files with SNAPPY compression. Engines tested: Hive 1.2.1 / Tez 0.7.0, Presto 0.141t, Spark SQL 1.6.

Cloudera CDH 5.6: ORC non-partitioned files; Parquet files with SNAPPY compression. Engines tested: Impala 2.4, Hive 1.1.0 with HiveServer2, Presto 0.141t, Spark SQL 1.5.
References

TPC Benchmark Organization: TPC-DS (decision support) specification and downloads
Teradata Presto: Teradata's open source Presto
Prestodb.io: Distributed SQL Query Engine for Big Data
Hortonworks.com: HDP distribution of Apache Hadoop
Cloudera.com: CDH distribution and Impala
Parquet.io: Columnar file format for Hadoop

Acknowledgments

Teradata engaged Radiant Advisors to perform a fully independent performance benchmark and analysis of SQL-on-Hadoop configurations widely in use at companies. While Teradata is the sole sponsor of this report, Radiant Advisors independently set up the configurations, testing approach, data generation, benchmark execution, all data collection, and analysis. All findings and analysis are solely those of Radiant Advisors.

Radiant Advisors Team: John O'Brien, Krish Hariharan, Julie Langenkamp, Pravin Gorwade. Special thanks to Teradata Labs for their support during this benchmark.
Join the Conversation

This independent performance benchmark is intended to be part of an open, ongoing conversation to share and learn from each other's experiences and perspectives. Please share your comments online with us at RadiantAdvisors.com or on Twitter with #SQLonHadoop @RadiantAdvisors.
Radiant Advisors Independent Benchmark Report, July 2016

Copyright © 2016 Radiant Advisors. All Rights Reserved. Hadoop, Sqoop, and the Hadoop elephant logo are trademarks of the Apache Software Foundation. All other trademarks, registered trademarks, product names, and company names or logos mentioned in this document, including Radiant Advisors and its affiliated brands and logos, are the property of their respective owners. Reference to any products, services, processes, or other information by trade name, trademark, manufacturer, supplier or otherwise does not constitute or imply endorsement, sponsorship, or recommendation thereof by Radiant Advisors. Complying with all applicable copyright laws is the responsibility of the user. Without limiting the rights under copyright, no part of this document may be reproduced, stored in or introduced into a retrieval system, or transmitted in any form or by any means, or for any purpose, without the express written permission of Radiant Advisors. Radiant Advisors may have patents, patent applications, trademarks, copyrights, or other intellectual property rights covering subject matter in this report. Except as expressly provided in any written license agreement from Radiant Advisors, the furnishing of this document does not give you any license to these patents, trademarks, copyrights, or other intellectual property. The information in this document is subject to change without notice. Radiant Advisors has made every effort to ensure that all statements and information contained in this document are accurate, but accepts no liability for any error or omission in the same.