
Applied Reinforcement Learning with Python

With OpenAI Gym, Tensorflow, and Keras

Taweh Beysolow II

San Francisco, CA, USA

ISBN-13 (pbk): 978-1-4842-5126-3

ISBN-13 (electronic): 978-1-4842-5127-0

https://doi.org/10.1007/978-1-4842-5127-0

Copyright © 2019 by Taweh Beysolow II

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

Trademarked names, logos, and images may appear in this book. Rather than use a trademark symbol with every occurrence of a trademarked name, logo, or image we use the names, logos, and images only in an editorial fashion and to the benefit of the trademark owner, with no intention of infringement of the trademark.

The use in this publication of trade names, trademarks, service marks, and similar terms, even if they are not identified as such, is not to be taken as an expression of opinion as to whether or not they are subject to proprietary rights.

While the advice and information in this book are believed to be true and accurate at the date of publication, neither the authors nor the editors nor the publisher can accept any legal responsibility for any errors or omissions that may be made. The publisher makes no warranty, express or implied, with respect to the material contained herein.

Managing Director, Apress Media LLC: Welmoed Spahr

Acquisitions Editor: Celestin Suresh John

Development Editor: Rita Fernando

Coordinating Editor: Divya Modi

Cover designed by eStudioCalamar

Cover image designed by Freepik (www.freepik.com)

Distributed to the book trade worldwide by Springer Science+Business Media New York, 233 Spring Street, 6th Floor, New York, NY 10013. Phone 1-800-SPRINGER, fax (201) 348-4505, e-mail orders-ny@springer-sbm.com, or visit www.springeronline.com. Apress Media, LLC is a California LLC and the sole member (owner) is Springer Science + Business Media Finance Inc (SSBM Finance Inc). SSBM Finance Inc is a Delaware corporation.

For information on translations, please e-mail rights@apress.com, or visit http://www.apress.com/rights-permissions.

Apress titles may be purchased in bulk for academic, corporate, or promotional use. eBook versions and licenses are also available for most titles. For more information, reference our Print and eBook Bulk Sales web page at http://www.apress.com/bulk-sales.

Any source code or other supplementary material referenced by the author in this book is available to readers on GitHub via the book's product page, located at www.apress.com/978-1-4842-5126-3. For more detailed information, please visit http://www.apress.com/source-code.

Printed on acid-free paper

This book is dedicated to my friends and family who supported me through the most difficult of times for the past decade. They have enabled me to be the person I am capable of being when operating at my best. Without you, I would not have the ability to continue living as happily as I am.

About the Author

Taweh Beysolow II is a data scientist and author currently based in the United States. He has a bachelor of science in economics from St. John's University and a master of science in applied statistics from Fordham University. After successfully exiting the start-up he co-founded, he is now a Director at Industry Capital, a San Francisco–based private equity firm, where he helps lead the cryptocurrency and blockchain platforms.

About the Technical Reviewer

Santanu Pattanayak currently works at GE Digital as a Staff Data Scientist and is the author of the deep learning book Pro Deep Learning with TensorFlow (Apress, 2017). He has 8 years of experience in the data analytics/data science field and also has a background in development and database technologies. Prior to joining GE, Santanu worked at companies such as RBS, Capgemini, and IBM. He graduated with a degree in electrical engineering from Jadavpur University, Kolkata, and is an avid math enthusiast. Santanu is currently pursuing a master's degree in data science from the Indian Institute of Technology (IIT), Hyderabad. He also devotes his time to data science hackathons and Kaggle competitions, where he ranks within the top 500 globally. Santanu was born and brought up in West Bengal, India, and currently resides in Bangalore, India, with his wife.

Acknowledgments

I would like to thank Santanu, Divya, Celestin, and Rita. Without you, this book would not be nearly as much of a success as it will be. Secondarily, I would like to thank my family and friends for their continued encouragement and support. Life would not be worth living without them.

Introduction

It is a pleasure to return for a third title with Apress! This text will be the most complex of those I have written, but it will be a worthwhile addition to every data scientist's and engineer's library. The field of reinforcement learning has undergone significant change in the past couple of years, and it is worthwhile for everyone excited about artificial intelligence to immerse themselves in it.

As the frontier of artificial intelligence research, this will be an excellent starting point for familiarizing yourself with the state of the field as well as the most commonly used techniques. From this point, it is my hope that you will feel empowered to continue your own research and innovate in your own respective fields.

CHAPTER 1 Introduction to Reinforcement Learning

To those returning from my previous books, Introduction to Deep Learning Using R [1] and Applied Natural Language Processing with Python [2], it is a pleasure to have you as readers again. To those who are new, welcome! Over the past year, there has been a continued proliferation and development of Deep Learning packages and techniques that are revolutionizing various industries. One of the most exciting portions of this field, without a doubt, is Reinforcement Learning (RL). This is often what underlies a lot of generalized AI applications, such as software that learns to play video games or chess. The benefit of reinforcement learning is that the agent can familiarize itself with a large range of tasks, assuming that the problems can be modeled within a framework containing actions, an environment, and one or more agents. Given that assumption, the range of problems can span from solving simple games, to more complex 3D games, to teaching self-driving cars how to pick up and drop off passengers in a variety of different places, to teaching a robotic arm how to grasp objects and place them on top of a kitchen counter.

[1] New York: Apress, 2018.

[2] New York: Apress, 2017.

The implications of well-trained and well-deployed RL algorithms are huge, as they seek to drive artificial intelligence beyond some of the narrow AI applications spoken about in prior texts I have written. No longer is an algorithm simply predicting a target or label; instead, it is manipulating an agent in an environment, and that agent has a set of actions it can choose from to achieve a goal/reward. Examples of firms and organizations that devote much time to researching Reinforcement Learning are DeepMind and OpenAI, whose breakthroughs in the field are among the leading solutions. First, however, let us give a brief overview of the history of the field itself.

History of Reinforcement Learning

Reinforcement Learning is, in some sense, a rebranding of optimal control, a concept extending from control theory. Optimal control has its origins in the 1950s and 1960s, where it was used to describe the problem of finding the "control" law needed to achieve some "optimal" criterion. Typically, we define an optimal control problem as a set of differential equations; these equations then define a path toward values that minimize the error function. The core of optimal control is the culmination of Richard Bellman's work, specifically that of dynamic programming. Developed in the 1950s, dynamic programming is an optimization method that emphasizes solving a large problem by breaking it down into smaller, easier-to-solve components. It is also considered the only feasible method of solving stochastic optimal control problems, and some moreover consider all of optimal control, in general, to be reinforcement learning.


Bellman's most notable contribution to optimal control is that of the Hamilton-Jacobi-Bellman (HJB) equation:

$$\frac{\partial V}{\partial t}(x, t) + \min_{u} \left\{ \nabla V(x, t) \cdot F(x, u) + C(x, u) \right\} = 0,$$

$$\text{s.t.} \quad V(x, T) = D(x(T)),$$

where ∂V/∂t(x, t) = the partial derivative of V with respect to the time variable t; a · b denotes the dot product; V(x, t) = the Bellman value function (an unknown scalar), i.e., the cost incurred from starting in state x at time t and controlling the system optimally until time T; C = the scalar cost rate function; D = the final utility state function; x(t) = the system state vector, with x(0) assumed given; and u(t) = the control vector, for 0 ≤ t ≤ T.

The solution yielded by this equation is the value function, or the minimum cost for a given dynamic system. The HJB equation is the standard method by which one solves an optimal control problem. Furthermore, dynamic programming is generally the only feasible method for solving stochastic optimal control problems. One of the problems that dynamic programming was developed to help solve is the Markov decision process (MDP).
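Before moving on, it may help to see dynamic programming applied to a small sequential decision problem. The following is a minimal sketch of value iteration on a toy two-state, two-action problem; all transition probabilities and rewards here are invented purely for illustration.

import numpy as np

# Toy problem: 2 states, 2 actions (numbers are hypothetical).
# P[a][s][s'] = probability of moving from state s to s' under action a.
P = np.array([[[0.9, 0.1],
               [0.4, 0.6]],
              [[0.2, 0.8],
               [0.1, 0.9]]])
# R[a][s] = expected immediate reward for taking action a in state s.
R = np.array([[1.0, 0.0],
              [0.5, 2.0]])
gamma = 0.9  # discount factor

# Value iteration: repeatedly apply the one-step Bellman backup,
# breaking the long-horizon problem into smaller subproblems.
V = np.zeros(2)
for _ in range(1000):
    # Q[a][s] = R[a][s] + gamma * sum over s' of P[a][s][s'] * V[s']
    Q = R + gamma * (P @ V)
    V_new = Q.max(axis=0)
    if np.abs(V_new - V).max() < 1e-8:
        break
    V = V_new

print("Optimal state values:", V)
print("Optimal action per state:", Q.argmax(axis=0))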

MDPs and Their Relation to Reinforcement Learning

We describe an MDP as a discrete-time stochastic control process. Specifically, we define a discrete-time stochastic process as a random process in which the index variable takes on a set of discrete, or specific, values (in contrast to continuous values). MDPs are specifically useful for situations in which outcomes are partially affected by participants in the process, but the process also exhibits some degree of randomness. MDPs and dynamic programming thus become the basis of reinforcement learning theory.


Plainly stated, we assume, based on the Markov property, that the future is independent of the past given the present. In addition, the current state is considered sufficient if it gives us the same description of the future as if we had the entirety of the historical information. This in essence means that the current state is the only piece of information that is relevant and that all historical information is no longer necessary. Mathematically, a state is said to have the Markov property iff

$$\mathbb{P}\left[S_{t+1} \mid S_t\right] = \mathbb{P}\left[S_{t+1} \mid S_1, \ldots, S_t\right]$$

Markov processes themselves are considered to be memoryless, in that they make random transitions from state to state. Furthermore, we consider them to be a tuple (S, P) on a state space S, where states change via a transition function P, defined as the following:

$$\mathcal{P}_{ss'} = \mathbb{P}\left[S_{t+1} = s' \mid S_t = s\right]$$

where s = the current Markov state and s′ = the next state.

This transition function describes a probability distribution over the entirety of the possible states that the agent can transition to. Finally, we have a reward that we receive from moving from one state to another, which we define mathematically as the following:

$$G_t = R_{t+1} + \gamma R_{t+2} + \gamma^2 R_{t+3} + \cdots = \sum_{k=0}^{\infty} \gamma^k R_{t+k+1}$$

where γ = the discount factor, γ ∈ [0, 1]; G_t = the total discounted reward; and R = the reward function.

We therefore define a Markov reward process (MRP) tuple as (S, P, R, γ).
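As a quick illustration of the total discounted reward just defined, the following minimal sketch computes G_t for a hypothetical reward sequence (the rewards and discount factor are invented for illustration):

# Total discounted reward G_t for a hypothetical sequence of
# rewards R_{t+1}, R_{t+2}, R_{t+3}, R_{t+4}.
rewards = [1.0, 0.0, 2.0, 1.0]
gamma = 0.9  # discount factor, in [0, 1]

# G_t = sum over k of gamma^k * R_{t+k+1}
G_t = sum(gamma**k * r for k, r in enumerate(rewards))
print(G_t)  # 1.0 + 0.9*0.0 + 0.81*2.0 + 0.729*1.0 = 3.349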

With all of these formulae now described, the image in Figure 1-1 is an example of a Markov decision process visualized.

Figure 1-1 shows how an agent can, with varying probability, move from one state to another, receiving a reward. Ideally, we would learn to choose the sequence of transitions that accumulates the most reward in a given episode before failure, given the parameters of the environment. This, in essence, is a very basic description of reinforcement learning.

Another important component of the development of Reinforcement Learning was trial-and-error learning, which was one method of studying animal behavior. Most specifically, it has proven useful for understanding the basic reward and punishment mechanisms that "reinforce" different behaviors. The words "Reinforcement Learning," however, would not appear until the 1960s. During this period, the idea of the "credit-assignment problem" (CAP) was introduced, specifically by Marvin Minsky. Minsky was a cognitive scientist who devoted much of his lifetime to artificial intelligence, as seen in his book Perceptrons (1969) and the paper in which he describes the credit assignment problem, "Steps Toward Artificial Intelligence" (1961). The CAP asks how one distributes "credit" for success with respect to all of the decisions that were made in achieving that success.

Figure 1-1. Markov Decision Process


Many reinforcement learning algorithms are directly devoted to solving this precise problem. With this being stated, however, trial-and-error learning gradually became less popular, as neural network methods (and supervised learning in general), such as the innovations put forward by Bernard Widrow and Ted Hoff, took up most of the interest within the field of AI. A resurgence of interest in the field came in the 1980s, when temporal difference (TD) learning truly took wind, along with the development of Q learning.

TD learning specifically was influenced by, ironically, another aspect of animal psychology that Minsky pointed out as being important. It comes from the idea of two stimuli: a primary reinforcer that becomes paired with a secondary reinforcer and subsequently influences behavior. TD learning itself, however, was largely developed by Richard S. Sutton, considered to be one of the most influential figures in the field of RL; his doctoral thesis introduced the idea of temporal credit assignment. This refers to how rewards, particularly in very granular state-action spaces, can be delayed. For example, winning a game of chess requires many actions before one has achieved the "reward" of winning the game, so reward signals have little effect on temporally distant states. Temporal credit assignment addresses how to reward these granular actions in such a way that meaningfully affects temporally distant states. Q learning, named for the "Q" function that yields the reward, builds on some of these innovations and focuses on finite Markov decision processes.
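To make the temporal difference idea concrete, the following is a minimal sketch of the classic TD(0) state-value update; the states, learning rate, and sample transition are all hypothetical:

# TD(0): nudge the value estimate of the current state toward the
# observed reward plus the discounted value of the next state.
values = {"s0": 0.0, "s1": 0.0}  # hypothetical state-value table
alpha, gamma = 0.1, 0.9          # learning rate, discount factor

def td0_update(state, reward, next_state):
    # The TD error measures how far off the current estimate was.
    td_error = reward + gamma * values[next_state] - values[state]
    values[state] += alpha * td_error

# One hypothetical transition: s0 -> s1 with reward 1.0
td0_update("s0", 1.0, "s1")
print(values["s0"])  # 0.1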

Q learning brings us to the present day, where further improvements on reinforcement learning are continually being made and represent the bleeding edge of AI. With this overview complete, however, let us discuss more specifically what readers can expect to learn.

Reinforcement Learning Algorithms and RL Frameworks

Reinforcement learning is analogous, and very similar, to the domain of supervised learning within traditional machine learning, although there are key differences. In supervised learning, there is an objective answer that we train the model to predict correctly, whether a class label or a particular value, based on the input features of a given observation. Features are analogous to the vectors within a given state of an environment, which we feed to the reinforcement learning algorithm, typically either as a series of states or individually from one state to the next. The main difference, however, is that there is not necessarily one "answer" that solves the particular problem; there are possibly multiple ways by which a reinforcement learning algorithm could successfully solve it. In this instance, we obviously want to choose the answer that we can arrive at quickest and that simultaneously solves the problem in as efficient a manner as possible. This is precisely where our choice of model becomes critical.

In the prior overview of the history of RL, we introduced several theorems, which you will be walked through in detail in the following chapters. However, this being an applied text, theory must be supplied alongside examples. As such, we will spend a significant amount of time in this text discussing the RL framework OpenAI Gym and how it interfaces with different Deep Learning frameworks. OpenAI Gym is a framework that allows us to easily deploy, compare, and test Reinforcement Learning algorithms. It also has a great degree of flexibility, in that we can utilize Deep Learning methods alongside OpenAI Gym, which we will do in our various proofs of concept. The following shows some simple example code that utilizes the package, along with the plot that shows the video yielded from the training process (Figure 1-2).

import gym

def cartpole():
    # Initialize the environment in which our algorithm sits
    environment = gym.make('CartPole-v1')
    environment.reset()
    for _ in range(50):
        environment.render()
        # Sample a random action from the environment's action space
        action = environment.action_space.sample()
        observation, reward, done, info = environment.step(action)
        print("Step {}:".format(_))
        print("action: {}".format(action))
        print("observation: {}".format(observation))
        print("reward: {}".format(reward))
        print("done: {}".format(done))
        print("info: {}".format(info))

When reviewing the code, we notice that when working with gym, we must initialize an environment in which our algorithm sits. Although it is common to work with the environments provided by the package, we can also create our own environments for custom tasks (like video games not provided by gym); a minimal sketch of such a custom environment follows the variable list below. Moving forward, however, let us discuss the other variables worth noting, as shown in the terminal output that follows.

Figure 1-2. Cart Pole Video Game

action: 1

observation: [-0.02488139  0.00808876  0.0432061   0.02440099]

reward: 1.0

done: False

info: {}

The variables can be broken down as follows:

• Action – Refers to action taken by the agent within an environment that subsequently yields a reward

• Reward – Yielded to the agent. Indicates the quality of action with respect to accomplishing some goal

• Observation – Yielded by the action; refers to the state of the environment after the action has been performed

• Done – Boolean that indicates whether the environment needs to be reset

• Info – Dictionary with miscellaneous information for debugging
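Tying these variables back to the custom environments mentioned above: a custom task is exposed to gym by subclassing gym.Env and returning exactly these values from step(). The following is a minimal sketch using the classic gym API shown in this book; the environment itself (a toy counting task) is invented for illustration.

import gym
import numpy as np
from gym import spaces

class CountToTenEnv(gym.Env):
    """Hypothetical toy environment: the agent is rewarded for
    incrementing a counter until it reaches 10."""

    def __init__(self):
        self.action_space = spaces.Discrete(2)  # 0 = do nothing, 1 = increment
        self.observation_space = spaces.Box(
            low=0, high=10, shape=(1,), dtype=np.float32)
        self.counter = 0

    def reset(self):
        self.counter = 0
        return np.array([self.counter], dtype=np.float32)

    def step(self, action):
        if action == 1:
            self.counter += 1
        observation = np.array([self.counter], dtype=np.float32)
        reward = 1.0 if action == 1 else 0.0
        done = self.counter >= 10  # environment needs a reset
        info = {}                  # miscellaneous debugging information
        return observation, reward, done, info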

The process flow that describes the actions is shown in Figure 1-3.

Figure 1-3. Process Flow of RL Algorithm and Environment

To provide more context, Figure 1-2 shows a cart and pole video game, where the objective is to balance the pole on the cart such that the pole never tips over. As such, a reasonable objective would be to train some DL or ML algorithm to do this. We will tackle this particular problem later in the book, however; the purpose of this section is just to briefly introduce OpenAI Gym.

Q Learning

We briefly discussed Q learning in the introduction; however, it is worthwhile to highlight the significant portion of this text that we will utilize to discuss this topic. Q learning is characterized by the fact that there is some policy, which informs an agent of the actions to take in different scenarios. While it does not require a model, we can use one, and it is most often applied to finite Markov decision processes. Specifically, the variants we will tackle in this text are Q learning, Deep Q Learning (DQL), and Double Q Learning (Figure 1-4).

Figure 1-4. Q Learning Flow Chart

We will discuss this more in depth in the chapters that specifically reference these techniques; however, Q learning and Deep Q Learning each have respective advantages depending on the complexity of the problem, while both often suffer from similar downfalls.
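As a preview of the tabular form of the algorithm discussed in those chapters, the following is a minimal sketch of the standard Q learning update rule; the table sizes, hyperparameters, and sample transition are hypothetical:

import numpy as np

# Hypothetical tabular Q function: 5 states x 2 actions.
Q = np.zeros((5, 2))
alpha, gamma = 0.1, 0.9  # learning rate, discount factor

def q_update(state, action, reward, next_state):
    # Standard Q learning: move Q(s, a) toward the reward plus the
    # discounted value of the best action in the next state.
    target = reward + gamma * Q[next_state].max()
    Q[state, action] += alpha * (target - Q[state, action])

# One hypothetical transition: state 0, action 1, reward 1.0, next state 2
q_update(0, 1, 1.0, 2)
print(Q[0, 1])  # 0.1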

Actor-Critic Models

The most advanced of the models we will be tackling in this book are the Actor-Critic models, namely A2C and A3C. These stand for Advantage Actor-Critic and Asynchronous Advantage Actor-Critic, respectively. While the two are virtually the same, the difference is that the latter has multiple models working alongside each other, updating the parameters independently, while the former updates the parameters for all of the models simultaneously. These models update on a more granular basis (action to action) rather than in an episodic manner, as many other Reinforcement Learning algorithms do. Figure 1-5 shows an example of the Actor-Critic models visualized.


Figure 1-5. Actor-Critic Models Visualized
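To make the "advantage" in these model names concrete, the following is a minimal sketch of the one-step advantage estimate that weights the actor's policy update; the critic here is a hypothetical lookup table standing in for a learned value function:

# One-step advantage estimate used by Advantage Actor-Critic methods:
# A(s, a) = r + gamma * V(s') - V(s)
# The critic supplies V; here V is a hypothetical lookup table.
V = {"s0": 0.5, "s1": 1.0}  # hypothetical critic values
gamma = 0.9

def advantage(reward, state, next_state):
    # How much better the observed step was than the critic expected
    return reward + gamma * V[next_state] - V[state]

# Hypothetical transition: s0 -> s1 with reward 1.0
print(advantage(1.0, "s0", "s1"))  # 1.0 + 0.9*1.0 - 0.5 = 1.4

The actor then scales the gradient step for the taken action by this quantity; in A3C, multiple workers compute such updates asynchronously.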

Applications of Reinforcement Learning

After the reader has been thoroughly introduced to the concepts of reinforcement learning, we will tackle multiple problems, with the focus on showing the reader how to deploy solutions that we will train and utilize in cloud environments.

Classic Control Problems

Because the field of optimal control has been around for roughly the past 60 years, there are a handful of problems, often referenced in other reinforcement learning literature, that we will tackle first. One of them is the cart pole problem, referenced in Figure 1-2, a game in which the user is required to balance a pole on a cart using the optimal set of actions. Another is shown in Figure 1-6, called Frozen Lake, in which the agent learns how to cross a frozen lake without stepping on the weak ice that would cause it to fall through.

Figure 1-6. Frozen Lake Visualized

Super Mario Bros.

One of the most beloved video games of all time turns out to be one of the best ways to display how reinforcement learning can be applied to virtual environments. With the help of the py_nes library, we are able to emulate Super Mario Bros. (Figure 1-7) and then utilize the data from the game to train a model to play a level. We will focus on one level exclusively and will be utilizing AWS resources for this application, giving readers an opportunity to gain experience with that workflow.


Doom

A classic reinforcement learning example that we will apply here is learning to play a simple level of the video game Doom (Figure 1-8).

Originally released in the 1990s on the PC, the focus of this video game is to successfully kill all of the demons and/or enemies you face while making it through the entirety of a level. This makes for an excellent application of Deep Q Learning, given the scope of actions and the packages available, among other helpful attributes.

Figure 1-7. Super Mario Bros.

Reinforcement-Based Market Making

A common strategy for different proprietary trading firms is to make money by providing liquidity to market participants, standing ready to buy and sell an asset at any given price. While there are established techniques for this strategy, it is an excellent arena in which to apply reinforcement learning, as the objectives are relatively straightforward and it is a data-rich field. We will be working with limit order book data from Lobster, a web site that contains a large amount of excellent order book data for experiments such as this. In Figure 1-9, we can see an example of what an order book looks like.

Figure 1-8. Doom Screenshot


Figure 1-9. Limit Order Book

Sonic the Hedgehog

Another classic video game on which we will utilize different models is Sonic the Hedgehog (Figure 1-10). In this particular chapter, however, we will walk the reader through the process of creating their own environment from scratch, wrapping it with OpenAI Gym and custom software, and then training their own Reinforcement Learning algorithm to solve the level. This will again utilize AWS resources for training, piggybacking off of the same processes used in the other video game examples, specifically Super Mario Bros.


Figure 1-10. Sonic the Hedgehog

Conclusion

The purpose of this text is to familiarize readers with how to apply Reinforcement Learning in the various contexts in which they work. Readers should be familiar with Deep Learning frameworks such as Tensorflow and Keras, which we will use to build and deploy many of the Deep Learning models used in conjunction with RL. While we will take time to explain reinforcement learning theory, and some theory that overlaps with Deep Learning may be explained as well, the majority of this text is dedicated to discussing the theory and application of RL. With that being said, let us begin by discussing the basics of Reinforcement Learning in depth.
