An A Priori Path to Super-Intelligence

Marcus Abundis
Bön Informatics, Aarau, Switzerland

Abstract—This paper details structural fundaments in information, and then in intelligence, as a way of positing a Super-Intelligence (SI) approach. It explores the task of framing ‘general adaptive logic’ from a low-order (non-anthropic) core — to arrive at a scalable, least-ambiguous and most-general, computationally-generative continuum. This study names four minimal steps needed to develop SI. In the first of those four steps, Shannon Signal Entropy is deconstructed in a priori terms to detail the Signal Literacy needed to support extensible ‘informational intelligence’. A dualist-triune informational continuum is therein posited. Lastly, the three remaining steps in the growth of informational intelligence toward SI are briefly explored, with additional detail referenced in three related papers (6,300 words [5,500 w/o], 9 pages, rev8/2018).

Index Terms—information theory, artificial intelligence, logic, entropy, system of systems

Manuscript received April 4, 2018; revised mm dd, yyyy. Corresponding author: M. Abundis (email: 55mrcs@gmail.com).

I. INTRODUCTION: FRAMING AN A PRIORI VISTA

To infer an a priori Super-Intelligence has three parts. ‘A priori’ suggests most-primitive aspects imagined to underpin intelligence. It implies the study of non-intelligent and non-informational roles, and naming ‘first principles’ for a scalable proto-intelligent course. Conversely, Super-Intelligence points to most-advanced facets that surpass common imagination (un-nameable things). It evokes Arthur C. Clarke’s third law of ‘sufficiently advanced technology indistinguishable from magic’, borders on the fantastic, and is innately creative. An ensuing ‘scientific project’ would seek to join those imagined and unimaginable realms in some empiric manner, for a practical middle ground.

As a scientific notion, skepticism is the expected reaction to such a project. Immediately, hurdles arise that make the project seem impractical. Gaps in-and-between the standard model of physics, quantum mechanics, biology, genomics, evolutionary theory, psychology, astrophysics, and so on, convey cognitive voids among the presumably-linked ‘targets’. Science precludes leaping over such voids. Also, science requires measurable and repeatable roles with ‘controlled variables’, typical of closed/isolated systems. But a Super-Intelligence must abide open systems with chaotic variables, as do humans, in an equal-or-better ‘creative manner’. As such, from here on I refer to Super-Intelligence as HLAI (human-level artificial intelligence), its logical precursor. Still, the above (and other) issues suggest that science per se cannot directly support a project like HLAI.

Despite plain scientific hurdles contra HLAI, science is also typified by an enduring need to reinvent science — or ‘science’ as currently grasped. Hence, HLAI cannot be dismissed outright, but this still leaves us with a question of ‘What way forward?’

To ask and posit ‘What way forward?’ evokes philosophy. But now, philosophic hurdles arise. First, the project calls for a reductive analytic base (a priori), but ends with an integrated phenomenology (structured first-person experience), marking often-opposed schools of thought.

Second, that ‘a priori phenomenology’ requires coeval reconciliation of ontological (origin), epistemic (meaning), objective (material), subjective (relational), and heuristic (learning) roles [1], [2]. To date, work on this front presents a large patchwork of vague, partial, controversial, and opposed views. Third, each such five-part philosophic reconciliation conveys only one prospect. Just as the ‘logic of life’ has varied outputs (diversity), a wide tableau of ‘thinking reconciliations’ for each life form is apparent [3]–[5]. That diversity requires an HLAI that is ‘intelligent about intelligence’ and ‘thoughtful about thinking’, not merely a thinking machine. Fourth, the above select-able diversity evokes Evolution by Natural Selection (EvNS), a key unfinished scientific theory [6], [7]. Lastly, that diversity also marks a need to ‘think like nature’, by framing an equally wide tableau of ‘general adaptations’ (as BIOS) in diverse agents. Each item listed denotes a major philosophic/logical challenge.

Thus, onerous scientific and philosophic hurdles inhibit ‘a way forward’ for HLAI, except for one likely path. The prior diversity, taken as what we know of nature and how nature ‘thinks’, marks that path. For example, at some point humanity may amass enough wherewithal that we hold knowledge-about-knowledge, or a meta-meta-perspective. Conversely, knowledge-about-data marks a meta-perspective, or metadata. The standard model of physics and the periodic table are types of metadata, or collected wisdom on special topics (‘science’). But data that join the standard model with the periodic table, and with genomics, and so on, imply a meta-meta perspective, or a ‘Super-data’ as natural wisdom that bridges ‘onerous hurdles’. Meta-meta implies that a core informational pattern exists across diverse (meta) roles. Biologist Gregory Bateson [8] saw this as a logical necessity, a ‘necessary unity’ and a ‘pattern that connects’ the cosmos, while others hope to ‘mine a computational universe’ [9]. Naming a core informational pattern (a ‘structural fundament’ of HLAI) is this paper’s goal.

In summary, a human sense of information and intelligence surpasses basic tables and the like, to where core patterns (meta-meta roles) are what we now target via HLAI. A ‘converged approach’ takes shared-but-diverse scientific roles as a philosophic base. That trans-disciplinary vista holds ‘coeval a priori functioning’ as a type of scalable-and-selectable



natural informatics — for an HLAI core. Also, its ‘functional phenomenology’, evident as material survival, underpins all ensuing abstract notions of information and intelligence, as they are sustained by that survival. Before I develop this natural informatics, I offer more detail on the current state-of-affairs in advancing HLAI.

II. REVIEW OF LITERATURE: FINDING THE RIGHT QUESTIONS

In seeking a way forward for HLAI, some readers see an urgent ‘safety issue’. The advent of any new tool, especially one as likely-formidable as HLAI, implies new existential risks — ‘summoning the devil’ per Elon Musk and others in the daily press. That risk plainly merits study. But detailing what underpins HLAI helps to clarify that sense of risk better than does unbounded speculation. I thus focus on HLAI fundaments rather than safety, and ultimately anticipate a non-autonomous or ‘reliant on human initiative’ HLAI (covered later), where meeting human-related needs remains the central aim [10].

Next, in approaching HLAI, a key conceptual hurdle is noted by various individuals, across diverse disciplines, as they each confront their own challenges:
• ‘solving intelligence’, Demis Hassabis, Google DeepMind [11],
• ‘de-risking science’, Ed Boyden [12], MIT Media Lab, neurology,
• ‘meaning as fundamental’, Brian Josephson [13], Cambridge University, physics,
• ‘do submarines swim?’, Edsger Dijkstra [14], Eindhoven University, computer science,
• ‘symbol grounding problem’, Stevan Harnad [15], Université du Québec, cognitive science,
• ‘theory of meaning’, Shannon and Weaver [1], information theory, and more.

These varied-but-related views retell the missing five-part philosophic reconciliation, logical first principles, and meta-meta vista noted above. This ‘central dilemma’ also reflects gaps in what we see as data, knowledge, and wisdom, an issue of growing import in our ever-more informational roles [16], [17]. Each individual or discipline has its own view of the matter, but these nominally-diverse logical gaps can be seen as, and reduced to, one key informational lapse. Shannon and Weaver were likely first to name this lapse as a missing ‘theory of meaning’, but it has held many roles over time, with a few named above. Deeper study of the issue marks core differences in how we regard subjective information (qualia, or raw sense data) and objective (quantitative) information, where even basic notions of information become a confused dualist concept [1]. Neither side is ever detailed in relation to the other, where they remain un-reconciled, and drive the cognitive quagmire [18] we have today.

As just one example of that un-reconciled-ness, mathematics may be seen as being ‘purely objective’, omitting subjective roles from its arguments as an intellectual ideal (theoretical mathematics). But mathematics without subjective elemental facts, or primitive ‘initial conditions’, is a fact-free science of little practical use [19], [20]. Only if subjective (S) and


objective (O) facets are joined do predictive roles arise as ‘functionally reconciled’ applied mathematics. If we look for other firm (O)bjective views, the standard model of physics and the periodic table are good candidates. But their rather-recent ‘objective success’ ignores that they arose from a line of subjective elemental observations normalized (functionally reconciled) via experiment and peer review. Only after enough material ‘evidence’ was subjectively discerned and subjectively named, by varied individuals, using diverse experiments, were those models posited and accepted as being innately objective. Objectified-subject (O-S: functionally reconciled) views like the standard model and the periodic table are now so pervasive that we may forget how they arose as subjective notions. Object traits cannot even be implied if not first subjectively sensed, ‘discovered’, or ‘imagined’ by someone, before attempting a ‘sense-making’ hypothesis. But (S)ubject-(O)bject models that ideally depict useful O-S functions are absent. Likewise, if we now target a naturally intelligent HLAI, a (S)ubjective base [21] is needed to underpin later (O)bjective aims. Hence, Shannon and Weaver’s absent ‘theory of meaning’ is herein labeled S-O modeling, to formally name this ‘central dilemma’ and to displace a mix of earlier partial views. Also, future S-O information integration, as a de facto way forward, will presumably amass as HLAI at some point, leaving us to ask ‘When and how will that eventual HLAI arise?’

Before framing a when-and-how for HLAI, I further explore the literature on this front. The need for S-O modeling drives diverse-but-related views, as already noted. On a philosophic level, a most basic view is materialism, claiming that if ‘a thing’ is not objectively defined/verified it does not exist, seeking to remove subjectivity fully from consideration [22]. But this view exhibits subjective naïveté [23], [24]. Philosopher Daniel Dennett [25] is a likely standard-bearer here [26]. He argues qualia (raw sense data) are unreal — ignoring an agent’s real need for functionally differentiating sensoria in EvNS. He also asserts that uniqueness in qualia is illusory — ignoring the need for diversity as select-able options in EvNS. For example, perfectly-cloned agents cannot occupy the same space-time position without collapsing to ‘one unit’, therein driving some manner of perspectival uniqueness in agents, if only as happenstance (Fig. 1). Similarly, identic hard drives (HDD ‘clones’) installed on varied computers, across diverse roles, often hold unique data sets, with HDDs seen as offering unique-but-general informational ‘identities’.

Fig. 1. Space-Time and Perspectival Uniqueness. Two identic agents in subtly varied space-time positions (left) show Clone 2 with an advantage, as Clone 1’s view is partly blocked. Similarly, the right side shows parallax (subtle space-time differences) in binocular vision that yields ‘depth perception’ for more space-time acuity. Space-time skill is central to EvNS as a selected trait that largely defines ‘agency’.



As an alternative to materialism, philosopher David Chalmers [27] claims qualia are beyond all scientific/objective views, often alluding to an ‘information theory’ solution (psychophysical laws), but with no detail ever offered. Chalmers supports this anti-scientific claim using fictional scenarios as thought experiments (zombies, etc.), which make his views more like science fiction than firm critical thought [28]. After Chalmers, others sense a mystical panpsychism where everything is said to be ‘subjectively conscious’, with evolutionary biologist Stuart Kauffman [29], [30] and neurologist Christof Koch [31] as recent converts. Lastly, there is ‘mysterianism’, where some seem to throw their hands up and claim no solution is ever likely due to humanity’s innate cognitive limits [32]. These, and many other views, herein unnamed, offer seemingly endless debate, but little more.

In contrast to endless debate, gains from Shannon’s [33] Signal Entropy still underpin decades of information technology (IT) leaps — in (O)bject roles. This is seen in processor speed gains as Moore’s Law, HDD capacity growth beyond Moore’s Law [34], new data-compression and error-correction schemes, and the like. Those objective gains, in contrast to endless debate, put S-O modeling in a poor light. But Shannon and Weaver [1] still saw Signal Entropy as ‘disappointing and bizarre’, in part due to a missing theory of meaning. Thus, later informational studies still convey a ‘conceptual labyrinth’ of unresolved subjective/semantic issues, even if using a Shannon-based foundation [35].

Despite persistent S-O issues, recent gains in unsupervised machine learning convey new optimism in advancing HLAI [36]–[38]. But those gains entail ‘catastrophic forgetting’ [39], where S-O lapses still obscure detail on the reasons for that success. Also, the functioning of those AI (artificial intelligence) models is rather limited or narrow, despite claims of ‘verging on generality’ [40]. As such, calls for new HLAI thinking arise, while debate continues on many fronts. For example, AI pioneer Geoffrey Hinton wants to ‘throw it all away and start again’ [41], Yann LeCun and Gary Marcus [42] variably agree and argue on the ‘innateness’ of intelligence, common sense, and machine learning, Alison Gopnik [43] wishes to regard AI as child-like, and more. But none of these views offer new ideas. They instead revisit already-known issues, now ‘re-discovered’ for their own interests (as earlier). Thus, HLAI efforts presently remain fringe projects [44], [45], due to a lack of true advances in S-O modeling.

But even HLAI fringe views must be assessed. Foremost are OpenCog and the Artificial General Intelligence (AGI) conferences hosted by mathematician Ben Goertzel. Here, over ten years of effort shows little advance beyond simply expanding the opportunities for AGI study [46]. Also, most AGI work uses a probabilistic approach, echoing more-successful narrow AI models. But EvNS (functional Fit-ness, targeted cause-and-effect) is the standard HLAI must endure, not ‘evolution by natural probability’ — making probabilistic models (alone) an unlikely solution. Probabilities may typify ‘large-number’ aspects of EvNS’s functional diversity, branching events, and the like, but they cannot detail the production and Fit-ness of targeted functions. ‘Generative functional differentiation’ (innate creativity) is thus central to HLAI. Also, AGI often


aims to emulate neurological maps of the human mind [46], [47], where no cohesive maps exist, nor is one likely to arise soon [48]. Computer scientist Edsger Dijkstra [14] saw such ‘mind models’ as flawed thinking, asking ‘Do submarines swim?’ to stress the value of first principles in developing HLAI models. His view is occasionally cited as pointing to an a priori vista that few seem to grasp, where Dijkstra’s challenge remains unmet. Thus, a priori HLAI proposals are rarely sought or seen within AI/AGI circles [49] (email exchange, Joscha Bach, November 2014).

The Threats to Computing Science: Edsger W. Dijkstra, 1984. ‘The Fathers of the field had been pretty confusing: John von Neumann speculated about computers and the human brain in analogies sufficiently wild to be worthy of a medieval thinker and Alan M. Turing thought about criteria to settle the question of whether Machines Can Think, a question of which we now know is about as relevant as the question of whether Submarines Can Swim.’ (EWD 898)

Enduring S-O modeling gaps mean that a priori notions lie at the fringe of AI/AGI fringes. But here, philosopher John Searle [50] was likely first to name ‘biological naturalism’ as a useful approach, with base ontological and epistemic facets framed in subjective and objective roles [2]. Searle’s naturalism retells the earlier-noted five-part reconciliation (sans heuristics), but again, a detailed view is never given. For example, he asserts humans may process qualia, but other biological systems (like a tree) do not, without saying why those systems should differ (personal exchange, 30 April 2014). Next, as a small advance, philosopher Luciano Floridi [35] offers a General Definition of Information (GDI) that partly details semantic roles. But questions remain on GDI’s completeness [51]. Later, astrophysicist Sean Carroll [52] attempts a general reconciliation by assembling notable intellectuals from diverse disciplines for a focused ‘naturalism’ discourse, but with no conclusive result.

Beyond these examples lie other unfinished-but-intriguing views. Marcus Hutter’s [53] Universal Artificial Intelligence top-down model (AIXI) seems highly abstract and non-computable [54]. Pei Wang’s [55] NARS (non-axiomatic reasoning system) takes a bottom-up, experience-grounded view, but with a vague sense of ‘information’. Peter Cheeseman [56] offers a mid-level generative engine (recursively self-improving AI), but with no detail on the primitive elements used as input (no ‘initial conditions’). Lastly, and perhaps most intriguing, Stephen Wolfram [9] posits a bottom-up ‘computational irreducibility’ where cosmic ‘primitives’ drive sense-making roles that compute all information. New versions of Wolfram Language (symbolic discourse language) mark an early effort at this style of S-O modeling. Still, in each case, it remains too early to assess the efficacy of these models . . . as other new HLAI projects continue to arise [57].

The strongest hint to date of a likely a priori vista with first principles comes from neuro-anthropologist Terrence Deacon [58]. He uses a type of multi-state analysis to approach the ‘a priori challenge’ [18], referencing Claude Shannon’s signal



entropy, Boltzmann’s thermodynamic entropy, and Darwinian EvNS as complementary views. His Shannon-Boltzmann-Darwin (SBzD) model offers a converged-scientific (meta-meta) view. But the model’s purely thermodynamic bias makes it irreconcilable with wider physics-based models (email exchange, January 2017). Also, the work is littered with neologisms and impenetrable prose [18], [59], [60]. The model thus lacks breadth and clarity. Beyond Deacon’s work no other models are seen, except for the view posited herein, which roughly tracks Deacon’s view but in a more-broadly reductive manner. Still, the strength of Deacon’s multi-state entropic analysis is that it stipulates a bottom-up approach (minimal explanatory gaps), is innately creative (generative functional differentiation), has a trans-disciplinary core (a must for general AI), and ties to Shannon Signal Entropy and Darwinian EvNS (firm foundations). Lastly, a Shannon-Darwin (multi-state) link implies a way to join ‘well-ordered’ statistics (Shannon, closed/isolated systems) and ‘chaotic’ open systems (Darwin), already noted as essential to advancing HLAI.

In summary, as ‘a way forward’ most of the literature offers confused views with little insight, except in Deacon’s case. His SBzD model echoes the earlier notion of ‘converged science’ as a meta-meta perspective and an a priori base. Of the entailed SBzD facets, Shannon Signal Entropy is widely accepted, and Darwinian EvNS presently offers the best way to typify human-level Fit-ness in open systems. But Boltzmann’s thermodynamic laws and statistical mechanics rely on a partial/isolated system, so it is unclear how Deacon can use them to join innately-closed statistical signal entropy (Shannon) and innately-open functional EvNS (Darwin). Thus, as a next step, this paper posits an alternative Shannon-Bateson-Darwin (SBtD) model. The posited SBtD view evokes Bateson’s [8] sense of ‘necessary unity’ and a ‘pattern that connects’ the cosmos, by detailing multi-state ‘differentiated (entropic) differences’, while also noting Wolfram’s aim to ‘mine a computational universe’. It thus presents a model of ‘generative functional differentiation’ (or Generative Entropy) that joins Signal Entropy and EvNS. Detail on this SBtD approach is as follows:

III. POSITED MODEL: NATURAL INFORMATICS OR ‘THINKING LIKE NATURE’

The noted S-O modeling gaps mean that HLAI is unlikely to arise as a ‘one-step algorithm’. Even if we accept Bateson’s unified vision, diverse sciences must be joined and a five-part philosophic reconciliation must be given, plus S-O implies ‘two tracks’. Moreover, Shannon and Weaver saw three Levels (labeled A, B, and C) of needed analysis, and expected more to be named. Further, this study targets cause-and-effect, of information-and-intelligence, with distinct traits and shared roles. Also, an ‘eventual HLAI’ likely includes: 1) core functioning or ‘agency’ as an initial case, 2) functional variability as a generative case, 3) the mapping of that variability to causal possibilities, and 4) a record of selected effects as ‘aggregating serial increments’ (EvNS, logical depth). Lastly, traversing this simple-to-complex mix requires tracking elemental traits, functional roles, and operative systems, without


mixing levels of abstraction, and without creating self-referential (closed/paradoxical) loops. Each item listed holds distinct representational and computational tests, implying that some type of multi-state analysis is needed for HLAI.

This paper plots a path through that S-O melange by asking four step-wise questions. The questions thus convey a type of serial logic, as an unfolding of ever-more intelligent informational roles and systems. Those four questions are:
• What is Information? (an a priori dualist-triune aspect)
• What is Meaningful Information? (scale-able S-O and O-S Fit-ness)
• How is Adaptive Logic Generated? (modeling ‘Generative Entropy’)
• How is Adaptivity Selectively Optimized? (given many adaptive options . . . )

As background, a concise account of these questions is shown in a video abstract: https://youtu.be/FsztGuSvv5U, with more detail given in three related papers (APPENDIX). Next, the remainder of this paper examines the first of those four questions.

A. Question One: What is Information?

To ask ‘What is Information?’ in a priori terms is to ask ‘What comes before Information?’ Alternatively, one can ask ‘What causes information to arise?’ The paper now focuses on that first question: causal a priori explanations of information. A ‘causal view’ of information evokes a scientific sense of cause-and-effect, where ‘informational effects’ frame the second question: What is Meaningful Information? Causal views are explanatory, but named effects are descriptive in nature. Also, S-O means that one can focus on subject or object roles. Thus, to start, an (O)bjective causal view of information arises in Shannon’s ‘Mathematical Theory of Communication’ where:

1) Well-formed object Sets (Fig. 2) serve to construct ‘a message’: binary I, O; alphabetic a, b, c . . . z; decimal 0, 1, 2 . . . 9; etc. This marks Set differentiation, Bateson-like differentiated (O)bjects, key to creating messages. Without such (O)bject Sets, messaging is impossible.
2) In turn, a message is a Group of (S) Fit-ted (O)bjects. Thus, 126 and 621 are distinct-but-similar Groups, and CAT and ACT show like Group Fits, each denoting a specific value or ‘a message’. Also note here that,
- Group (S) differentiation (Fig. 3) lies apart from,
- Set (O) expansion (versus Fig. 2 differentiation). For example, if 126 and CAT hold original (O)s, 958 and RUG use new (O)s, as innate Set expansion; but,
- S-and-O afford equally-new messages here, as Bateson-like Group S-O differentiation, effecting ‘base (S-O) dualism’ in the creation of new messages.
3) Next, Signal Entropy is the number of Groups possible in an S-O Volume (Fig. 3). For example, a binary Set ‘two-term Volume’ (2²) has four unique S-O Fits (OO, OI, II, IO). An alphabetic three-term Volume has 26³ ‘signs’. Lastly, the ASCII Set [61] used in IT has 128 seven-bit binary (2⁷) characters, each as a select-able message. S-O Volume vari-ability marks



Scale-able differentiation (exponentially vari-able Signal Entropy) and affords more-complex messaging.
4) Lastly, any number of Scale-ably Grouped Objects convey intelligible/select-able messages of many types. This brings us to the point where Shannon starts his analysis, where ‘a message selected at another point’ is sent across a channel. Select-able message differentiation is the first step in Shannon’s (Level A) entropic study. A minimal computational sketch of these Set/Group/Volume roles follows.
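As an illustrative aside (mine, not part of the source analysis), the Set/Group/Volume roles above reduce to a few lines of code. The sketch below is a minimal Python example; the helper names groups, volume, and entropy_bits are hypothetical, and ‘Signal Entropy’ is read here both as the author’s count of Fit-ted Groups and as Shannon’s log2 measure over equiprobable messages.

```python
import math
from itertools import product

def groups(symbol_set, terms):
    """Enumerate every Group (ordered S-O Fit) in a Volume: all
    arrangements of the given length drawn from a well-formed Set."""
    return [''.join(g) for g in product(symbol_set, repeat=terms)]

def volume(symbol_set, terms):
    """Size of the S-O Volume: |Set| ** terms select-able messages."""
    return len(symbol_set) ** terms

def entropy_bits(symbol_set, terms):
    """Shannon entropy of one message in bits, assuming all Groups are
    equiprobable: log2 of the number of select-able messages."""
    return terms * math.log2(len(symbol_set))

print(groups('OI', 2))        # ['OO', 'OI', 'IO', 'II'] -- the 2^2 binary Volume
print(volume('ACT', 3))       # 27 -- the CAT/ACT Volume of Fit-ted Groups (Fig. 3)
print(volume('ACTR', 3))      # 64 -- the same Volume after Set expansion with 'R'
print(entropy_bits('OI', 7))  # 7.0 bits -- the 2^7 = 128 ASCII code space
```

The exponential vari-ability noted in item 3 is visible directly: expanding the Set or lengthening the Group scales the Volume, and with it the entropy, multiplicatively.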

Fig. 2. Object Sets: ‘Signal Literacy’ requires Well-Formed Sets with a given Fit (innate syntax, an O-S role) [35], or normalized (systemic) space-time roles [62], as above. But Un-Formed Sets require ‘informatic work’ (space-time/S-O sense-making of ‘noise’) as proto-literacy. Well-formed Sets thus hold an unexamined Fit assumption as part of any ‘number system’, alphabet, etc. [16]. Also, (O-S) ‘knowledge’ differs from noisy (S-O) exploration of ‘the nature of Fit-ness’. Here, one’s capacity for ambiguous meaning-full (S)emantic work varies, but is central to generatively initiating-and-expanding Signal Literacy. Lastly, the symbolic examples used here also apply to material roles.

Such an a priori study of Signal Entropy is rare, as it examines an underlying ‘logarithmic base’ — in this case, Shannon’s assumed Fit. More often, IT models detail gains in HDD density, data encryption, processor speed, etc. Conversely, this reductive study marks a bottom end for all such upwardly-scale-able views. It names core roles (Signal Literacy) needed for Signal Entropy to arise, with a ‘logical depth’ [63] of no apparent upward limit (symbols ⇒ words ⇒ sentences ⇒ books ⇒ Internet ⇒ etc.).

Signal Literacy ‘incarnates’ Signal Entropy [16]. First, it shows base S-O dualism in three functional steps (Set, Group, and Scale-able differentiation); Signal Entropy shows only O-S roles as a later ‘statistical syntax’. Second, ‘statistical surprise’ in (O)bjects from Well-Formed Sets (Signal Entropy) is less than that in (O)bjects from Un-Formed Sets (‘noise’, infinite statistical surprise, Generic Entropy). But Fig. 2 shows both Sets with equal (O)s. Thus, the one true contrast between ‘signal’ and ‘noise’ is the nature of their innate (S) Fit-ness, shown by Signal Literacy. Conversely, Shannon’s study of (O)bject signals eschews (S)ubjective ‘noise’ and meaning-full Fits. Third, Fit-ness also drives Darwinism in a logically-Fitted functional continuum (↑↓Signal Entropy, logical depth). It marks a Shannon-Darwin link of Bateson-like ‘differentiated-entropic-differences’ — an SBtD model — and a ‘pattern that connects’, with dualist-triune Signal Literacy (a 2-3 Fit) at its core.

A Signal Literacy analysis of Shannon’s logarithmic base shows that blindly accepting an assumed Fit, or emphasizing O over S roles (both Signal Entropy), hinders a precise grasp of signaling in HLAI — where HLAI is signal-and-noise based. Lastly, the symbolic (representational) examples used here do not directly name innate patterns in nature. Thus, further nature-based examples are needed to fortify this view, which is covered in question two, ‘What is Meaningful Information?’.


A Mathematical Theory of Communication: Claude E. Shannon, 1948. The fundamental problem of communication is that of reproducing at one point . . . a message selected at another point. Frequently the messages have meaning; that is they refer to or are correlated according to some system [of functioning]. These semantic aspects . . . are irrelevant to the engineering problem [of communication, but are significant in being functionally] selected from a set of possible messages. (emphasis added)

Fig. 3. Select-able Volume. The CAT/ACT example is shown in all possible A, C, and T (O)bject Group Fits (S): a 3³ S-O Volume, or Signal Entropy of 27 Fit-ted Groups. CAT and ACT are English words, but other S-O Groups mark ‘select-able options’. For example, TAC in French denotes a fencing touché, a rental payment in old British law, or ‘tag’ (‘you’re it’: English equivalent) in Old High German, and more, for an expansive Signal Literacy (or Generative Entropy). Here, each differentiated S-O Group marks a (S)elect-able message option that, in turn, underlies all manner of ‘scale-able linguistic logic’. Lastly, adding a fourth item ‘R’ to A, C, and T conveys Set expansion, as a 4³ S-O Volume with a Signal Entropy of 64 Fit-ted Groups.

B. With this SBtD vista as a foundation, I next detail a structural core for Shannon signals:
• Two coeval roles are seen: (O)bject Sets, used in (S)ubject Groups, jointly compose select-able messages.
• A ‘defect’ (irregularity) in either S-O aspect incites noise in those messages.
• Further, S-and-O Vari-ability affords new messages. For example, CAT and ACT show a shift in relational (S) Fit, with no new terms; but CAR and RAT have a new ‘R’ term (O) — where all such Fits equally convey new messages.
• S-O Vari-ability thus raises noise in a message, and new messages. Conversely, S-O In-Variance marks noise-free (non-adaptive) messages (CAT = CAT ≠ CaT ≠ RAT).
• In-Variant-and-Variant S-O-V (‘triune’) modeling is key to ‘adaptive logic’, creativity, and Generative Entropy, where noise ≈ novel signals: proto-informational disruptions.

S-O In-Variance and Variance mark a paradox in HLAI. Shannon names scale-able statistics in simple-to-complex ‘messaged intelligence’, but is mute on ‘useful noise’, proto-signals, or S-O exploits (Fig. 4) as another ‘intelligence’. Similarly, if some ‘information’ is initially given (common sense), ‘intelligence’ must contend with Variant ‘noise’ (not information alone) to realize advances. The paradox is that some individuals see Shannon signals (reliably objective ‘science’, O-S) as conveying intelligence, while others see chaotic sensibility (vaguely subjective ‘art’, S-O exploits) as a



more intelligent course, where they jointly sustain EvNS and adaptivity. But Bateson-like differentiated-entropic-differences (Signal Literacy) allow us to consider ‘knowledge’ and ‘useful noise’ in a coeval role — if we suitably detail all S-O patterns-and-differences, avoid S-O/O-S confusion, and reject un-examined (S) Fit assumptions. A small sketch of this Variance-as-novel-signal view follows.
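As a brief illustration (my gloss, not the author’s formalism), single-position (O)bject substitutions enumerate one step of S-O Vari-ability, while a toy lexicon stands in for Fit-ted messages; variants then split into novel signals and (proto-signal) noise. The names variants and LEXICON are hypothetical.

```python
SET = 'ACTR'                            # (O)bject Set, after expansion with 'R'
LEXICON = {'CAT', 'ACT', 'CAR', 'RAT'}  # toy stand-in for Fit-ted (S) messages

def variants(message, symbol_set):
    """All single-position (O)bject substitutions of a message:
    one increment of S-O Vari-ability."""
    found = set()
    for i, old in enumerate(message):
        for new in symbol_set:
            if new != old:
                found.add(message[:i] + new + message[i + 1:])
    return found

for v in sorted(variants('CAT', SET)):
    # A variant that Fits the lexicon is a novel signal (CAR, RAT);
    # the rest is 'noise' -- though such noise remains a proto-signal
    # that a later Fit (selection) may still recruit.
    print(v, '->', 'novel signal' if v in LEXICON else 'noise')
```

In-Variance (no substitution) simply reproduces CAT: the noise-free, non-adaptive case.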

Fig. 4. Useful Noise. Machine learning reduces images to primitive elements (edges). Compared to the first image (left), edges convey a ‘type of noise’. The edges are recombined (re-Fit-ted) into new images, akin to the original (left to right). But nothing here conveys adaptivity, as no novel effects arise. Instead, a type of recombinant logic is shown. Recombination applies to adaptation, but true adaptivity requires functional analysis to detail a task’s utility (EvNS). This machine learning model has no such interpretive functional analysis.

Finally, to name ‘What causes Information?’: S-O adaptivity suggests a coeval ‘dualist-triune’ Fit: (S)ubject and (O)bject roles (base dualism), and (V)ari-ability in S-O roles, as an entwined S-O-V vista. This adaptive triune has innate structure [38], [42]: a 2-3 syntax. That 2-3 Fit arose earlier as: S-O, Set, Group, and Scale-able differentiation (Signal Literacy). Further, a natural dualist-triune occurs as classic Hegelian dialectics: thesis + anti-thesis = synthesis. Lastly, this syntax is innately meaning-full (semantic) as a universal informational core: an a priori, coeval, syntactic-and-semantic functional phenomenology — an adaptive informational continuum with innate ‘logical depth’ (. . . OᵥSᵥOᵥ . . . ).

This dualist-triune view supports Bateson’s unified intuition and a ‘pattern that connects’ the cosmos. It also displaces prior dualist notions of body-mind, matter-and-mind, etc., where a current leading concept is a dual-material aspect [64], [65], entailed here as base dualism. Lastly, triune traits show in many parts of the standard model of physics, marking even-further classic/scientific dualist-triune roles [66].

C. In contrast to the above reductive-adaptive analysis, a constructive-non-adaptive (bottom-up) view is also possible. Non-adaptivity typifies a ‘Cosmos-at-large’, with adaptation arising only ‘in-relation-to’ that Cosmos (niches, environs). A Cosmos does not ‘adapt to itself’, but it is informationally dynamic nonetheless, where a non-adaptive dynamic prevails. Adaptive/non-adaptive dynamic differences are examined in ‘What is Meaningful Information?’.


Still, as a simplified constructive-non-adaptive (bottom-up) view of the Cosmos, consider: if the Big Bang presents a valid cosmology, we must also accept a Cosmos (≈ 200,000 years old) of pre-photonic plasma as an early ‘thermodynamic maximum’ or entropic symmetry. That Cosmos is so homogeneous that no information exists; no Bateson-like differentiated-entropic-difference is possible. Instead, an unbounded-and-undifferentiated vastness exists. Next, with time, condensate starts to arise in that vastness (object ontology). As an a priori study, imagine that one such ‘condensed object’ arises. Does that ‘one object’ convey information? The object’s solitude leaves it unobserved (no ‘witness’ to verify its existence) and non-functional (no counterpart for functioning roles). The object’s place is ‘logically possible’, but its solitude inhibits anything informative about its existence or function. The object thus conveys ‘no information’ and instead stands anomalous and inconsequential within a vast nothingness.

But early Cosmic condensate is actually thought to hold many objects (protons, neutrons, etc.) erratically popping in-and-out of existence. Do ‘many chaotic objects’ then convey information? Object existence (ontology) is now more evident, but we cannot usefully differentiate the objects. Unbounded S-O Vari-ability (Pure Noise) inhibits any chance of ‘sense-making signals’, as everything is too irregular. It marks a primitive scene far removed from the sense-able roles we now see as interacting atoms, gas clouds, stars, solar winds, planets, gravity fields, etc.

Later, only when at least two ‘regular’ (O)bjects meet in one ‘regular’ (S)ubjective relational Fit does any type of coeval a priori functioning arise. Such functions convey a type of ‘meaning-full order’, homeostasis, or non-chaotic data (subjective ontology). Thus, a minimal dualist-triune is seen again: two (O)bjects, in one (S)ubjective coeval-functioning Fit (. . . O-⟨S⟩-O . . . ). Such non-adaptive O-S roles are inter-dependent (‘fragile’ [67], [68]); if we remove either O from this minimal scene, the ⟨S⟩ Fit also vanishes, in a coeval manner. And eliminating ⟨S⟩ aspects from either O perforce collapses two objects to one, returning a ‘space-time singularity’, in an equally coeval manner.

In summary, adaptive and non-adaptive dynamics hold a dualist-triune ‘first principle’, as Generative Entropy within Generic Entropy ≈ Signal Literacy, akin to Hegelian dialectics (thesis + anti-thesis = synthesis). Further, this SBtD model initiates a five-part philosophic reconciliation. It names an (O)bject-and-(S)ubject ontology in a coeval a priori role. Next, naming an (O)bject-and-(S)ubject epistemology lies ahead — an innately-creative ‘informational effect’ detailed in ‘What is Meaningful Information?’, the second question.

D. Three Remaining Questions: A Cursory Synopsis and Conclusions

Detail on ‘What is Information?’, the first question, is vital as it sustains all ensuing intelligent roles. This a priori study sets a base from which we can start to ponder HLAI and ever-more (Super) intelligent systems. It re-frames Shannon Signal Entropy (a classic view of information) as a type of ‘S-O Fit-ness’ tied to Darwinian Vari-ability, across ever-more meaning-full and complex informational vistas — an



SBtD (SᵥOᵥ) dualist-triune continuum. Next, the intelligent exploitation (optimization) of that informational continuum remains to be detailed, as an initial HLAI. To frame that ‘optimal HLAI’, the dualist-triune role is next expanded via three remaining questions, thus completing the five-part philosophic reconciliation.

Those three remaining questions are not covered here. As stated earlier, each question merits its own analysis, as each entails distinct challenges. To detail all facets here would greatly increase this paper’s length and muddy the step-wise logic needed for a clear view of HLAI pathways — where, as noted before, confusion already abounds. I thus refer readers to the APPENDIX for more detail. Still, a brief survey of the three remaining questions follows.

The second question, ‘What is Meaningful Information?’, covers the second part of ‘targeted cause-and-effect’. It names another dualist-triune, starting with non-adaptive and adaptive views — empiric base dualism. Non-adaptive materially-direct roles (a de facto O-S Nature) mark an ‘object epistemology’ seen in the standard model of physics and the periodic table. Next, adaptivity (subject epistemology) is split into discrete adaptive code (DNA) and temporally adaptive acts (survival), for a dualist-triune of: materially-direct/non-adaptive roles, and adaptive DNA + adaptive acts. Thus, object-and-subject epistemologies are framed, to match the prior object-and-subject ontologies. Also, the scientific models cited herein add support for a priori Signal Literacy. Lastly, S-O ⇒ O-S select-ability, as a ‘functionally reconciled’ mimicry of O-S Nature (jointly conveying Generative Entropy), is noted as affording further scale-able gains.

The third question, ‘How is Adaptive Logic Generated?’, details adaptive acts or ‘survival heuristics’ as learned/compelled space-time behavior. The dualist-triune named here is basic levers (fulcrum, effort, and load; and three lever classes), with ensuing space-time implications for many life forms and for mechanical systems — a natural ‘general empiric bridge’. General lever logic (adaptive space-time heuristics) is examined, and then extended to the advent of simple machines, and beyond (logical/empiric scale-ability). This heuristic vista completes the prior five-part philosophic reconciliation. Lastly, generative (entropic) differentiation as ‘lever heuristics’ is framed in relation to Signal Entropy and open systems, to further support the above link between Shannon signals and ‘useful noise’ in (Darwinian) open systems.

‘How is Adaptivity Selectively Optimized?’, the fourth question, details Selection Dynamics as opposed dualist-triunes on an evolutionary landscape. It marks classic threefold selection pressure (purifying, divisive, and directional: a 3-2 form), alongside classic agent responses (freeze, flight, and fight: a 3-2 form), yielding a reductive fractal form (a 2-3 triune as a . . . 3-2 – 3-2 . . . continuum). Selection Dynamics are then expanded further to frame a general ‘cognitive psychology’ as progressively more-complex logical depth (cognitive learning), or advancing survival intelligence/skills.

Again, a concise account of all four questions is given in a video abstract at: https://youtu.be/FsztGuSvv5U. Also, papers listed in the APPENDIX (and summarized above) provide added detail. In closing, initial implications of this study are:


1) Some manner of HLAI seems likely in a dualist-triune (S-O-V) computational universe.
2) In that universe, (S)ubject-(O)bject (V)ariation marks ‘generative roles’, with adjacent possibilities [69] and stepping-stones [70] as scale-able adaptive options.
3) For S-O ⇒ O-S optimization, some ‘serial process’ is needed to selectively generate, identify, and reduce those adaptive options, most evident as EvNS.
4) Hence, fully-autonomous HLAI is unlikely, as some type of hands-on processing (trial-and-error) of options is needed, driven by humans or EvNS, but where:
- HLAI has no ‘innate needs’ to drive optimization. It must be programmed to emulate such needs (requiring external human intent).
- HLAI ‘experiential content’, needed to frame such needs, must be programmed (human intent), but more-often arises via general S-O (V)ariation and eons of EvNS programming-and-debugging.
- Erratic (V)ariation inhibits the likelihood of any ‘eternal optimums’, HLAI or other, at many levels.
5) Still, a more-direct functional/predictive HLAI may arise if a more-exact reductive view is found (re the fourth question), beyond the dualist-triune detailed herein.

Despite ‘lingering issues’, just as Shannon signals continue to afford new IT gains, a ‘theory of meaning’ that joins Signal Entropy and EvNS should offer similar gains, in a true computational universe. Augmented human innovation (an SBtD ‘insight engine’) would drive material advances on many fronts, that then afford gains in other areas, including HLAI — just as seen with modern ‘universal computing devices’, or computers. But more research is needed to support that result.

IV. APPENDIX: SUPPLEMENTARY MATERIAL

Title: THE ‘HARD PROBLEM’ OF CONSCIOUSNESS — details flaws in one of the more popular philosophic views, from among the many, diverse philosophic views noted above.
Link: https://issuu.com/mabundis/docs/hardproblem
Abstract: To frame a useful model of information, intelligence, ‘consciousness’, or the like, one must address a claimed Hard Problem (Chalmers, 1996) — the idea that phenomenal roles fall beyond all scientific thought. While the Hard Problem’s veracity is often debated, analogues to this claim arise elsewhere in the literature as a ‘symbol grounding problem’ (Harnad, 1990), ‘solving intelligence’ (Burton-Hill, 2016), Shannon and Weaver’s (1949) ‘theory of meaning’, etc. Thus, the ‘issue of phenomena’, or innate subjectivity, holds sway in many circles as being ‘unsolved’. Also, direct analysis of the Hard Problem is rare, where researchers more often assert that: 1) it is a patently absurd view unworthy of study, or 2) it marks a truly intractable issue defying such study, where neither side offers useful detail. A ‘Hard Problem debate’ thus endures. This essay takes a third approach of directly assessing the Hard Problem’s assertion contra natural selection in the formation of human consciousness. It examines Chalmers’s logic and evidence for this view, taken from his articles over the years. The aim is to frame a case where it then becomes possible to attempt



resolving this ‘issue of phenomena’ (7 pages: 3,400 words).

Title: A GENERAL THEORY OF MEANING: Empiric Informational Fundaments — details the second question noted above, and thus complements a prior introductory paper.
Link: https://issuu.com/mabundis/docs/abundis.tom
Abstract: This paper addresses a lack of meaning in information theory, as noted by Claude Shannon and Warren Weaver. It details science (the standard model of physics, the periodic table, etc.) as meaning-full information, with a bridge named to also map less-precise informational roles. That bridge conveys a natural informatics, or a general theory of meaning, as three informational types, with logical depth: 1) materially-direct/non-adaptive, 2) discretely adaptive, and 3) temporally adaptive. Informational types thus recast conflicts seen in the above roles, with Bateson-like ‘differentiated (entropic) differences’ deriving a dualist-triune informational continuum. This paper extends an earlier a priori study of information and intelligence by detailing natural empiric (scientific) features (9 pages: 6,700 words, rev7/2018).

Title: NATURAL MULTI-STATE COMPUTING — Engineering Evolution: Simple machines and beyond — supports the third question noted above.
Link: https://issuu.com/mabundis/docs/multistate
Abstract: This essay covers adaptive logic in humans and other agents, and complements a related ‘general theory of meaning’ (Abundis, 2018). It names informational roles needed for minimal (space-time) adaptivity as a direct experience, versus the ‘reasoning by analogy’ typical of artificial intelligence. It shows how levers, as a computational trope (adaptive template), typify meaningful adaptive roles for many agents and later afford the advent of simple machines. To develop the model: 1) Three lever classes are shown to compel a natural informatics in diverse agents. 2) Those lever classes are next deconstructed to derive a ‘scalable creativity’. 3) That creative logic is then detailed as holding three entropically generative computational roles. 4) Lastly, that adaptive logic is used to model tool creation. Thus, the analysis frames systemic creativity (natural disruptions and evolution) in various roles (discrete, continuous, and bifurcations) for many agents, on diverse levels, to depict a ‘general adaptive intelligence’ (16 pages: 6,600 words).

Title: SELECTION DYNAMICS AS AN ORIGIN OF REASON: Causes of Cognitive Information — covers the fourth question noted above.
Link: https://issuu.com/mabundis/docs/lgcn.fin.4.15
Abstract: This study explores ‘adaptive cognition’ in relation to agents striving to abide entropic forces (natural selection). It enlarges on a view of Shannon (1948) information theory and a ‘theory of meaning’ (Abundis, 2018) developed elsewhere. The analysis starts by pairing classic selection pressure (purifying, divisive, and directional selection) and agent acts (as flight, freeze, and fight responses), to frame a basic reductive model. It next details ensuing environs-agent exchanges as marking Selection Dynamics, for a ‘general adaptive model’. Selection Dynamics are then shown


in relation to chaos theory, and a fractal-like topology, for an initial computational view. Lastly, the resulting dualist-triune topology is detailed as sustaining many evolutionary and cognitive roles, thus marking an extensible adaptive informational/cultural fundament (13 pages: 5,700 words).

REFERENCES

[1] C. E. Shannon and W. Weaver, The mathematical theory of communication. Urbana, IL: University of Illinois Press, 1949.
[2] J. Searle, “Consciousness in artificial intelligence,” in Talks at Google. Mountain View, CA: Google Inc., November 2015, https://www.youtube.com/watch?v=rHKwIYsPXLg.
[3] T. Nagel, “What is it like to be a bat?” The Philosophical Review, vol. LXXXIII, no. 4, pp. 435–450, 1974.
[4] J. J. Gibson, “The theory of affordances,” in Perceiving, acting, and knowing: Toward an ecological psychology, R. Shaw and J. Bransford, Eds. Hillsdale, NJ: University of Minnesota/Lawrence Erlbaum Associates, 1977.
[5] B. M. Clinchy and H. Gardner, “Frames of mind: The theory of multiple intelligences,” 1984.
[6] E. O. Wilson, “The four great books of Darwin,” in Sackler colloquium: In the light of evolution IV. National Academy of Sciences, 2009, http://sackler.nasmediaonline.org/2009/evo_iv/eo_wilson/eo_wilson.html.
[7] A. Wagner, Arrival of the fittest: Solving evolution’s greatest puzzle. London, UK: One World, 2015.
[8] G. Bateson, Mind and nature: A necessary unity. New York, NY: Dutton, 1979.
[9] S. Wolfram, “A new kind of science: A 15-year view,” http://blog.stephenwolfram.com/2017/05/a-new-kind-of-science-a-15-year-view/, May 2017.
[10] S. Wolfram, “Artificial intelligence & the future of civilization,” in 2015 Wolfram Technology Conference, March 2016, https://youtu.be/cbu_bCQ2Lkg.
[11] C. Burton-Hill, “The superhero of artificial intelligence,” The Guardian, February 2016.
[12] E. Boyden, “Engineering revolutions,” in The 2016 World Economic Forum. Davos, Switzerland: World Economic Forum (WEF), February 2016, http://mcgovern.mit.edu/news/videos/engineering-revolutions-ed-boyden-at-the-2016-world-economic-forum.
[13] B. Josephson, “Biological organisation as the true foundation of reality,” in 66th Lindau Nobel Laureates talks. Cambridge, UK: University of Cambridge, July 2016, http://sms.cam.ac.uk/media/2277379.
[14] E. W. Dijkstra, “The threats to computing science,” in Association for Computing Machinery: South central regional conference. Austin, TX: Association for Computing Machinery, November 1984, http://www.cs.utexas.edu/users/EWD/ewd08xx/EWD898.PDF.
[15] S. Harnad, “The symbol grounding problem,” Physica D, vol. 42, pp. 335–346, 1990.
[16] J. Gleick, The information. New York, NY: Knopf Doubleday Publishing Group, 2011.
[17] L. Floridi, The 4th revolution: How the infosphere is reshaping human reality. Oxford, UK: Oxford University Press, 2016.
[18] D. C. Dennett, “Aching voids and making voids,” The Quarterly Review of Biology, vol. 88, pp. 321–324, December 2013.
[19] R. Feynman, “The messenger lectures: The relation of mathematics and physics,” Cornell University, Ithaca, NY, November 1964, https://www.microsoft.com/en-us/research/project/tuva-richard-feynman/.
[20] J. M. Smith, “Life at the edge of chaos?” The New York Review of Books, vol. 42, pp. 28–30, 1995.
[21] K. O. Stanley, J. Lehman, and L. Soros, “Open-endedness: The last grand challenge you’ve never heard of,” O’Reilly Ideas (What’s on our radar): AI, December 2017.
[22] G. R. Peterson, “Demarcation and the scientific fallacy,” Zygon: Journal of Religion and Science, vol. 38, no. 4, pp. 751–761, December 2003.
[23] K. R. Popper, Conjectures and refutations: The growth of scientific knowledge, 4th ed. London, UK: Routledge & Kegan Paul, 1972.
[24] J. Searle, “The mystery of consciousness: An exchange,” The New York Review of Books, December 1995.
[25] D. C. Dennett, Consciousness explained. Boston, MA: Little, Brown and Co., 1991.
[26] P. A. Murphy, “Chalmers’ naturalistic dualism vs. Dennett’s third-person absolutism,” http://paulaustinmurphypam.blogspot.ch/2015/10/chalmers-naturalistic-dualism-vs.html, October 2015.


[27] D. J. Chalmers, The conscious mind: In search of a fundamental theory. Oxford, UK: Oxford University Press, 1998.
[28] M. Abundis, “The ‘hard problem’ of consciousness,” 2016, https://issuu.com/mabundis/docs/hardproblem.
[29] S. Kauffman, “Beyond the stalemate: Conscious mind-body - quantum mechanics - free will - possible panpsychism - possible interpretation of quantum enigma,” https://arxiv.org/abs/1410.2127, 2014 (last revised 21 Oct 2014, v2).
[30] J. Horgan, “Scientific seeker Stuart Kauffman on free will, God, ESP and other mysteries,” https://blogs.scientificamerican.com/cross-check/scientific-seeker-stuart-kauffman-on-free-will-god-esp-and-other-mysteries/, 2015.
[31] C. Koch, Consciousness: Confessions of a romantic reductionist. Cambridge, MA: MIT Press, 2012.
[32] D. C. Dennett, “The brain and its boundaries,” Times Literary Supplement (London), May 1991.
[33] C. E. Shannon, “A mathematical theory of communication,” Bell System Technical Journal, vol. 27, no. 3, July 1948.
[34] Hitachi Global Storage Technologies (GST), “A brief history of hard drives,” ZDNet, January 2006.
[35] L. Floridi, “Semantic conceptions of information,” in The Stanford Encyclopedia of Philosophy, spring 2017 ed., E. N. Zalta, Ed. Stanford, CA: Metaphysics Research Lab, Stanford University, 2017, https://plato.stanford.edu/archives/spr2017/entries/information-semantic/.
[36] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, pp. 436–444, May 2015.
[37] M. Rosa, “Good AI,” https://www.roadmapinstitute.org, 2017.
[38] Y. LeCun and C. Manning, “Deep learning, structure and innate priors: A discussion between Yann LeCun and Christopher Manning,” Stanford, CA, February 2018, http://www.abigailsee.com/2018/02/21/deep-learning-structure-and-innate-priors.html.
[39] R. M. French, “Catastrophic forgetting in connectionist networks,” Trends in Cognitive Sciences, vol. 3, no. 4, pp. 128–135, 1999.
[40] A. Karpathy, “AlphaGo, in context,” Medium, May 2017, https://medium.com/@karpathy/alphago-in-context-c47718cb95a5.
[41] S. LeVine, “Artificial intelligence pioneer says we need to start over,” AXIOS newsletter, September 2017.
[42] Y. LeCun and G. Marcus, “Does AI need more innate machinery?” New York, NY, October 2017, https://www.youtube.com/watch?v=vdWPQ6iAkT4.
[43] A. Gopnik, “The distinctive intelligence of young children,” in NIPS Symposium: The kinds of intelligence. Long Beach, CA: Neural Information Processing Systems (NIPS), December 2017, http://mdcrosby.com/blog/koi.html.
[44] ICLR, “Workshop track,” in International Conference on Learning Representations (ICLR). ICLR, April 2017; out of nearly 160 ‘speculative proposals’ submitted, only 2 covered HLAI.
[45] G. Marcus, “Is artificial intelligence stuck in a rut?” in Intelligent machines, ser. EmTech DIGITAL. MIT Technology Review, March 2017, https://www.technologyreview.com/s/603945/is-artificial-intelligence-stuck-in-a-rut/.
[46] B. Goertzel, “Toward a grand unified theory of AGI: A survey of approaches,” in The 10th Conference on Artificial General Intelligence. Melbourne, Australia: Artificial General Intelligence Society, August 2017, http://www.agi-conf.org/2017/schedule/.
[47] J. Bach, Principles of synthetic intelligence: PSI: An architecture of motivated cognition. Oxford, UK: Oxford University Press, 2009.
[48] J. E. Laird, C. Lebiere, and P. S. Rosenbloom, “A standard model of the mind,” in AAAI 2017 Fall Symposium. Arlington, VA: Association for the Advancement of Artificial Intelligence (AAAI), November 2017, http://sm.ict.usc.edu.
[49] S. Markan, “Some things I’ve learned from working on AGI for 10+ years,” http://markan.net/agi10years.html, 2018.
[50] J. Searle, “Biological naturalism,” 2004, http://socrates.berkeley.edu/~jsearle/BiologicalNaturalismOct04.doc.
[51] D. Chapman, “General definition of information: GDI (Floridi),” http://www.intropy.co.uk/2010/04/general-definition-of-information-gdi.html, 2010.
[52] S. Carroll, “Moving naturalism forward,” in An interdisciplinary workshop. Stockbridge, MA: California Institute of Technology: Division of Physics, Mathematics, and Astronomy, October 2012, https://www.preposterousuniverse.com/naturalism2012/.
[53] S. Legg and M. Hutter, “Universal intelligence: A definition of machine intelligence,” Minds and Machines, vol. 17, no. 4, pp. 391–444, 2007.
[54] M. Hutter, “Advances in universal artificial intelligence,” in The 10th Conference on Artificial General Intelligence. Melbourne, Australia: Artificial General Intelligence Society, August 2017, http://www.agi-conf.org/2017/schedule/.
[55] P. Wang, “Understanding understanding,” in The 10th Conference on Artificial General Intelligence. Melbourne, Australia: Artificial General Intelligence Society, August 2017, http://www.agi-conf.org/2017/schedule/.
[56] P. Cheeseman, “Recursively self-improving AI,” in The 10th Conference on Artificial General Intelligence. Melbourne, Australia: Artificial General Intelligence Society, August 2017, http://www.agi-conf.org/2017/schedule/.
[57] S. D. Baum, “A survey of artificial general intelligence projects for ethics, risk, and policy,” https://ssrn.com/abstract=3070741, November 2017; working paper 17-1, rev. 12 Nov 2017.
[58] T. W. Deacon, Incomplete nature: How mind emerged from matter. New York, NY: W. W. Norton & Co., 2013.
[59] J. Fodor, “What are trees about?” London Review of Books, vol. 34, no. 10, p. 34, May 2012.
[60] C. McGinn, “Can anything emerge from nothing?” New York Review of Books, June 2012.
[61] C. E. Mackenzie, “Coded character sets, history and development,” in The Systems Programming Series, 1st ed. Boston, MA: Addison-Wesley Publishing Company, Inc., 1980, https://textfiles.meulie.net/bitsaved/Books/Mackenzie_CodedCharSets.pdf.
[62] G. J. Klir, An approach to general systems theory. New York, NY: Van Nostrand Reinhold, 1971.
[63] C. H. Bennett, “Logical depth and physical complexity,” in The Universal Turing Machine: A Half-Century Survey, R. Herken, Ed. Oxford, UK: Oxford University Press, 1988, pp. 227–257.
[64] H. Robinson, “Dualism,” in The Stanford Encyclopedia of Philosophy, winter 2016 ed., E. N. Zalta, Ed. Stanford, CA: Metaphysics Research Lab, Stanford University, 2016; see notes on property dualism.
[65] L. Stubenberg, “Neutral monism,” in The Stanford Encyclopedia of Philosophy, winter 2016 ed., E. N. Zalta, Ed. Metaphysics Research Lab, Stanford University, 2016; see notes on dual aspect.
[66] M. Abundis, “Selection dynamics as an origin of reason: Causes of cognitive information,” 2017, https://issuu.com/mabundis/docs/lgcn.fin.4.15.
[67] A. Danchin, “Innovation & technology: The anti-fragile life of the economy,” https://www.project-syndicate.org/commentary/the-anti-fragile-life-of-the-economy?barrier=accesspaylogl, 2012.
[68] B. Pauker, “Epiphanies from Nassim Nicholas Taleb,” Foreign Policy, October 2012.
[69] S. A. Kauffman, Investigations. New York, NY: Oxford University Press, 2003.
[70] K. O. Stanley and J. Lehman, Why greatness cannot be planned: The myth of the objective. Springer International Publishing, 2015.

