

Artificial Neural Networks (ANN)
Author: Nima Ghasemnezhad Moghaddam
Islamic Azad University, Ilkhchi Branch
E-mail: Nima175@yahoo.com

Overview
Artificial Neural Networks (ANN) were first introduced in the 1940s and have found many applications since then; even so, in the fields of management and engineering they are still considered a young discipline. Neural networks can extract useful, practical information from raw data. They are powerful tools that can recognize different patterns or derive results close to reality from complex and sometimes corrupted information.
The real power of neural networks lies in their ability to learn: from training patterns (matched inputs and outputs), and using various learning algorithms, these networks can identify the relationship between variables.
This article briefly introduces artificial neural networks and discusses the main types of network, how they learn, their advantages and limitations, and how they work.

Introduction

In recent years we have witnessed a steady movement from purely theoretical research toward applied research, particularly in information processing, for problems that either have no known solution or cannot easily be solved. In light of this, growing interest has developed in the theoretical development of model-free intelligent dynamic systems based on processing experimental data, of which the ANN is one. These systems transfer the knowledge or rule hidden behind the data into the structure of the network. They are called intelligent because, through computations on numerical data or examples, they learn general rules; this ability is inspired by the way the human brain and nervous system operate.
The human brain contains millions of nerve cells responsible for storing and processing information. One class of these nerve cells, known as the neuron, makes up only about 10% of the brain's volume. Nerve cells can connect to one another to form enormous networks: each neuron is said to connect to between one thousand and ten thousand other neurons, and the human brain is believed to consist of some 10^11 neurons.



To describe the neural network: the neuron is the basic cellular unit of the brain. It is a simple processing element that receives signals from other neurons through input paths called dendrites and combines them. If this combined input signal is strong enough, the neuron "fires" and sends an output signal along its axon, which connects to the dendrites of other neurons. Figure 1 is a schematic of the different parts of a neuron. Every signal sent along a neuron's dendrite passes through a synapse, or synaptic junction: a very small gap in the dendrite filled with a neurotransmitter fluid. This fluid produces electrical signals and sends them toward the nucleus, or soma, of the neuron. Adjusting the electrical resistance or conductance of this synaptic gap is a very important process; it is this adjustment of conductance that underlies memory retention and learning. When the synaptic strengths of the neurons are adjusted, the brain learns and stores information.

An artificial neural network can be defined as follows:
"A data-processing system made up of a large number of simple, highly interconnected processing elements (the artificial neurons), whose structure is inspired by the cerebral cortex of the brain."
In a classic definition, Haykin says: a neural network is a massive collection of parallel processors with an innate capacity for storing experiential knowledge and putting it to use, and it resembles the brain in at least two respects: (1) it has a stage known as learning; (2) synaptic weights are used to store the knowledge.

Figure 2: Schematic diagram of an artificial neuron

* The weighted sum of the inputs is: I_j = Σ_{i=1}^{n} w_ij x_i

* The activation function then gives the neuron's output: y_j = f(I_j)
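The two relations above can be read directly as code. Below is a minimal Python sketch of a single processing element; the input values, weights, and the choice of a sigmoid activation are illustrative assumptions rather than anything prescribed in the article.

```python
import numpy as np

def artificial_neuron(x, w, activation):
    """One processing element: a weighted sum I_j followed by an activation function."""
    I_j = np.dot(w, x)       # I_j = sum_i w_ij * x_i
    return activation(I_j)   # y_j = f(I_j)

# Illustrative sigmoid activation; other choices are discussed later in the article.
sigmoid = lambda net: 1.0 / (1.0 + np.exp(-net))

x = np.array([0.5, -1.0, 2.0])    # input signals x_1 .. x_n
w = np.array([0.8,  0.3, -0.5])   # synaptic weights, which may be positive or negative
print(artificial_neuron(x, w, sigmoid))
```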

Figure 1: The different parts of a neuron

Artificial neurons
An artificial neuron is a model whose components correspond directly to those of a real neuron. Figure 2 shows an artificial neuron. The input signals are denoted X0, X1, X2, ..., Xn. These signals are continuous variables, not the electrical pulses that occur in the brain. Each input value is modified by a weight (sometimes called a synaptic weight) whose role resembles that of the synaptic junction in a real neuron; depending on how strongly the electrical signal is conducted or resisted, these weights can be positive or negative. The processing element consists of two parts: the first sums the weighted inputs to obtain a quantity called I; the second is a nonlinear filter, usually called the activation function, through which the combined input signal is passed.

These processing elements are usually arranged in regular layers or planes, with either full or random connections between the layers. The input layer receives the input data and presents it to the rest of the network. The topmost layer is the output layer, which gives the network's output in response to a given input; the layers between these two are called the middle or hidden layers. When we say that a network consists of n layers, we count only the hidden layers and the output layer; the input layer is not counted, because its neurons perform no computation. A single-layer network is therefore a network with only an output layer.

Figure 3: Structure of a neural network
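To make this layer arrangement concrete, here is a small sketch of a forward pass through one hidden layer and one output layer. The layer sizes, the random weights, and the use of a sigmoid in both layers are hypothetical choices for illustration.

```python
import numpy as np

sigmoid = lambda net: 1.0 / (1.0 + np.exp(-net))

def forward(x, layers):
    """Propagate an input vector through a list of (weights, activation) layers.
    The input layer itself does no computation, matching the counting convention
    described above (only hidden and output layers are counted)."""
    a = x
    for W, f in layers:
        a = f(W @ a)          # weighted sums for the layer, then its activation
    return a

# A hypothetical network: 3 inputs -> 4 hidden neurons -> 2 output neurons.
rng = np.random.default_rng(0)
layers = [(rng.normal(size=(4, 3)), sigmoid),   # hidden layer
          (rng.normal(size=(2, 4)), sigmoid)]   # output layer
print(forward(np.array([1.0, 0.5, -0.2]), layers))
```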



Advances in neuroscience have given researchers the opportunity to derive mathematical models of neurons in order to characterize their behaviour. The idea goes back to the 1940s, when an abstract model was introduced by McCulloch and Pitts in 1943. Rosenblatt (1958) presented the perceptron learning algorithm. Around the same time, Widrow and Hoff made an important modification to the perceptron learning algorithm, which became known as the Widrow-Hoff rule. Minsky and Papert then demonstrated the limitations of the single-layer neural network model, and Kohonen (1977) developed models of associative memory. In the mid-1980s the backpropagation learning algorithm was presented by Rumelhart, Hinton and Williams (1986), providing a powerful solution for training multilayer neural networks.

The components of a neural network are:

Inputs: the inputs may be the outputs of other layers, or they may arrive in raw form at the first layer as numerical data, literary or technical text, images, or shapes.
Weights: the degree to which input x_i influences the output y is measured by its weight.
Summation function: in single-neuron networks, the summation function largely determines the answer to the problem; in multi-neuron networks, it determines the activity level of neuron j in the inner layers.
Transfer function (activation function): the summation function by itself is clearly not the answer the network is expected to give; the transfer function is therefore an essential element of a neural network. Many different kinds of transfer function exist and are used according to the importance and nature of the problem. The function is chosen by the designer, and the problem's parameters (the weights) are then adjusted according to the chosen learning algorithm.
Types of activation function
The final output of an artificial neuron is computed using an activation function. The activation functions most commonly used in artificial neural networks are shown below.
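A small Python sketch of several activation functions commonly used in ANNs (threshold or step, linear, logistic sigmoid, and hyperbolic tangent), applied to the quantity net described just below; the particular selection and parameter values are illustrative assumptions.

```python
import numpy as np

def step(net, theta=0.0):   # hard-limit (threshold) function: output is 0 or 1
    return np.where(net >= theta, 1.0, 0.0)

def linear(net):            # identity (purely linear) function
    return net

def sigmoid(net):           # logistic sigmoid, output in (0, 1)
    return 1.0 / (1.0 + np.exp(-net))

def tanh(net):              # hyperbolic tangent, output in (-1, 1)
    return np.tanh(net)

net = np.array([-2.0, 0.0, 2.0])
for f in (step, linear, sigmoid, tanh):
    print(f.__name__, f(net))
```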

In all of the relations above, net denotes the neuron's initial (pre-activation) value; by "output" we mean the answer to the problem.
The neuron model
Every neural network consists of a set of interconnected neurons. Each neuron can be pictured as a small component, and each connection between neurons as a link. Every link also carries a weight, which expresses how strongly two neurons influence each other: the larger the weight, the more the two neurons affect one another and the stronger the signal that can pass across that link. In general, the neuron is the smallest information-processing unit and forms the basis of the operation of neural networks.

Types of neural network by feedback

Feedforward networks: in a feedforward network, information flows in a straight line from the input layer (or from a middle, hidden layer) toward the output layer; there is no feedback from the output layer to the other layers, nor are there connections among the neurons within a layer. The simplest of these networks are the perceptron networks.

Figure 4: A feedforward network

Feedback (recurrent) networks: these differ from feedforward networks in that at least one signal is fed back from a neuron to the same neuron, to neurons in the same layer, or to neurons in earlier layers. Recurrent networks are better able to capture behaviour tied to the temporal and dynamic characteristics of systems. The simplest of these networks is the Hopfield network.

Figure 5: A feedback (recurrent) network
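As a minimal sketch of what "at least one feedback signal" means in practice, the layer output computed at one step below is fed back in as part of the next step's input (an Elman-style recurrence; the sizes, weights and inputs are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(1)
W_in = rng.normal(size=(3, 2))   # input -> hidden weights
W_fb = rng.normal(size=(3, 3))   # hidden -> hidden feedback weights

h = np.zeros(3)                  # the fed-back state, initially zero
for x in (np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])):
    # the previous hidden output is fed back and combined with the new input
    h = np.tanh(W_in @ x + W_fb @ h)
    print(h)
```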



The perceptron
One type of neural network is the perceptron, a computational model created under that name by Frank Rosenblatt of the Cornell Aeronautical Laboratory. Perceptrons exist in single-layer and multilayer forms. A single-layer perceptron can only classify linearly separable problems; for more complex problems a larger number of layers must be used.
The most common form of neural network is the multilayer perceptron (MLP). A multilayer perceptron:
1. is made up of a number of inputs;
2. has one or more middle (hidden) layers;
3. uses linear combination functions in the input layers;
4. usually uses sigmoid activation functions in the middle (hidden) layers;
5. has a number of outputs with various activation functions;
6. has connections between the input layer and the first hidden layer, between the hidden layers, and between the last hidden layer and the output layer.
A perceptron neuron has the form shown in Figure 4.
The Hopfield network
The Hopfield network is a multi-loop feedback system, i.e. a feedback (recurrent) network. It is concerned with the amount and volume of information absorbed and with associative memory: it is said that the stored memory can be retrieved from only a small amount of information, much as the brain works during recall. We remember people by their hair, eyes, voice and other cues.
Associative memory and the ability to solve optimization problems are the two major advantages of this type of network. Associative memory means that we state some of the features of something and the system can show us its other features. Hopfield showed that such a network moves toward a stable state and converges.
The figure below shows the interconnected structure of the Hopfield network. Every unit is connected to every other unit, and no layer takes precedence over another. Moreover, the connections are bidirectional and symmetric: a weight is assigned to each connection, and it is the same in both directions.
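A minimal sketch of the Hopfield idea just described: patterns are stored in a symmetric weight matrix with no self-connections (Hebbian outer products), and a corrupted cue is repeatedly updated until it settles into a stored pattern. The two patterns and the corrupted cue below are made up for illustration.

```python
import numpy as np

patterns = np.array([[ 1, -1,  1, -1,  1, -1],
                     [ 1,  1,  1, -1, -1, -1]])        # stored +/-1 patterns

# Hebbian storage: symmetric weights, zero diagonal (no self-connections).
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0.0)

def recall(x, steps=10):
    """Associative recall: repeatedly update the state until it stabilizes."""
    x = x.copy()
    for _ in range(steps):                  # synchronous updates, for brevity
        x = np.where(W @ x >= 0, 1, -1)
    return x

cue = np.array([1, -1, 1, 1, 1, -1])        # first pattern with one element flipped
print(recall(cue))                          # settles back to the stored pattern
```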

RBF (Radial Basis Function) networks

RBF networks are feedforward networks, but with only a single middle (hidden) layer. An RBF network:
- has a number of inputs;
- has exactly one middle (hidden) layer;
- uses a radial (basis) combination function in the middle layer;
- uses an exponential or softmax activation function in the hidden layer; with the exponential choice, the RBF network is a Gaussian network;

- has a number of outputs whose activation functions are chosen according to the problem;
- has a feedforward connection between the input and output layers.
The Kohonen neural network
Self-organizing feature map networks (self-organizing, Kohonen or SOFM networks) are quite different from the other kinds of network. Whereas the other networks learn in a supervised fashion, SOFM networks are designed for unsupervised learning (Patterson, 1996).
At first glance this seems strange: how can the network learn without being given outputs? The answer is that an SOFM network tries to learn the structure of the data. The first likely use is exploratory data analysis, and the second is the discovery of new things. An SOFM network has only two layers: an input layer and an output layer.
The Kohonen network is one type of competitive network. It consists of only two layers: an input layer and a competitive output layer. Every neuron in the input layer is connected to the neurons of the competitive layer; in addition, each neuron in the competitive layer may be connected to all of the other competitive neurons.
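A minimal sketch of the competitive idea behind the Kohonen network: each competitive-layer neuron has a weight vector, the neuron whose weights lie closest to the current input wins, and the winner's weights are nudged toward that input. For brevity the neighbourhood function of a full SOFM is omitted, and the data and learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
weights = rng.random((4, 2))   # 4 competitive neurons, each with a 2-dimensional weight vector
lr = 0.5                       # learning rate

def train_step(x):
    dists = np.linalg.norm(weights - x, axis=1)
    winner = np.argmin(dists)                      # competition: the closest neuron wins
    weights[winner] += lr * (x - weights[winner])  # move the winner toward the input
    return winner

data = rng.random((50, 2))     # unlabeled inputs: no target outputs are needed
for x in data:
    train_step(x)
print(weights)                 # each row has drifted toward a region of the input data
```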

Learning
Learning is the process of adjusting the connection weights in an artificial neural network so that, when the input layer receives a stimulus vector, the network produces the desired output vector as its response. The types of learning are:
Supervised learning
In supervised learning, when an input is applied to the network, the network's answer is compared with the target answer we have specified for it; the learning error is then computed and used to adjust the network's parameters. Perceptron learning, the delta rule and backpropagation learning are examples of supervised learning.
Reinforcement learning
A special kind of supervised learning in which the system is trained by being told how correct its answers are, but not what the correct answer is; in effect, this method grades the system's responses.
Unsupervised learning
This type of learning gives the system no target information. In unsupervised learning the aim is not for the network to learn the correct answer; rather, the network groups the responses, and the result depends on the network's ability to organize itself. For example, suppose you want to build a system that divides its inputs into six groups; how the inputs are grouped is left to the system's own learning. Hopfield-network learning, learning based on Hebb's principles, and the self-organizing map are examples of unsupervised learning.
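To make the supervised scheme concrete, here is a sketch of the delta rule on a single linear neuron: the network's answer is compared with the target, the error is computed, and the weights are adjusted. The training pairs and learning rate are a made-up example.

```python
import numpy as np

# Made-up training pairs whose hidden rule is: target = 2*x1 - x2.
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
t = np.array([2.0, -1.0, 1.0, 3.0])

w, lr = np.zeros(2), 0.1
for epoch in range(200):
    for x, target in zip(X, t):
        y = w @ x               # the network's answer
        error = target - y      # compared with the target answer
        w += lr * error * x     # delta rule: adjust the weights from the error
print(w)                        # approaches the underlying rule [2, -1]
```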




Advantages and characteristics of artificial neural networks
- Because of their parallel processing, neural networks operate at high speed. Parallel processing of information in the brain means that activities such as seeing, hearing and touching can each be carried out independently and simultaneously. Computers can also be made fast enough to perform seeing, touching, thinking and so on serially, one after another, and because computers are so fast we get the impression that everything is happening at once.
- Neural networks have the potential to solve problems that are difficult or impossible to simulate by logical, rule-based or other methods.
- Like the human brain, neural networks learn and adapt to their environment continuously. This means that if a network has been trained for a particular situation and a small change occurs in its environment, a brief additional training is enough to make it effective under the new conditions as well.
- In neural networks, the failure of some of the neurons to work correctly does not bring down the whole system, and correct decisions can still be made.
- The approach can give reasonable answers for data under uncertainty, whether the data are fuzzy or incomplete and contaminated with noise (data containing errors).

Limitations of artificial neural networks

- Artificial neural networks cannot explain the logic and rules behind their results, and proving that their results are correct is very difficult.
- Neural-network computation usually requires large amounts of data for training and testing the model.
- In general, neural networks are not effective for certain classes of problem; for example, they are not suitable for solving problems or processing data where a rigorously reasoned, rule-based method is required.

A simple description of how a neural system works

The task of a neural network is to learn, in much the same way that a young child learns.

Learning in the commonly used neural networks is supervised. Parents show a child pictures of different animals and tell it the name of each one. Suppose we focus on one animal, say the dog. The child sees pictures of many kinds of dog and, alongside the input information (pictures and sounds), is told for each example whether or not it is a "dog". Without being told how, the child's brain analyzes the input information and arrives at findings about each of the input parameters, such as colour, size, sound, and whether the animal has paws, hooves or horns. After a while the child will be able to recognize a "new kind" of dog that it has never seen before.
The number of cells in the input layer depends on the number of inputs. In practice we try to include every parameter that affects the answer. It should be kept in mind, however, that useless input information makes the network's job harder: although a neural network is robust to noise (data containing errors), if the noise level is too high the network may not be able to reach the lowest possible error (zero).
The number of nodes in the output layer depends on the prediction we want to make. For example, if the network is to predict whether quality control has been carried out on a product, then during the learning stage each product's data is accompanied by a column containing a zero or a one: zero means quality control was not carried out, one means it was. A single cell in the output layer is then enough; its being active means one (quality control carried out) and its being off means zero (no quality control).
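As a small hypothetical illustration of this output coding: each training row carries a 0/1 target for the single output neuron, and at prediction time the neuron's activation is mapped back to 0 or 1 with a threshold. The 0.5 threshold and the example values are assumptions, not taken from the article.

```python
import numpy as np

# Hypothetical training targets for the single output neuron:
# 1 = quality control carried out, 0 = not carried out.
targets = np.array([1, 0, 1, 1, 0])

def output_to_decision(y, threshold=0.5):
    """Map the output neuron's activation (e.g. a sigmoid value in (0, 1))
    back to the 0/1 meaning it was trained with."""
    return 1 if y >= threshold else 0

print(output_to_decision(0.83))   # -> 1: quality control carried out
print(output_to_decision(0.12))   # -> 0: quality control not carried out
```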

Some applications of artificial neural networks in management and industry

Classification and pattern recognition: these networks are designed to classify different kinds of pattern and tell them apart. Examples include:
- classifying out-of-control points in quality control;
- separating and classifying the opinions of experts from those of the general public in a decision support system (DSS);
- optimal classification of machinery.
Prediction: these networks are trained so that, by learning from and retaining experience, they can predict the future. Examples include:
- networks for predicting oil prices and the stock market;
- prediction networks for inventory control, quality control and maintenance planning.
Modelling: such networks are designed for and widely used in production planning, job-shop scheduling and TPS problems, where they are able to search for the global optimum in planning and similar tasks.

Table 1: Fundamental differences between conventional computing methods and artificial neural networks

| Characteristic | Conventional computing (including expert systems) | Artificial neural networks |
| Processing method | Sequential | Parallel |
| Functions | Logical (left-brained) | Intuition and gestalt (right-brained) |
| Learning method | By rules (didactically) | By example (Socratically) |
| Typical applications | Accounting, word processing, mathematics, digital communications | Sensor processing; recognition of speech, handwriting and patterns |



Optimization:
- in control systems;
- in management systems and in the allocation and sharing of resources;
- in financial systems, where neural networks (especially recurrent networks) can be used.

Differences between neural networks and conventional computational methods and expert systems

Artificial neural networks have great power for processing and analyzing information, but it should not be assumed that they can be used to solve every computational problem. Conventional computational methods remain the best choice for a well-defined class of problems, such as accounting, inventory keeping and numerical computation based on explicit formulas. Table 1 shows the fundamental differences between the two computational approaches.

Steps in designing an artificial neural network

Step 1. Designing the network architecture
This step involves determining the number of layers in the network, the number of neurons in each layer, whether or not the network is recurrent, and so on, according to the type of problem (for example, recurrent networks are used mostly for dynamic problems, while feedforward perceptron networks are used for nonlinear mappings).
Note that the number of neurons in the input layer is dictated by the statement of the problem under study; in other words, it is not the designer's choice but depends on how the problem is posed. The number of neurons in the output layer depends on the kind of answer required; if the answer is a single number, for example, one neuron is enough. The number of hidden layers and of neurons in each hidden layer is chosen by the user, although for most problems one to three hidden layers suffice. There is no practical rule for estimating the number of hidden-layer neurons, so trial and error (during training) is used to reach the desired mean error.

Step 2. Choosing the transfer function
A transfer function can be used to produce a particular form of output: it maps a wide range of input values to a specific output value or range; for example, any output value can be mapped to the binary values 0 and 1. Many different functions of this kind are used in ANNs, but the most widely used is the S-shaped sigmoid transfer function, which can be written as f(net) = 1 / (1 + e^(-net)) (Figure 6).
It has been claimed that the firing frequency of a natural neuron behaves like this function. The main reasons for using it, however, are that it is approximately linear, increasing and differentiable, and can be expressed in closed form; taking its derivative is simple, and it compresses the input range (-∞, +∞) into the output range [0, 1].

Step 3. Training the network
Learning algorithms are the procedures by which the network's weights are adjusted and tuned. The aim of training is for the network to learn the rule of the task, so that after training it produces the appropriate output for every input. More than a hundred learning algorithms have been devised so far; the most important of them were mentioned above.
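Putting the three steps together, here is a minimal sketch of Step 3 for a one-hidden-layer perceptron trained with backpropagation on the classic XOR problem (a problem a single-layer perceptron cannot separate). The layer sizes, learning rate, epoch count and random initialization are illustrative choices, not prescriptions from the article.

```python
import numpy as np

def sigmoid(net):
    return 1.0 / (1.0 + np.exp(-net))

# XOR training patterns: inputs X and target outputs T.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden (4 hidden neurons)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output
lr = 0.5

for epoch in range(20000):
    # Forward pass: Step 1's architecture with Step 2's sigmoid transfer functions.
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)
    # Backward pass (Step 3): propagate the output error back and adjust all weights.
    dY = (Y - T) * Y * (1 - Y)
    dH = (dY @ W2.T) * H * (1 - H)
    W2 -= lr * H.T @ dY;  b2 -= lr * dY.sum(axis=0)
    W1 -= lr * X.T @ dH;  b1 -= lr * dH.sum(axis=0)

print(np.round(Y, 2))   # should end up close to the XOR targets 0, 1, 1, 0
```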

Figure 7: How neural networks operate

Conclusion
Neural networks attempt to capture the essential properties of nerve cells and of their connections to one another; a computer is then typically programmed to simulate these properties. Because our knowledge of nerve cells is incomplete and our computing power is limited, our models are necessarily crude and imperfect idealizations of real networks of nerve cells. Even so, artificial neural networks have proved very useful in solving complex problems that other methods cannot handle, and their applications grow day by day. Using neural networks, we can therefore bring computers ever closer to simulating human beings, with the aim of handing over to them tasks that are repetitive, time-consuming, or, given human progress, no longer worthy of human effort.

Figure 6: The transfer function



