Authors
Anatoli Gorchetchnikov, Massimiliano Versace, Heather Ames, Ben Chandler, Jasmin Léveillé, Gennady Livitz, Ennio Mingolla, Greg Snider, Rick Amerson, Dick Carter, Hisham Abdalla, Muhammad Shakeel Qureshi
Abstract
Realizing adaptive brain functions subserving perception, cognition, and motor behavior on biological temporal and spatial scales remains out of reach for even the fastest computers. Newly introduced memristive hardware approaches open the opportunity to implement dense, low-power synaptic memories of up to 10^15 bits per square centimeter. Memristors have the unique property of "remembering" the past history of their stimulation in their resistive state, and they do not require power to maintain their memory, making them ideal candidates for implementing large arrays of plastic synapses that support learning in neural models. Over the past decades, many learning laws have been proposed in the literature to explain how neural activity shapes synaptic connections to support adaptive behavior. To ensure an optimal implementation of a large variety of learning laws in hardware, some general and easily parameterized form of learning law must be designed. This general-form learning equation would allow instantiation of multiple learning laws through different parameterizations, without rewiring the hardware. This paper characterizes a subset of local learning laws amenable to implementation in memristive hardware. The analyzed laws belong to four broad classes: Hebb rule derivatives with various methods for gating learning and decay; threshold rule variations, including the covariance and BCM families; input-reconstruction-based learning rules; and explicit temporal trace-based rules.
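The idea of a single parameterized update that specializes to several of the rule families named above can be illustrated with a short sketch. The functional form, coefficient names (`a`, `b`, `c`), and default values below are hypothetical choices for illustration, not the paper's actual general-form equation:

```python
import numpy as np

def general_update(w, x, y, lr=0.01, a=0.0, b=0.0, c=0.0, theta=0.0):
    """One step of a hypothetical parameterized local learning law:

        dw = lr * ( a * x * y                 # Hebbian correlation term
                  + b * x * y * (y - theta)   # BCM-style threshold term
                  + c * y * (x - w) )         # instar / input-reconstruction term

    w: synaptic weight vector, x: presynaptic activity vector,
    y: scalar postsynaptic activity. Each term uses only locally
    available quantities, as required for memristive implementation.
    """
    dw = lr * (a * x * y + b * x * y * (y - theta) + c * y * (x - w))
    return w + dw

# Different parameterizations select different rule families
# without changing the update machinery:
w = np.array([0.5, 0.2])
x = np.array([1.0, 0.0])
y = 1.0

w_hebb = general_update(w, x, y, a=1.0)                  # plain Hebb
w_bcm = general_update(w, x, y, b=1.0, theta=0.5)        # BCM-like
w_instar = general_update(w, x, y, c=1.0)                # instar-like
```

Temporal trace-based rules would additionally require low-pass-filtered copies of `x` or `y` as state, which this minimal sketch omits.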