LONG SHORT-TERM MEMORY
Neural Computation 9(8):1735-1780, 1997

Sepp Hochreiter
Fakultät für Informatik, Technische Universität München, 80290 München, Germany
hochreit@informatik.tu-muenchen.de
http://www7.informatik.tu-muenchen.de/~hochreit

Jürgen Schmidhuber
IDSIA, Corso Elvezia 36, 6900 Lugano, Switzerland
juergen@idsia.ch
http://www.idsia.ch/~juergen

Abstract

Learning to store information over extended time intervals via recurrent backpropagation takes a very long time, mostly due to insufficient, decaying error backflow. We briefly review Hochreiter's 1991 analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called "Long Short-Term Memory" (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete time steps by enforcing constant error flow through "constant error carrousels" within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with RTRL, BPTT, Recurrent Cascade-Correlation, Elman nets, and Neural Sequence Chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long time lag tasks that have never been solved by previous recurrent network algorithms.
1 INTRODUCTION
Recurrent networks can in principle use their feedback connections to store representations of recent input events in form of activations ("short-term memory", as opposed to "long-term memory" embodied by slowly changing weights). This is potentially significant for many applications, including speech processing, non-Markovian control, and music composition (e.g., Mozer 1992). The most widely used algorithms for learning what to put in short-term memory, however, take too much time or do not work well at all, especially when minimal time lags between inputs and corresponding teacher signals are long. Although theoretically fascinating, existing methods do not provide clear practical advantages over, say, backprop in feedforward nets with limited time windows. This paper will review an analysis of the problem and suggest a remedy.

The problem. With conventional "Back-Propagation Through Time" (BPTT, e.g., Williams and Zipser 1992, Werbos 1988) or "Real-Time Recurrent Learning" (RTRL, e.g., Robinson and Fallside 1987), error signals "flowing backwards in time" tend to either (1) blow up or (2) vanish: the temporal evolution of the backpropagated error exponentially depends on the size of the weights (Hochreiter 1991). Case (1) may lead to oscillating weights, while in case (2) learning to bridge long time lags takes a prohibitive amount of time, or does not work at all (see Section 3).

The remedy. This paper presents "Long Short-Term Memory" (LSTM), a novel recurrent network architecture in conjunction with an appropriate gradient-based learning algorithm. LSTM is designed to overcome these error back-flow problems. It can learn to bridge time intervals in excess of 1000 steps even in case of noisy, incompressible input sequences, without loss of short time lag capabilities. This is achieved by an efficient, gradient-based algorithm for an architecture enforcing constant (thus neither exploding nor vanishing) error flow through internal states of special units (provided the gradient computation is truncated at certain architecture-specific points; this does not affect long-term error flow, though).

Outline of paper. Section 2 will briefly review previous work. Section 3 begins with an outline of the detailed analysis of vanishing errors due to Hochreiter (1991). It will then introduce a naive approach to constant error backprop for didactic purposes, and highlight its problems concerning information storage and retrieval. These problems will lead to the LSTM architecture as described in Section 4. Section 5 will present numerous experiments and comparisons with competing methods. LSTM outperforms them, and also learns to solve complex, artificial tasks no other recurrent net algorithm has solved. Section 6 will discuss LSTM's limitations and advantages. The appendix contains a detailed description of the algorithm (A.1), and explicit error flow formulae (A.2).
2 PREVIOUS WORK
This section will focus on recurrent nets with time-varying inputs (as opposed to nets with stationary inputs and fixpoint-based gradient calculations, e.g., Almeida 1987, Pineda 1987).

Gradient-descent variants. The approaches of Elman (1988), Fahlman (1991), Williams (1989), Schmidhuber (1992a), Pearlmutter (1989), and many of the related algorithms in Pearlmutter's comprehensive overview (1995) suffer from the same problems as BPTT and RTRL (see Sections 1 and 3).

Time-delays. Other methods that seem practical for short time lags only are Time-Delay Neural Networks (Lang et al. 1990) and Plate's method (Plate 1993), which updates unit activations based on a weighted sum of old activations (see also de Vries and Principe 1991). Lin et al. (1995) propose variants of time-delay networks called NARX networks.

Time constants. To deal with long time lags, Mozer (1992) uses time constants influencing changes of unit activations (de Vries and Principe's above-mentioned approach (1991) may in fact be viewed as a mixture of TDNN and time constants). For long time lags, however, the time constants need external fine tuning (Mozer 1992). Sun et al.'s alternative approach (1993) updates the activation of a recurrent unit by adding the old activation and the (scaled) current net input. The net input, however, tends to perturb the stored information, which makes long-term storage impractical.

Ring's approach. Ring (1993) also proposed a method for bridging long time lags. Whenever a unit in his network receives conflicting error signals, he adds a higher order unit influencing appropriate connections. Although his approach can sometimes be extremely fast, to bridge a time lag involving 100 steps may require the addition of 100 units. Also, Ring's net does not generalize to unseen lag durations.

Bengio et al.'s approaches. Bengio et al. (1994) investigate methods such as simulated annealing, multi-grid random search, time-weighted pseudo-Newton optimization, and discrete error propagation. Their "latch" and "2-sequence" problems are very similar to problem 3a with minimal time lag 100 (see Experiment 3). Bengio and Frasconi (1994) also propose an EM approach for propagating targets. With n so-called "state networks", at a given time, their system can be in one of only n different states. See also the beginning of Section 5. But to solve continuous problems such as the "adding problem" (Section 5.4), their system would require an unacceptable number of states (i.e., state networks).

Kalman filters. Puskorius and Feldkamp (1994) use Kalman filter techniques to improve recurrent net performance. Since they use "a derivative discount factor imposed to decay exponentially the effects of past dynamic derivatives," there is no reason to believe that their Kalman Filter Trained Recurrent Networks will be useful for very long minimal time lags.

Second order nets. We will see that LSTM uses multiplicative units (MUs) to protect error flow from unwanted perturbations. It is not the first recurrent net method using MUs, though. For instance, Watrous and Kuhn (1992) use MUs in second order nets. Some differences to LSTM are: (1) Watrous and Kuhn's architecture does not enforce constant error flow and is not designed to solve long time lag problems. (2) It has fully connected second-order sigma-pi units, while the LSTM architecture's MUs are used only to gate access to constant error flow. (3) Watrous and Kuhn's algorithm costs O(W^2) operations per time step, ours only O(W), where W is the number of weights. See also Miller and Giles (1993) for additional work on MUs.

Simple weight guessing. To avoid the long time lag problems of gradient-based approaches, we may simply randomly initialize all network weights until the resulting net happens to classify all training sequences correctly. In fact, recently we discovered (Schmidhuber and Hochreiter 1996, Hochreiter and Schmidhuber 1996, 1997) that simple weight guessing solves many of the problems in (Bengio 1994, Bengio and Frasconi 1994, Miller and Giles 1993, Lin et al. 1995) faster than the algorithms proposed therein. This does not mean that weight guessing is a good algorithm. It just means that the problems are very simple. More realistic tasks require either many free parameters (e.g., input weights) or high weight precision (e.g., for continuous-valued parameters), such that guessing becomes completely infeasible.

Adaptive sequence chunkers. Schmidhuber's hierarchical chunker systems (1992b, 1993) do have a capability to bridge arbitrary time lags, but only if there is local predictability across the subsequences causing the time lags (see also Mozer 1992). For instance, in his postdoctoral thesis (1993), Schmidhuber uses hierarchical recurrent nets to rapidly solve certain grammar learning tasks involving minimal time lags in excess of 1000 steps. The performance of chunker systems, however, deteriorates as the noise level increases and the input sequences become less compressible. LSTM does not suffer from this problem.
3 CONSTANT ERROR BACKPROP
3.1 EXPONENTIALLY DECAYING ERROR
Conventional BPTT (e.g., Williams and Zipser 1992). Output unit k's target at time t is denoted by $d_k(t)$. Using mean squared error, k's error signal is

    \vartheta_k(t) = f_k'(net_k(t)) \, (d_k(t) - y^k(t)),

where

    y^i(t) = f_i(net_i(t))

is the activation of a non-input unit i with differentiable activation function $f_i$,

    net_i(t) = \sum_j w_{ij} \, y^j(t-1)

is unit i's current net input, and $w_{ij}$ is the weight on the connection from unit j to i. Some non-output unit j's backpropagated error signal is

    \vartheta_j(t) = f_j'(net_j(t)) \sum_i w_{ij} \, \vartheta_i(t+1).

The corresponding contribution to $w_{jl}$'s total weight update is $\alpha \, \vartheta_j(t) \, y^l(t-1)$, where $\alpha$ is the learning rate, and l stands for an arbitrary unit connected to unit j.
Outline of Hochreiter's analysis (1991, page 19-21). Suppose we have a fully connected net whose non-input unit indices range from 1 to n. Let us focus on local error flow from unit u to unit v (later we will see that the analysis immediately extends to global error flow). The error occurring at an arbitrary unit u at time step t is propagated "back into time" for q time steps, to an arbitrary unit v. This will scale the error by the following factor:

    \frac{\partial \vartheta_v(t-q)}{\partial \vartheta_u(t)} =
    \begin{cases}
      f_v'(net_v(t-1)) \, w_{uv} & q = 1 \\
      f_v'(net_v(t-q)) \sum_{l=1}^{n} \frac{\partial \vartheta_l(t-q+1)}{\partial \vartheta_u(t)} \, w_{lv} & q > 1
    \end{cases}     (1)

With $l_q = v$ and $l_0 = u$, we obtain:

    \frac{\partial \vartheta_v(t-q)}{\partial \vartheta_u(t)} =
    \sum_{l_1=1}^{n} \cdots \sum_{l_{q-1}=1}^{n} \prod_{m=1}^{q} f_{l_m}'(net_{l_m}(t-m)) \, w_{l_m l_{m-1}}     (2)

(proof by induction). The sum of the $n^{q-1}$ terms $\prod_{m=1}^{q} f_{l_m}'(net_{l_m}(t-m)) \, w_{l_m l_{m-1}}$ determines the total error back flow (note that since the summation terms may have different signs, increasing the number of units n does not necessarily increase error flow).
Intuitive explanation of equation (2). If

    |f_{l_m}'(net_{l_m}(t-m)) \, w_{l_m l_{m-1}}| > 1.0

for all m (as can happen, e.g., with linear $f_{l_m}$), then the largest product increases exponentially with q. That is, the error blows up, and conflicting error signals arriving at unit v can lead to oscillating weights and unstable learning (for error blow-ups or bifurcations see also Pineda 1988, Baldi and Pineda 1991, Doya 1992). On the other hand, if

    |f_{l_m}'(net_{l_m}(t-m)) \, w_{l_m l_{m-1}}| < 1.0

for all m, then the largest product decreases exponentially with q. That is, the error vanishes, and nothing can be learned in acceptable time.

If $f_{l_m}$ is the logistic sigmoid function, then the maximal value of $f_{l_m}'$ is 0.25. If $y^{l_{m-1}}$ is constant and not equal to zero, then $|f_{l_m}'(net_{l_m}) \, w_{l_m l_{m-1}}|$ takes on maximal values where

    w_{l_m l_{m-1}} = \frac{1}{y^{l_{m-1}}} \coth\left(\frac{1}{2} \, net_{l_m}\right),

goes to zero for $|w_{l_m l_{m-1}}| \to \infty$, and is less than 1.0 for $|w_{l_m l_{m-1}}| < 4.0$ (e.g., if the absolute maximal weight value $w_{max}$ is smaller than 4.0). Hence with conventional logistic sigmoid activation functions, the error flow tends to vanish as long as the weights have absolute values below 4.0, especially in the beginning of the training phase. In general the use of larger initial weights will not help though: as shown above, for $|w_{l_m l_{m-1}}| \to \infty$ the relevant derivative goes to zero "faster" than the absolute weight can grow (also, some weights will have to change their signs by crossing zero). Likewise, increasing the learning rate does not help either; it will not change the ratio of long-range error flow and short-range error flow. BPTT is too sensitive to recent distractions. (A very similar, more recent analysis was presented by Bengio et al. 1994.)
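For illustration (this numerical sketch is not part of the original analysis; the chosen net input 0.0 and the weight values 3.0 and 6.0 are arbitrary), the following Python lines multiply the per-step factors $f'(net) \, w$ along a single error path and show the exponential dependence on q described above:

    import math

    def logistic_prime(net):
        # derivative of the logistic sigmoid f(x) = 1 / (1 + exp(-x)); its maximum is 0.25 at net = 0
        y = 1.0 / (1.0 + math.exp(-net))
        return y * (1.0 - y)

    def path_scaling_factor(weight, net, q):
        # product over q steps of f'(net) * w along one error path, cf. one term of equation (2)
        return (logistic_prime(net) * weight) ** q

    for q in (1, 5, 10, 50):
        # |f'(net) * w| = 0.75 < 1.0: the backpropagated error vanishes exponentially with q
        print(q, path_scaling_factor(weight=3.0, net=0.0, q=q))
        # |f'(net) * w| = 1.50 > 1.0: the backpropagated error blows up exponentially with q
        print(q, path_scaling_factor(weight=6.0, net=0.0, q=q))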
Global error flow. The local error flow analysis above immediately shows that global error flow vanishes, too. To see this, compute

    \sum_{u:\ u\ \text{output unit}} \frac{\partial \vartheta_v(t-q)}{\partial \vartheta_u(t)} .
Weak upper bound for scaling factor. The following, slightly extended vanishing error analysis also takes n, the number of units, into account. For q > 1, formula (2) can be rewritten as

    (W_u^T)^T \, F'(t-1) \prod_{m=2}^{q-1} \left( W \, F'(t-m) \right) W_v \, f_v'(net_v(t-q)),

where the weight matrix W is defined by $[W]_{ij} := w_{ij}$, v's outgoing weight vector $W_v$ is defined by $[W_v]_i := [W]_{iv} = w_{iv}$, u's incoming weight vector $W_u^T$ is defined by $[W_u^T]_i := [W]_{ui} = w_{ui}$, and for $m = 1, \ldots, q$, $F'(t-m)$ is the diagonal matrix of first order derivatives defined as $[F'(t-m)]_{ij} := 0$ if $i \neq j$, and $[F'(t-m)]_{ij} := f_i'(net_i(t-m))$ otherwise. Here T is the transposition operator, $[A]_{ij}$ is the element in the i-th column and j-th row of matrix A, and $[x]_i$ is the i-th component of vector x.

Using a matrix norm $\| \cdot \|_A$ compatible with vector norm $\| \cdot \|_x$, we define

    f'_{max} := \max_{m=1,\ldots,q} \{ \| F'(t-m) \|_A \}.

For $\max_{i=1,\ldots,n} \{ |x_i| \} \le \| x \|_x$ we get $|x^T y| \le n \, \| x \|_x \, \| y \|_x$. Since

    |f_v'(net_v(t-q))| \le \| F'(t-q) \|_A \le f'_{max},

we obtain the following inequality:

    \left| \frac{\partial \vartheta_v(t-q)}{\partial \vartheta_u(t)} \right| \le n \, (f'_{max})^q \, \| W_v \|_x \, \| W_u^T \|_x \, \| W \|_A^{q-2} \le n \, (f'_{max} \, \| W \|_A)^q .

This inequality results from

    \| W_v \|_x = \| W e_v \|_x \le \| W \|_A \, \| e_v \|_x \le \| W \|_A

and

    \| W_u^T \|_x = \| e_u W \|_x \le \| W \|_A \, \| e_u \|_x \le \| W \|_A,

where $e_k$ is the unit vector whose components are 0 except for the k-th component, which is 1. Note that this is a weak, extreme case upper bound; it will be reached only if all $\| F'(t-m) \|_A$ take on maximal values, and if the contributions of all paths across which error flows back from unit u to unit v have the same sign. Large $\| W \|_A$, however, typically result in small values of $\| F'(t-m) \|_A$, as confirmed by experiments (see, e.g., Hochreiter 1991).

For example, with norms

    \| W \|_A := \max_r \sum_s |w_{rs}|

and

    \| x \|_x := \max_r |x_r|,

we have $f'_{max} = 0.25$ for the logistic sigmoid. We observe that if

    |w_{ij}| \le w_{max} < \frac{4.0}{n} \quad \forall i, j,

then $\| W \|_A \le n \, w_{max} < 4.0$ will result in exponential decay; by setting $\tau := \frac{n \, w_{max}}{4.0} < 1.0$, we obtain

    \left| \frac{\partial \vartheta_v(t-q)}{\partial \vartheta_u(t)} \right| \le n \, \tau^q .

We refer to Hochreiter's 1991 thesis for additional results.
3.2 CONSTANT ERROR FLOW: NAIVE APPROACH
A single unit. To avoid vanishing error signals, how can we achieve constant error flow through a single unit j with a single connection to itself? According to the rules above, at time t, j's local error back flow is $\vartheta_j(t) = f_j'(net_j(t)) \, \vartheta_j(t+1) \, w_{jj}$. To enforce constant error flow through j, we require

    f_j'(net_j(t)) \, w_{jj} = 1.0 .

Note the similarity to Mozer's fixed time constant system (1992): a time constant of 1.0 is appropriate for potentially infinite time lags¹.

The constant error carrousel. Integrating the differential equation above, we obtain $f_j(net_j(t)) = \frac{net_j(t)}{w_{jj}}$ for arbitrary $net_j(t)$. This means: $f_j$ has to be linear, and unit j's activation has to remain constant:

    y^j(t+1) = f_j(net_j(t+1)) = f_j(w_{jj} \, y^j(t)) = y^j(t) .

In the experiments, this will be ensured by using the identity function $f_j$: $f_j(x) = x, \ \forall x$, and by setting $w_{jj} = 1.0$. We refer to this as the constant error carrousel (CEC). CEC will be LSTM's central feature (see Section 4).

¹We do not use the expression "time constant" in the differential sense, as, e.g., Pearlmutter (1995).
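A minimal numerical illustration of the CEC (this Python sketch is not from the paper; the stored value 0.7 is arbitrary): with the identity activation and $w_{jj} = 1.0$, both the stored activation and the locally backpropagated error are preserved over arbitrarily many time steps.

    W_JJ = 1.0          # fixed self-connection weight
    f = lambda x: x     # identity activation, so f'(x) = 1.0 everywhere

    # forward: a value written once is carried unchanged through time
    y = 0.7             # activation injected at some time step (arbitrary example value)
    for _ in range(1000):
        y = f(W_JJ * y)
    print("activation after 1000 steps:", y)    # still 0.7

    # backward: local error flow through the self-connection is scaled by f'(net) * w_jj = 1.0
    error = 1.0
    for _ in range(1000):
        error = 1.0 * W_JJ * error              # f'(net) = 1.0 for the identity function
    print("error after 1000 steps:", error)     # still 1.0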
Of course unit j will not only be connected to itself but also to other units. This invokes two obvious, related problems (also inherent in all other gradient-based approaches):

1. Input weight conflict: for simplicity, let us focus on a single additional input weight $w_{ji}$. Assume that the total error can be reduced by switching on unit j in response to a certain input, and keeping it active for a long time (until it helps to compute a desired output). Provided i is non-zero, since the same incoming weight has to be used for both storing certain inputs and ignoring others, $w_{ji}$ will often receive conflicting weight update signals during this time (recall that j is linear): these signals will attempt to make $w_{ji}$ participate in (1) storing the input (by switching on j) and (2) protecting the input (by preventing j from being switched off by irrelevant later inputs). This conflict makes learning difficult, and calls for a more context-sensitive mechanism for controlling "write operations" through input weights.

2. Output weight conflict: assume j is switched on and currently stores some previous input. For simplicity, let us focus on a single additional outgoing weight $w_{kj}$. The same $w_{kj}$ has to be used for both retrieving j's content at certain times and preventing j from disturbing k at other times. As long as unit j is non-zero, $w_{kj}$ will attract conflicting weight update signals generated during sequence processing: these signals will attempt to make $w_{kj}$ participate in (1) accessing the information stored in j and, at different times, (2) protecting unit k from being perturbed by j. For instance, with many tasks there are certain "short time lag errors" that can be reduced in early training stages. However, at later training stages j may suddenly start to cause avoidable errors in situations that already seemed under control by attempting to participate in reducing more difficult "long time lag errors". Again, this conflict makes learning difficult, and calls for a more context-sensitive mechanism for controlling "read operations" through output weights.

Of course, input and output weight conflicts are not specific for long time lags, but occur for short time lags as well. Their effects, however, become particularly pronounced in the long time lag case: as the time lag increases, (1) stored information must be protected against perturbation for longer and longer periods, and, especially in advanced stages of learning, (2) more and more already correct outputs also require protection against perturbation.

Due to the problems above, the naive approach does not work well except in case of certain simple problems involving local input/output representations and non-repeating input patterns (see Hochreiter 1991 and Silva et al. 1996). The next section shows how to do it right.
4 LONG SHORT-TERM MEMORY
Memory cells and gate units. To construct an architecture that allows for constant error flow through special, self-connected units without the disadvantages of the naive approach, we extend the constant error carrousel CEC embodied by the self-connected, linear unit j from Section 3.2 by introducing additional features. A multiplicative input gate unit is introduced to protect the memory contents stored in j from perturbation by irrelevant inputs. Likewise, a multiplicative output gate unit is introduced which protects other units from perturbation by currently irrelevant memory contents stored in j.

The resulting, more complex unit is called a memory cell (see Figure 1). The j-th memory cell is denoted $c_j$. Each memory cell is built around a central linear unit with a fixed self-connection (the CEC). In addition to $net_{c_j}$, $c_j$ gets input from a multiplicative unit $out_j$ (the "output gate"), and from another multiplicative unit $in_j$ (the "input gate"). $in_j$'s activation at time t is denoted by $y^{in_j}(t)$, $out_j$'s by $y^{out_j}(t)$. We have

    y^{out_j}(t) = f_{out_j}(net_{out_j}(t)), \qquad y^{in_j}(t) = f_{in_j}(net_{in_j}(t)),

where

    net_{out_j}(t) = \sum_u w_{out_j u} \, y^u(t-1),

and

    net_{in_j}(t) = \sum_u w_{in_j u} \, y^u(t-1).

We also have

    net_{c_j}(t) = \sum_u w_{c_j u} \, y^u(t-1).

The summation indices u may stand for input units, gate units, memory cells, or even conventional hidden units if there are any (see also the paragraph on "network topology" below). All these different types of units may convey useful information about the current state of the net. For instance, an input gate (output gate) may use inputs from other memory cells to decide whether to store (access) certain information in its memory cell. There even may be recurrent self-connections like $w_{c_j c_j}$. It is up to the user to define the network topology. See Figure 2 for an example.

At time t, $c_j$'s output $y^{c_j}(t)$ is computed as

    y^{c_j}(t) = y^{out_j}(t) \, h(s_{c_j}(t)),

where the "internal state" $s_{c_j}(t)$ is

    s_{c_j}(0) = 0, \qquad s_{c_j}(t) = s_{c_j}(t-1) + y^{in_j}(t) \, g(net_{c_j}(t)) \quad \text{for } t > 0.

The differentiable function g squashes $net_{c_j}$; the differentiable function h scales memory cell outputs computed from the internal state $s_{c_j}$.

[Figure 1 appears here: the internal data flow of memory cell $c_j$, showing the input squashing function g, the self-recurrent linear internal state $s_{c_j}$ (self-connection weight 1.0), the output squashing function h, and the multiplications by the gate activations $y^{in_j}$ and $y^{out_j}$.]

Figure 1: Architecture of memory cell $c_j$ (the box) and its gate units $in_j$, $out_j$. The self-recurrent connection (with weight 1.0) indicates feedback with a delay of 1 time step. It builds the basis of the "constant error carrousel" CEC. The gate units open and close access to CEC. See text and appendix A.1 for details.
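For concreteness, the forward dynamics defined above can be written out as a short Python sketch (an illustration, not the authors' code; the output ranges chosen for g and h follow the ranges used later in the experiments, and the toy weights are arbitrary):

    import math

    def sigmoid(x, lo=0.0, hi=1.0):
        # logistic sigmoid rescaled to the output range [lo, hi]
        return lo + (hi - lo) / (1.0 + math.exp(-x))

    def memory_cell_step(y_prev, w_in, w_out, w_c, s_prev):
        """One time step of a single LSTM memory cell c_j (1997 formulation, no forget gate).

        y_prev: dict of source-unit activations y^u(t-1)
        w_in, w_out, w_c: dicts of weights from the source units to in_j, out_j, and c_j
        s_prev: internal state s_c_j(t-1)
        """
        net_in  = sum(w_in[u]  * y_prev[u] for u in y_prev)   # net_in_j(t)
        net_out = sum(w_out[u] * y_prev[u] for u in y_prev)   # net_out_j(t)
        net_c   = sum(w_c[u]   * y_prev[u] for u in y_prev)   # net_c_j(t)

        y_in  = sigmoid(net_in)                    # input gate activation
        y_out = sigmoid(net_out)                   # output gate activation
        g     = sigmoid(net_c, lo=-2.0, hi=2.0)    # squashed cell input g(net_c_j)
        h     = lambda s: sigmoid(s, lo=-1.0, hi=1.0)

        s = s_prev + y_in * g                      # CEC: s_c_j(t) = s_c_j(t-1) + y_in_j(t) g(net_c_j(t))
        y_c = y_out * h(s)                         # cell output y_c_j(t) = y_out_j(t) h(s_c_j(t))
        return y_c, s

    # toy usage with two source units and arbitrary weights
    y_prev = {"u1": 1.0, "u2": 0.0}
    w = {"u1": 0.5, "u2": -0.3}
    y_c, s = memory_cell_step(y_prev, w_in=w, w_out=w, w_c=w, s_prev=0.0)
    print(y_c, s)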
Why gate units? To avoid input weight conflicts, $in_j$ controls the error flow to memory cell $c_j$'s input connections $w_{c_j i}$. To circumvent $c_j$'s output weight conflicts, $out_j$ controls the error flow from unit j's output connections. In other words, the net can use $in_j$ to decide when to keep or override information in memory cell $c_j$, and $out_j$ to decide when to access memory cell $c_j$ and when to prevent other units from being perturbed by $c_j$ (see Figure 1).

Error signals trapped within a memory cell's CEC cannot change, but different error signals flowing into the cell (at different times) via its output gate may get superimposed. The output gate will have to learn which errors to trap in its CEC, by appropriately scaling them. The input gate will have to learn when to release errors, again by appropriately scaling them. Essentially, the multiplicative gate units open and close access to constant error flow through CEC.
Distributed output representations typically do require output gates. Not always are both gate types necessary, though; one may be sufficient. For instance, in Experiments 2a and 2b in Section 5, it will be possible to use input gates only. In fact, output gates are not required in case of local output encoding; preventing memory cells from perturbing already learned outputs can be done by simply setting the corresponding weights to zero. Even in this case, however, output gates can be beneficial: they prevent the net's attempts at storing long time lag memories (which are usually hard to learn) from perturbing activations representing easily learnable short time lag memories. (This will prove quite useful in Experiment 1, for instance.)

Network topology. We use networks with one input layer, one hidden layer, and one output layer. The (fully) self-connected hidden layer contains memory cells and corresponding gate units (for convenience, we refer to both memory cells and gate units as being located in the hidden layer). The hidden layer may also contain "conventional" hidden units providing inputs to gate units and memory cells. All units (except for gate units) in all layers have directed connections (serve as inputs) to all units in the layer above (or to all higher layers; Experiments 2a and 2b).

Memory cell blocks. S memory cells sharing the same input gate and the same output gate form a structure called a "memory cell block of size S". Memory cell blocks facilitate information storage; as with conventional neural nets, it is not so easy to code a distributed input within a single cell. Since each memory cell block has as many gate units as a single memory cell (namely two), the block architecture can be even slightly more efficient (see paragraph "computational complexity"). A memory cell block of size 1 is just a simple memory cell. In the experiments (Section 5), we will use memory cell blocks of various sizes.
Learning. We use a variant of RTRL (e.g., Robinson and Fallside 1987) which properly takes into account the altered, multiplicative dynamics caused by input and output gates. However, to ensure non-decaying error backprop through internal states of memory cells, as with truncated BPTT (e.g., Williams and Peng 1990), errors arriving at "memory cell net inputs" (for cell $c_j$, this includes $net_{c_j}$, $net_{in_j}$, $net_{out_j}$) do not get propagated back further in time (although they do serve to change the incoming weights). Only within² memory cells, errors are propagated back through previous internal states $s_{c_j}$. To visualize this: once an error signal arrives at a memory cell output, it gets scaled by output gate activation and h'. Then it is within the memory cell's CEC, where it can flow back indefinitely without ever being scaled. Only when it leaves the memory cell through the input gate and g, it is scaled once more by input gate activation and g'. It then serves to change the incoming weights before it is truncated (see appendix for explicit formulae).
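The following Python fragment sketches one possible reading of this bookkeeping for a single memory cell's input weights (the names and structure are illustrative assumptions, not the authors' implementation; the exact update formulae are those of appendix A.1, which is not reproduced here):

    # Sketch of the forward bookkeeping for one memory cell's input weights w_c[u]
    # (simplified reading of the truncated scheme; gate-weight partials are handled analogously).

    def update_cell_partials(partials, g_prime_net_c, y_in, y_prev):
        # partials[u] approximates d s_c(t) / d w_c[u]; because of the CEC it is a running sum,
        # so it does not decay with the length of the time lag.
        for u, y_u in y_prev.items():
            partials[u] = partials.get(u, 0.0) + g_prime_net_c * y_in * y_u
        return partials

    def apply_truncated_update(w_c, partials, internal_state_error, learning_rate):
        # internal_state_error is the error that entered the cell through its output gate and h';
        # after this weight change it is truncated, i.e. not sent further back in time.
        for u in w_c:
            w_c[u] += learning_rate * internal_state_error * partials.get(u, 0.0)
        return w_c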
Computational complexity. As with Mozer's focused recurrent backprop algorithm (Mozer 1989), only the derivatives $\partial s_{c_j} / \partial w_{il}$ need to be stored and updated. Hence the LSTM algorithm is very efficient, with an excellent update complexity of O(W), where W is the number of weights (see details in appendix A.1). Hence, LSTM and BPTT for fully recurrent nets have the same update complexity per time step (while RTRL's is much worse). Unlike full BPTT, however, LSTM is local in space and time³: there is no need to store activation values observed during sequence processing in a stack with potentially unlimited size.

Abuse problem and solutions. In the beginning of the learning phase, error reduction may be possible without storing information over time. The network will thus tend to abuse memory cells, e.g., as bias cells (i.e., it might make their activations constant and use the outgoing connections as adaptive thresholds for other units). The potential difficulty is: it may take a long time to release abused memory cells and make them available for further learning. A similar "abuse problem" appears if two memory cells store the same (redundant) information. There are at least two solutions to the abuse problem: (1) Sequential network construction (e.g., Fahlman 1991): a memory cell and the corresponding gate units are added to the network whenever the error stops decreasing (see Experiment 2 in Section 5). (2) Output gate bias: each output gate gets a negative initial bias, to push initial memory cell activations towards zero. Memory cells with more negative bias automatically get "allocated" later (see Experiments 1, 3, 4, 5, 6 in Section 5).

[Figure 2 appears here: an example network with an input layer, a hidden layer containing two memory cell blocks of size 2 (with gate units in1, out1 and in2, out2), and an output layer.]

Figure 2: Example of a net with 8 input units, 4 output units, and 2 memory cell blocks of size 2. in1 marks the input gate, out1 marks the output gate, and cell1/block1 marks the first memory cell of block 1. cell1/block1's architecture is identical to the one in Figure 1, with gate units in1 and out1 (note that by rotating Figure 1 by 90 degrees anti-clockwise, it will match the corresponding parts of Figure 2). The example assumes dense connectivity: each gate unit and each memory cell sees all non-output units. For simplicity, however, outgoing weights of only one type of unit are shown for each layer. With the efficient, truncated update rule, error flows only through connections to output units, and through fixed self-connections within cell blocks (not shown here; see Figure 1). Error flow is truncated once it "wants" to leave memory cells or gate units. Therefore, no connection shown above serves to propagate error back to the unit from which the connection originates (except for connections to output units), although the connections themselves are modifiable. That is why the truncated LSTM algorithm is so efficient, despite its ability to bridge very long time lags. See text and appendix A.1 for details. Figure 2 actually shows the architecture used for Experiment 6a; only the bias of the non-input units is omitted.

Internal state drift and remedies. If memory cell $c_j$'s inputs are mostly positive or mostly negative, then its internal state $s_j$ will tend to drift away over time. This is potentially dangerous, for the $h'(s_j)$ will then adopt very small values, and the gradient will vanish. One way to circumvent this problem is to choose an appropriate function h. But h(x) = x, for instance, has the disadvantage of unrestricted memory cell output range. Our simple but effective way of solving drift problems at the beginning of learning is to initially bias the input gate $in_j$ towards zero. Although there is a tradeoff between the magnitudes of $h'(s_j)$ on the one hand and of $y^{in_j}$ and $f'_{in_j}$ on the other, the potential negative effect of input gate bias is negligible compared to the one of the drifting effect. With logistic sigmoid activation functions, there appears to be no need for fine-tuning the initial bias, as confirmed by Experiments 4 and 5 in Section 5.4.

²For intra-cellular backprop in a quite different context see also Doya and Yoshizawa (1989).
³Following Schmidhuber (1989), we say that a recurrent net algorithm is local in space if the update complexity per time step and weight does not depend on network size. We say that a method is local in time if its storage requirements do not depend on input sequence length. For instance, RTRL is local in time but not in space. BPTT is local in space but not in time.
5 EXPERIMENTS
Introduction. Which tasks are appropriate to demonstrate the quality of a novel long time lag algorithm? First of all, minimal time lags between relevant input signals and corresponding teacher signals must be long for all training sequences. In fact, many previous recurrent net algorithms sometimes manage to generalize from very short training sequences to very long test sequences. See, e.g., Pollack (1991). But a real long time lag problem does not have any short time lag exemplars in the training set. For instance, Elman's training procedure, BPTT, offline RTRL, online RTRL, etc., fail miserably on real long time lag problems. See, e.g., Hochreiter (1991) and Mozer (1992). A second important requirement is that the tasks should be complex enough such that they cannot be solved quickly by simple-minded strategies such as random weight guessing.

Guessing can outperform many long time lag algorithms. Recently we discovered (Schmidhuber and Hochreiter 1996, Hochreiter and Schmidhuber 1996, 1997) that many long time lag tasks used in previous work can be solved more quickly by simple random weight guessing than by the proposed algorithms. For instance, guessing solved a variant of Bengio and Frasconi's "parity problem" (1994) much faster⁴ than the seven methods tested by Bengio et al. (1994) and Bengio and Frasconi (1994). Similarly for some of Miller and Giles' problems (1993). Of course, this does not mean that guessing is a good algorithm. It just means that some previously used problems are not extremely appropriate to demonstrate the quality of previously proposed algorithms.

What's common to Experiments 1-6. All our experiments (except for Experiment 1) involve long minimal time lags; there are no short time lag training exemplars facilitating learning. Solutions to most of our tasks are sparse in weight space. They require either many parameters/inputs or high weight precision, such that random weight guessing becomes infeasible.

We always use on-line learning (as opposed to batch learning), and logistic sigmoids as activation functions. For Experiments 1 and 2, initial weights are chosen in the range [-0.2, 0.2], for the other experiments in [-0.1, 0.1]. Training sequences are generated randomly according to the various task descriptions. In slight deviation from the notation in Appendix A.1, each discrete time step of each input sequence involves three processing steps: (1) use the current input to set the input units, (2) compute activations of hidden units (including input gates, output gates, memory cells), and (3) compute output unit activations. Except for Experiments 1, 2a, and 2b, sequence elements are randomly generated on-line, and error signals are generated only at sequence ends. Net activations are reset after each processed input sequence.

For comparisons with recurrent nets taught by gradient descent, we give results only for RTRL, except for comparison 2a, which also includes BPTT. Note, however, that untruncated BPTT (see, e.g., Williams and Peng 1990) computes exactly the same gradient as offline RTRL. With long time lag problems, offline RTRL (or BPTT) and the online version of RTRL (no activation resets, online weight changes) lead to almost identical, negative results (as confirmed by additional simulations in Hochreiter 1991; see also Mozer 1992). This is because offline RTRL, online RTRL, and full BPTT all suffer badly from exponential error decay.

Our LSTM architectures are selected quite arbitrarily. If nothing is known about the complexity of a given problem, a more systematic approach would be: start with a very small net consisting of one memory cell. If this does not work, try two cells, etc. Alternatively, use sequential network construction (e.g., Fahlman 1991).

Outline of experiments.

Experiment 1 focuses on a standard benchmark test for recurrent nets: the embedded Reber grammar. Since it allows for training sequences with short time lags, it is not a long time lag problem. We include it because (1) it provides a nice example where LSTM's output gates are truly beneficial, and (2) it is a popular benchmark for recurrent nets that has been used by many authors; we want to include at least one experiment where conventional BPTT and RTRL do not fail completely (LSTM, however, clearly outperforms them). The embedded Reber grammar's minimal time lags represent a border case in the sense that it is still possible to learn to bridge them with conventional algorithms. Only slightly longer minimal time lags would make this almost impossible. The more interesting tasks in our paper, however, are those that RTRL, BPTT, etc. cannot solve at all.

Experiment 2 focuses on noise-free and noisy sequences involving numerous input symbols distracting from the few important ones. The most difficult task (Task 2c) involves hundreds of distractor symbols at random positions, and minimal time lags of 1000 steps. LSTM solves it, while BPTT and RTRL already fail in case of 10-step minimal time lags (see also, e.g., Hochreiter 1991 and Mozer 1992). For this reason RTRL and BPTT are omitted in the remaining, more complex experiments, all of which involve much longer time lags.

Experiment 3 addresses long time lag problems with noise and signal on the same input line. Experiments 3a/3b focus on Bengio et al.'s 1994 "2-sequence problem". Because this problem actually can be solved quickly by random weight guessing, we also include a far more difficult 2-sequence problem (3c) which requires learning real-valued, conditional expectations of noisy targets, given the inputs.

Experiments 4 and 5 involve distributed, continuous-valued input representations and require learning to store precise, real values for very long time periods. Relevant input signals can occur at quite different positions in input sequences. Again minimal time lags involve hundreds of steps. Similar tasks have never been solved by other recurrent net algorithms.

Experiment 6 involves tasks of a different complex type that also has not been solved by other recurrent net algorithms. Again, relevant input signals can occur at quite different positions in input sequences. The experiment shows that LSTM can extract information conveyed by the temporal order of widely separated inputs.

Subsection 5.7 will provide a detailed summary of experimental conditions in two tables for reference.

⁴It should be mentioned, however, that different input representations and different types of noise may lead to worse guessing performance (Yoshua Bengio, personal communication, 1996).
5.1 EXPERIMENT 1: EMBEDDED REBER GRAMMAR
Task. Our first task is to learn the "embedded Reber grammar", e.g., Smith and Zipser (1989), Cleeremans et al. (1989), and Fahlman (1991). Since it allows for training sequences with short time lags (of as few as 9 steps), it is not a long time lag problem. We include it for two reasons: (1) it is a popular recurrent net benchmark used by many authors; we wanted to have at least one experiment where RTRL and BPTT do not fail completely, and (2) it shows nicely how output gates can be beneficial.
[Figures 3 and 4 appear here: state-transition diagrams. Figure 3 shows the Reber grammar, a directed graph whose edges are labeled with the symbols B, T, S, X, P, V, E. Figure 4 shows the embedded Reber grammar, in which two inner boxes (each a copy of the Reber grammar of Figure 3) are wrapped by B, a branching T/P, and a closing T/P followed by E.]

Figure 3: Transition diagram for the Reber grammar.

Figure 4: Transition diagram for the embedded Reber grammar. Each box represents a copy of the Reber grammar (see Figure 3).
Starting at the leftmost node of the directed graph in Figure 4, symbol strings are generated sequentially (beginning with the empty string) by following edges, and appending the associated symbols to the current string, until the rightmost node is reached. Edges are chosen randomly if there is a choice (probability: 0.5). The net's task is to read strings, one symbol at a time, and to permanently predict the next symbol (error signals occur at every time step). To correctly predict the symbol before last, the net has to remember the second symbol.
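As an illustration of this generation procedure (not code from the paper), the following Python sketch generates embedded Reber strings. The transition table is the commonly used rendering of the Reber grammar; its edges should be checked against Figure 3.

    import random

    # One common rendering of the Reber grammar of Figure 3: state -> list of (symbol, next state).
    REBER_EDGES = {
        0: [("T", 1), ("P", 2)],
        1: [("S", 1), ("X", 3)],
        2: [("T", 2), ("V", 4)],
        3: [("X", 2), ("S", 5)],
        4: [("P", 3), ("V", 5)],
        5: [],                      # terminal node, reached just before emitting E below
    }

    def reber_string():
        state, symbols = 0, ["B"]
        while REBER_EDGES[state]:
            symbol, state = random.choice(REBER_EDGES[state])   # probability 0.5 where there is a choice
            symbols.append(symbol)
        symbols.append("E")
        return symbols

    def embedded_reber_string():
        # Figure 4: the second symbol (T or P) must be repeated just before the final E,
        # which is what the net has to remember.
        second = random.choice(["T", "P"])
        return ["B", second] + reber_string() + [second, "E"]

    print("".join(embedded_reber_string()))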
Comparison. We compare LSTM to "Elman nets trained by Elman's training procedure" (ELM) (results taken from Cleeremans et al. 1989), Fahlman's "Recurrent Cascade-Correlation" (RCC) (results taken from Fahlman 1991), and RTRL (results taken from Smith and Zipser (1989), where only the few successful trials are listed). It should be mentioned that Smith and Zipser actually make the task easier by increasing the probability of short time lag exemplars. We didn't do this for LSTM.

Training/Testing. We use a local input/output representation (7 input units, 7 output units). Following Fahlman, we use 256 training strings and 256 separate test strings. The training set is generated randomly; training exemplars are picked randomly from the training set. Test sequences are generated randomly, too, but sequences already used in the training set are not used for testing. After string presentation, all activations are reinitialized with zeros. A trial is considered successful if all string symbols of all sequences in both test set and training set are predicted correctly, that is, if the output unit(s) corresponding to the possible next symbol(s) is (are) always the most active ones.

Architectures. Architectures for RTRL, ELM, RCC are reported in the references listed above. For LSTM, we use 3 (4) memory cell blocks. Each block has 2 (1) memory cells. The output layer's only incoming connections originate at memory cells. Each memory cell and each gate unit receives incoming connections from all memory cells and gate units (the hidden layer is fully connected; less connectivity may work as well). The input layer has forward connections to all units in the hidden layer. The gate units are biased. These architecture parameters make it easy to store at least 3 input signals (architectures 3-2 and 4-1 are employed to obtain comparable numbers of weights for both architectures: 264 for 4-1 and 276 for 3-2). Other parameters may be appropriate as well, however. All sigmoid functions are logistic with output range [0, 1], except for h, whose range is [-1, 1], and g, whose range is [-2, 2]. All weights are initialized in [-0.2, 0.2], except for the output gate biases, which are initialized to -1, -2, and -3, respectively (see abuse problem, solution (2) of Section 4). We tried learning rates of 0.1, 0.2 and 0.5.

Results. We use 3 different, randomly generated pairs of training and test sets. With each such pair we run 10 trials with different initial weights. See Table 1 for results (mean of 30 trials). Unlike the other methods, LSTM always learns to solve the task. Even when we ignore the unsuccessful trials of the other approaches, LSTM learns much faster.

Importance of output gates. The experiment provides a nice example where the output gate is truly beneficial. Learning to store the first T or P should not perturb activations representing the more easily learnable transitions of the original Reber grammar. This is the job of the output gates. Without output gates, we did not achieve fast learning.
5.2 EXPERIMENT 2: NOISE-FREE AND NOISY SEQUENCES
Task 2a: noise-free sequences with long time lags. There are p+1 possible input symbols denoted $a_1, \ldots, a_{p-1}, a_p = x, a_{p+1} = y$. $a_i$ is locally represented by the p+1-dimensional vector whose i-th component is 1 (all other components are 0). A net with p+1 input units and p+1 output units sequentially observes input symbol sequences, one at a time, permanently trying to predict the next symbol; error signals occur at every single time step. To emphasize the "long time lag problem", we use a training set consisting of only two very similar sequences: $(y, a_1, a_2, \ldots, a_{p-1}, y)$ and $(x, a_1, a_2, \ldots, a_{p-1}, x)$. Each is selected with probability 0.5. To predict the final element, the net has to learn to store a representation of the first element for p time steps.
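As a concrete illustration (a sketch, not part of the original experiments), the following Python lines build the two training sequences and the local (one-hot) coding of the p+1 symbols:

    import random

    def one_hot(index, size):
        v = [0.0] * size
        v[index] = 1.0
        return v

    def task_2a_sequence(p):
        """Return one Task 2a training sequence as a list of one-hot vectors.

        Indices 0..p-2 stand for a_1..a_{p-1}; index p-1 is x, index p is y
        (p + 1 symbols in total, so each vector has p + 1 components)."""
        x, y = p - 1, p
        first = random.choice([x, y])                      # each sequence is picked with probability 0.5
        symbols = [first] + list(range(p - 1)) + [first]   # (x, a_1, ..., a_{p-1}, x) or the y version
        return [one_hot(s, p + 1) for s in symbols]

    seq = task_2a_sequence(p=100)
    print(len(seq))   # p + 1 = 101 symbols; the first one must be remembered for p time steps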
We compare "Real-Time Recurrent Learning" for fully recurrent nets (RTRL), "Back-Propagation Through Time" (BPTT), the sometimes very successful 2-net "Neural Sequence Chunker" (CH, Schmidhuber 1992b), and our new method (LSTM). In all cases, weights are initialized in [-0.2, 0.2]. Due to limited computation time, training is stopped after 5 million sequence presentations.

    method   hidden units       # weights   learning rate   % of success      success after
    RTRL     3                  170         0.05            "some fraction"   173,000
    RTRL     12                 494         0.1             "some fraction"   25,000
    ELM      15                 435                         0                 >200,000
    RCC      7-9                119-198                     50                182,000
    LSTM     4 blocks, size 1   264         0.1             100               39,740
    LSTM     3 blocks, size 2   276         0.1             100               21,730
    LSTM     3 blocks, size 2   276         0.2             97                14,060
    LSTM     4 blocks, size 1   264         0.5             97                9,500
    LSTM     3 blocks, size 2   276         0.5             100               8,440

Table 1: EXPERIMENT 1: Embedded Reber grammar: percentage of successful trials and number of sequence presentations until success for RTRL (results taken from Smith and Zipser 1989), "Elman net trained by Elman's procedure" (results taken from Cleeremans et al. 1989), "Recurrent Cascade-Correlation" (results taken from Fahlman 1991) and our new approach (LSTM). Weight numbers in the first 4 rows are estimates; the corresponding papers do not provide all the technical details. Only LSTM almost always learns to solve the task (only two failures out of 150 trials). Even when we ignore the unsuccessful trials of the other approaches, LSTM learns much faster (the number of required training examples in the bottom row varies between 3,800 and 24,100).
A successful run is one that fulfills the following criterion: after training, during 10,000 successive, randomly chosen input sequences, the maximal absolute error of all output units is always below 0.25.

Architectures. RTRL: one self-recurrent hidden unit, p+1 non-recurrent output units. Each layer has connections from all layers below. All units use the logistic sigmoid activation function with output range [0, 1].

BPTT: same architecture as the one trained by RTRL.

CH: both net architectures like RTRL's, but one net has an additional output for predicting the hidden unit of the other one (see Schmidhuber 1992b for details).

LSTM: like with RTRL, but the hidden unit is replaced by a memory cell and an input gate (no output gate required). g is the logistic sigmoid, and h is the identity function h: h(x) = x, for all x. Memory cell and input gate are added once the error has stopped decreasing (see abuse problem: solution (1) in Section 4).

Results. Using RTRL and a short 4 time step delay (p = 4), 7/9 of all trials were successful. No trial was successful with p = 10. With long time lags, only the neural sequence chunker and LSTM achieved successful trials, while BPTT and RTRL failed. With p = 100, the 2-net sequence chunker solved the task in only 1/3 of all trials. LSTM, however, always learned to solve the task. Comparing successful trials only, LSTM learned much faster. See Table 2 for details. It should be mentioned, however, that a hierarchical chunker can also always quickly solve this task (Schmidhuber 1992c, 1993).
Task 2b: no local regularities. With the task above, the chunker sometimes learns to correctly predict the final element, but only because of predictable local regularities in the input stream that allow for compressing the sequence. In an additional, more difficult task (involving many more different possible sequences), we remove compressibility by replacing the deterministic subsequence $(a_1, a_2, \ldots, a_{p-1})$ by a random subsequence (of length p-1) over the alphabet $a_1, a_2, \ldots, a_{p-1}$. We obtain 2 classes (two sets of sequences) $\{(y, a_{i_1}, a_{i_2}, \ldots, a_{i_{p-1}}, y) \mid 1 \le i_1, i_2, \ldots, i_{p-1} \le p-1\}$ and $\{(x, a_{i_1}, a_{i_2}, \ldots, a_{i_{p-1}}, x) \mid 1 \le i_1, i_2, \ldots, i_{p-1} \le p-1\}$. Again, every next sequence element has to be predicted. The only totally predictable targets, however, are x and y, which occur at sequence ends. Training exemplars are chosen randomly from the 2 classes.

Architectures and parameters are the same as in Experiment 2a. A successful run is one that fulfills the following criterion: after training, during 10,000 successive, randomly chosen input sequences, the maximal absolute error of all output units is below 0.25 at sequence end.

    Method   Delay p   Learning rate   # weights   % Successful trials   Success after
    RTRL     4         1.0             36          78                    1,043,000
    RTRL     4         4.0             36          56                    892,000
    RTRL     4         10.0            36          22                    254,000
    RTRL     10        1.0-10.0        144         0                     > 5,000,000
    RTRL     100       1.0-10.0        10404       0                     > 5,000,000
    BPTT     100       1.0-10.0        10404       0                     > 5,000,000
    CH       100       1.0             10506       33                    32,400
    LSTM     100       1.0             10504       100                   5,040

Table 2: Task 2a: Percentage of successful trials and number of training sequences until success, for "Real-Time Recurrent Learning" (RTRL), "Back-Propagation Through Time" (BPTT), neural sequence chunking (CH), and the new method (LSTM). Table entries refer to means of 18 trials. With 100 time step delays, only CH and LSTM achieve successful trials. Even when we ignore the unsuccessful trials of the other approaches, LSTM learns much faster.

Results. As expected, the chunker failed to solve this task (so did BPTT and RTRL, of course). LSTM, however, was always successful. On average (mean of 18 trials), success for p = 100 was achieved after 5,680 sequence presentations. This demonstrates that LSTM does not require sequence regularities to work well.
Task 2c: very long time lags, no local regularities. This is the most difficult task in this subsection. To our knowledge no other recurrent net algorithm can solve it. Now there are p+4 possible input symbols denoted $a_1, \ldots, a_{p-1}, a_p, a_{p+1} = e, a_{p+2} = b, a_{p+3} = x, a_{p+4} = y$. $a_1, \ldots, a_p$ are also called "distractor symbols". Again, $a_i$ is locally represented by the p+4-dimensional vector whose i-th component is 1 (all other components are 0). A net with p+4 input units and 2 output units sequentially observes input symbol sequences, one at a time. Training sequences are chosen randomly from the union of two very similar subsets of sequences: $\{(b, x, a_{i_1}, a_{i_2}, \ldots, a_{i_{q+k}}, e, x) \mid 1 \le i_1, i_2, \ldots, i_{q+k} \le p\}$ and $\{(b, y, a_{i_1}, a_{i_2}, \ldots, a_{i_{q+k}}, e, y) \mid 1 \le i_1, i_2, \ldots, i_{q+k} \le p\}$. To produce a training sequence, we (1) randomly generate a sequence prefix of length q+2, (2) randomly generate additional sequence elements ($\neq b, e, x, y$), each with probability 9/10, or, alternatively, conclude the sequence with probability 1/10. In the latter case, we (3) conclude the sequence with x or y, depending on the second element. For a given k, this leads to a uniform distribution on the possible sequences with length q+k+4. The minimal sequence length is q+4; the expected length is

    4 + \sum_{k=0}^{\infty} \frac{1}{10} \left( \frac{9}{10} \right)^k (q+k) = q + 14 .

The expected number of occurrences of element $a_i$, $1 \le i \le p$, in a sequence is $(q+10)/p \ge q/p$. The goal is to predict the last symbol, which always occurs after the "trigger symbol" e. Error signals are generated only at sequence ends. To predict the final element, the net has to learn to store a representation of the second element for at least q+1 time steps (until it sees the trigger symbol e). Success is defined as "prediction error (for the final sequence element) of both output units always below 0.2, for 10,000 successive, randomly chosen input sequences".
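The following Python sketch summarizes the generation procedure just described (illustrative only; the symbol names are a hypothetical encoding, and the loop implements the 9/10 continue / 1/10 conclude rule):

    import random

    def task_2c_sequence(p, q):
        """Generate one Task 2c training sequence as a list of symbol names.

        'a1'..'ap' are the p distractor symbols; 'b', 'e', 'x', 'y' are the marker symbols."""
        distractor = lambda: "a%d" % random.randint(1, p)
        second = random.choice(["x", "y"])
        seq = ["b", second] + [distractor() for _ in range(q)]   # prefix of length q + 2
        while random.random() < 0.9:                             # probability 9/10: append another distractor
            seq.append(distractor())
        seq += ["e", second]                                     # trigger symbol, then the symbol to be predicted
        return seq

    s = task_2c_sequence(p=50, q=50)
    print(len(s), s[1] == s[-1])   # length is q + k + 4; the final symbol repeats the second one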
Architecture/Learning. The net has p + 4 input units and 2 output units. Weights are
|
|
|
|
initialized in -0.2,0.2]. To avoid too much learning time variance due to di erent weight initial-
|
|
|
|
izations, the hidden layer gets two memory cells (two cell blocks of size 1 | although one would
|
|
|
|
be su cient). There are no other hidden units. The output layer receives connections only from
|
|
|
|
memory cells. Memory cells and gate units receive connections from input units, memory cells
|
|
|
|
and gate units (i.e., the hidden layer is fully connected). No bias weights are used. h and g are
|
|
|
|
logistic sigmoids with output ranges 1; 1] and 2; 2], respectively. The learning rate is 0.01.
|
|
|
|
14
|
|
|
|
q (time lag 1) p (# random inputs) pq # weights Success after
|
|
|
|
50
|
|
|
|
50
|
|
|
|
1 364
|
|
|
|
30,000
|
|
|
|
100
|
|
|
|
100
|
|
|
|
1 664
|
|
|
|
31,000
|
|
|
|
200
|
|
|
|
200
|
|
|
|
1 1264
|
|
|
|
33,000
|
|
|
|
500
|
|
|
|
500
|
|
|
|
1 3064
|
|
|
|
38,000
|
|
|
|
1,000
|
|
|
|
1,000
|
|
|
|
1 6064
|
|
|
|
49,000
|
|
|
|
1,000
|
|
|
|
500
|
|
|
|
2 3064
|
|
|
|
49,000
|
|
|
|
1,000
|
|
|
|
200
|
|
|
|
5 1264
|
|
|
|
75,000
|
|
|
|
1,000
|
|
|
|
100
|
|
|
|
10 664
|
|
|
|
135,000
|
|
|
|
1,000
|
|
|
|
50
|
|
|
|
20 364
|
|
|
|
203,000
|
|
|
|
Tnnuuammblbbeeerr3:ooffToaacsvckauirl2arceb:nlecLedSsiTsotMfraacwtgoiirtvhesnyvemdribysotrllsaocn(tgporm+siy4nmimibsoatllhitenimnaeusmelaqbguesernqocfe+.in1Tphauent druingahittlsom)t.oostfpq
|
|
|
|
noise. is the column
|
|
|
|
p is the expected lists the
|
|
|
|
number of training sequences required by LSTM (BPTT, RTRL and the other competitors have
|
|
|
|
no chance of solving this task). If we let the number of distractor symbols (and weights) increase
|
|
|
|
in proportion to the time lag, learning time increases very slowly. The lower block illustrates the
|
|
|
|
expected slow-down due to increased frequency of distractor symbols.
|
|
|
|
Note that the minimal time lag is q + 1 | the net never sees short training sequences facilitating the classi cation of long test sequences.
|
|
Results. 20 trials were made for all tested pairs (p; q). Table 3 lists the mean of the number
|
|
of training sequences required by LSTM to achieve success (BPTT and RTRL have no chance of solving non-trivial tasks with minimal time lags of 1000 steps).
|
|
Scaling. Table 3 shows that if we let the number of input symbols (and weights) increase
|
|
in proportion to the time lag, learning time increases very slowly. This is a another remarkable property of LSTM not shared by any other method we are aware of. Indeed, RTRL and BPTT are far from scaling reasonably | instead, they appear to scale exponentially, and appear quite
|
|
udsisetlDreasiscsttworrhaecsnytmotrhbeoinltsi.mueIennlcacrgeesa.seixInncgeTetdahbiaslseffr3ee,wqtuhaeesncc1yo0ludstmeecpnrse.haesaesdeldeabrnyinpqggsipveesedt,heanexepeecctteddufreeqtouewnceyighotf
|
|
oscillations caused by frequently observed input symbols.
5.3 EXPERIMENT 3: NOISE AND SIGNAL ON SAME CHANNEL

This experiment serves to illustrate that LSTM does not encounter fundamental problems if noise and signal are mixed on the same input line. We initially focus on Bengio et al.'s simple 1994 "2-sequence problem"; in Experiment 3c we will then pose a more challenging 2-sequence problem.
Task 3a ("2-sequence problem"). The task is to observe and then classify input sequences. There are two classes, each occurring with probability 0.5. There is only one input line. Only the first N real-valued sequence elements convey relevant information about the class. Sequence elements at positions t > N are generated by a Gaussian with mean zero and variance 0.2. Case N = 1: the first sequence element is 1.0 for class 1, and -1.0 for class 2. Case N = 3: the first three elements are 1.0 for class 1 and -1.0 for class 2. The target at the sequence end is 1.0 for class 1 and 0.0 for class 2. Correct classification is defined as "absolute output error at sequence end below 0.2". Given a constant T, the sequence length is randomly selected between T and T + T/10 (a difference to Bengio et al.'s problem is that they also permit shorter sequences of length T/2).
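The data distribution is simple enough to spell out; the following sketch (Python; the helper name and the use of the standard library's Gaussian sampler are ours) generates one labeled example:

    import random

    def two_sequence_example(T, N=1):
        """One 2-sequence example: (sequence, target at sequence end)."""
        cls = random.choice([1, 2])                     # two classes, p = 0.5 each
        length = T + random.randint(0, T // 10)         # length in [T, T + T/10]
        seq = [random.gauss(0.0, 0.2 ** 0.5) for _ in range(length)]  # variance 0.2
        for t in range(N):                              # informative first N elements
            seq[t] = 1.0 if cls == 1 else -1.0
        return seq, 1.0 if cls == 1 else 0.0

Every element after position N is pure noise on the same input line as the signal, which is the point of the experiment.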
Guessing. Bengio et al. (1994) and Bengio and Frasconi (1994) tested 7 different methods on the 2-sequence problem. We discovered, however, that random weight guessing easily outperforms them all, because the problem is so simple (see footnote 5 below). See Schmidhuber and Hochreiter (1996) and Hochreiter and Schmidhuber (1996, 1997) for additional results in this vein.

T | N | stop: ST1 | stop: ST2 | # weights | ST2: fraction misclassified
 100 | 3 |  27,380 |  39,850 | 102 | 0.000195
 100 | 1 |  58,370 |  64,330 | 102 | 0.000117
1000 | 3 | 446,850 | 452,460 | 102 | 0.000078

Table 4: Task 3a: Bengio et al.'s 2-sequence problem. T is minimal sequence length. N is the number of information-conveying elements at sequence begin. The column headed by ST1 (ST2) gives the number of sequence presentations required to achieve stopping criterion ST1 (ST2). The rightmost column lists the fraction of misclassified post-training sequences (with absolute error > 0.2) from a test set consisting of 2560 sequences (tested after ST2 was achieved). All values are means of 10 trials. We discovered, however, that this problem is so simple that random weight guessing solves it faster than LSTM and any other method for which there are published results.
LSTM architecture. We use a 3-layer net with 1 input unit, 1 output unit, and 3 cell blocks of size 1. The output layer receives connections only from memory cells. Memory cells and gate units receive inputs from input units, memory cells and gate units, and have bias weights. Gate units and output unit are logistic sigmoid in [0, 1], h in [-1, 1], and g in [-2, 2].
Training/Testing. All weights (except the bias weights to gate units) are randomly initialized in the range [-0.1, 0.1]. The first input gate bias is initialized with -1.0, the second with -3.0, and the third with -5.0. The first output gate bias is initialized with -2.0, the second with -4.0 and the third with -6.0. The precise initialization values hardly matter though, as confirmed by additional experiments. The learning rate is 1.0. All activations are reset to zero at the beginning of a new sequence.

We stop training (and judge the task as being solved) according to the following criteria: ST1: none of 256 sequences from a randomly chosen test set is misclassified. ST2: ST1 is satisfied, and mean absolute test set error is below 0.01. In case of ST2, an additional test set consisting of 2560 randomly chosen sequences is used to determine the fraction of misclassified sequences.

Results. See Table 4. The results are means of 10 trials with different weight initializations in the range [-0.1, 0.1]. LSTM is able to solve this problem, though by far not as fast as random weight guessing (see paragraph "Guessing" above). Clearly, this trivial problem does not provide a very good testbed to compare performance of various non-trivial algorithms. Still, it demonstrates that LSTM does not encounter fundamental problems when faced with signal and noise on the same channel.
Task 3b. Architecture, parameters, etc. like in Task 3a, but now with Gaussian noise (mean 0 and variance 0.2) added to the information-conveying elements (t <= N). We stop training (and judge the task as being solved) according to the following, slightly redefined criteria: ST1: less than 6 out of 256 sequences from a randomly chosen test set are misclassified. ST2: ST1 is satisfied, and mean absolute test set error is below 0.04. In case of ST2, an additional test set consisting of 2560 randomly chosen sequences is used to determine the fraction of misclassified sequences.

Results. See Table 5. The results represent means of 10 trials with different weight initializations. LSTM easily solves the problem.
Task 3c. Architecture, parameters, etc. like in Task 3a, but with a few essential changes that make the task non-trivial: the targets are 0.2 and 0.8 for class 1 and class 2, respectively, and there is Gaussian noise on the targets (mean 0 and variance 0.1; st. dev. 0.32). To minimize mean squared error, the system has to learn the conditional expectations of the targets given the inputs. Misclassification is defined as "absolute difference between output and noise-free target (0.2 for class 1 and 0.8 for class 2) > 0.1". The network output is considered acceptable if the mean absolute difference between noise-free target and output is below 0.015. Since this requires high weight precision, Task 3c (unlike 3a and 3b) cannot be solved quickly by random weight guessing.

Footnote 5: It should be mentioned, however, that different input representations and different types of noise may lead to worse guessing performance (Yoshua Bengio, personal communication, 1996).

T | N | stop: ST1 | stop: ST2 | # weights | ST2: fraction misclassified
 100 | 3 |  41,740 |  43,250 | 102 | 0.00828
 100 | 1 |  74,950 |  78,430 | 102 | 0.01500
1000 | 1 | 481,060 | 485,080 | 102 | 0.01207

Table 5: Task 3b: modified 2-sequence problem. Same as in Table 4, but now the information-conveying elements are also perturbed by noise.

T | N | stop | # weights | fraction misclassified | av. difference to mean
100 | 3 | 269,650 | 102 | 0.00558 | 0.014
100 | 1 | 565,640 | 102 | 0.00441 | 0.012

Table 6: Task 3c: modified, more challenging 2-sequence problem. Same as in Table 4, but with noisy real-valued targets. The system has to learn the conditional expectations of the targets given the inputs. The rightmost column provides the average difference between network output and expected target. Unlike 3a and 3b, this task cannot be solved quickly by random weight guessing.

Training/Testing. The learning rate is 0.1. We stop training according to the following criterion: none of 256 sequences from a randomly chosen test set is misclassified, and mean absolute difference between noise-free target and output is below 0.015. An additional test set consisting of 2560 randomly chosen sequences is used to determine the fraction of misclassified sequences.

Results. See Table 6. The results represent means of 10 trials with different weight initializations. Despite the noisy targets, LSTM still can solve the problem by learning the expected target values.
5.4 EXPERIMENT 4: ADDING PROBLEM

The difficult task in this section is of a type that has never been solved by other recurrent net algorithms. It shows that LSTM can solve long time lag problems involving distributed, continuous-valued representations.

Task. Each element of each input sequence is a pair of components. The first component is a real value randomly chosen from the interval [-1, 1]; the second is either 1.0, 0.0, or -1.0, and is used as a marker: at the end of each sequence, the task is to output the sum of the first components of those pairs that are marked by second components equal to 1.0. Sequences have random lengths between the minimal sequence length T and T + T/10. In a given sequence exactly two pairs are marked, as follows: first we randomly select and mark one of the first ten pairs (whose first component we call X1); then we randomly select and mark one of the first T/2 - 1 still unmarked pairs (whose first component we call X2). The second components of all remaining pairs are zero except for the first and final pair, whose second components are -1. (In the rare case where the first pair of the sequence gets marked, we set X1 to zero.) An error signal is generated only at the sequence end: the target is 0.5 + (X1 + X2)/4.0 (the sum X1 + X2 scaled to the interval [0, 1]). A sequence is processed correctly if the absolute error at the sequence end is below 0.04.
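Since the marking scheme is easy to get subtly wrong, a generator sketch may help (Python; names are ours, and we apply the "marked first pair contributes zero" rule to either marked position, which the text states only for X1):

    import random

    def adding_problem_example(T):
        """One Adding Problem example: (list of (value, marker) pairs, target)."""
        length = T + random.randint(0, T // 10)
        values = [random.uniform(-1.0, 1.0) for _ in range(length)]
        markers = [0.0] * length
        markers[0] = markers[-1] = -1.0               # first and final pair get -1
        i = random.randrange(0, 10)                   # one of the first ten pairs
        j = i
        while j == i:                                 # one of the first T/2 - 1 pairs
            j = random.randrange(0, T // 2 - 1)
        markers[i] = markers[j] = 1.0
        x1 = 0.0 if i == 0 else values[i]             # rare case: first pair marked
        x2 = 0.0 if j == 0 else values[j]
        target = 0.5 + (x1 + x2) / 4.0                # sum rescaled into [0, 1]
        return list(zip(values, markers)), target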
Architecture. We use a 3-layer net with 2 input units, 1 output unit, and 2 cell blocks of size 2. The output layer receives connections only from memory cells. Memory cells and gate units receive inputs from memory cells and gate units (i.e., the hidden layer is fully connected; less connectivity may work as well). The input layer has forward connections to all units in the hidden layer. All non-input units have bias weights. These architecture parameters make it easy to store at least 2 input signals (a cell block size of 1 works well, too). All activation functions are logistic with output range [0, 1], except for h, whose range is [-1, 1], and g, whose range is [-2, 2].

T | minimal lag | # weights | # wrong predictions | Success after
 100 |  50 | 93 | 1 out of 2560 |  74,000
 500 | 250 | 93 | 0 out of 2560 | 209,000
1000 | 500 | 93 | 1 out of 2560 | 853,000

Table 7: EXPERIMENT 4: Results for the Adding Problem. T is the minimal sequence length, T/2 the minimal time lag. "# wrong predictions" is the number of incorrectly processed sequences (error > 0.04) from a test set containing 2560 sequences. The rightmost column gives the number of training sequences required to achieve the stopping criterion. All values are means of 10 trials. For T = 1000 the number of required training examples varies between 370,000 and 2,020,000, exceeding 700,000 in only 3 cases.
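As a sanity check on the "# weights" column of Table 7 (the arithmetic is ours, not spelled out in the text): the hidden layer contains 4 cell units and 4 gate units, each receiving the 2 input units, all 8 hidden-layer outputs (4 cell outputs and 4 gate activations), and a bias, while the output unit receives the 4 cell outputs and a bias. Hence

    8 * (2 + 8 + 1) + (4 + 1) = 88 + 5 = 93,

matching the table.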
State drift versus initial bias. Note that the task requires storing the precise values of real numbers for long durations; the system must learn to protect memory cell contents against even minor internal state drift (see Section 4). To study the significance of the drift problem, we make the task even more difficult by biasing all non-input units, thus artificially inducing internal state drift. All weights (including the bias weights) are randomly initialized in the range [-0.1, 0.1]. Following Section 4's remedy for state drifts, the first input gate bias is initialized with -3.0, the second with -6.0 (though the precise values hardly matter, as confirmed by additional experiments).
Training/Testing. The learning rate is 0.5. Training is stopped once the average training error is below 0.01, and the 2000 most recent sequences were processed correctly.

Results. With a test set consisting of 2560 randomly chosen sequences, the average test set error was always below 0.01, and there were never more than 3 incorrectly processed sequences. Table 7 shows details.
The experiment demonstrates: (1) LSTM is able to work well with distributed representations. (2) LSTM is able to learn to perform calculations involving continuous values. (3) Since the system manages to store continuous values without deterioration for minimal delays of T/2 time steps, there is no significant, harmful internal state drift.
5.5 EXPERIMENT 5: MULTIPLICATION PROBLEM

One may argue that LSTM is a bit biased towards tasks such as the Adding Problem from the previous subsection. Solutions to the Adding Problem may exploit the CEC's built-in integration capabilities. Although this CEC property may be viewed as a feature rather than a disadvantage (integration seems to be a natural subtask of many tasks occurring in the real world), the question arises whether LSTM can also solve tasks with inherently non-integrative solutions. To test this, we change the problem by requiring the final target to equal the product (instead of the sum) of earlier marked inputs.
Task. Like the task in Section 5.4, except that the first component of each pair is a real value randomly chosen from the interval [0, 1]. In the rare case where the first pair of the input sequence gets marked, we set X1 to 1.0. The target at sequence end is the product X1 X2.

Architecture. Like in Section 5.4. All weights (including the bias weights) are randomly initialized in the range [-0.1, 0.1].
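Relative to the adding-problem sketch above, only the value range, the rare-case default, and the final target change; a hedged variant (names ours):

    def multiplication_example(T):
        """One Multiplication Problem example (values in [0, 1], target X1 * X2)."""
        length = T + random.randint(0, T // 10)
        values = [random.random() for _ in range(length)]
        markers = [0.0] * length
        markers[0] = markers[-1] = -1.0
        i = random.randrange(0, 10)
        j = i
        while j == i:
            j = random.randrange(0, T // 2 - 1)
        markers[i] = markers[j] = 1.0
        x1 = 1.0 if i == 0 else values[i]   # rare case: marked first pair counts as 1.0
        x2 = values[j]
        return list(zip(values, markers)), x1 * x2

The multiplicative identity 1.0 plays the role here that 0.0, the additive identity, played in the adding task.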
Training/Testing. The learning rate is 0.1. We test performance twice: as soon as less than nseq of the 2000 most recent training sequences lead to absolute errors exceeding 0.04, where nseq = 140, and nseq = 13. Why these values? nseq = 140 is sufficient to learn storage of the relevant inputs. It is not enough though to fine-tune the precise final outputs. nseq = 13, however, leads to quite satisfactory results.

T | minimal lag | # weights | nseq | # wrong predictions | MSE | Success after
100 | 50 | 93 | 140 | 139 out of 2560 | 0.0223 |   482,000
100 | 50 | 93 |  13 |  14 out of 2560 | 0.0139 | 1,273,000

Table 8: EXPERIMENT 5: Results for the Multiplication Problem. T is the minimal sequence length, T/2 the minimal time lag. We test on a test set containing 2560 sequences as soon as less than nseq of the 2000 most recent training sequences lead to error > 0.04. "# wrong predictions" is the number of test sequences with error > 0.04. MSE is the mean squared error on the test set. The rightmost column lists numbers of training sequences required to achieve the stopping criterion. All values are means of 10 trials.

Results. For nseq = 140 (nseq = 13), with a test set consisting of 2560 randomly chosen sequences, the average test set error was always below 0.026 (0.013), and there were never more than 170 (15) incorrectly processed sequences. Table 8 shows details. (A net with additional standard hidden units or with a hidden layer above the memory cells may learn the fine-tuning part more quickly.)
The experiment demonstrates: LSTM can solve tasks involving both continuous-valued representations and non-integrative information processing.

5.6 EXPERIMENT 6: TEMPORAL ORDER

In this subsection, LSTM solves other difficult (but artificial) tasks that have never been solved by previous recurrent net algorithms. The experiment shows that LSTM is able to extract information conveyed by the temporal order of widely separated inputs.
Task 6a: two relevant, widely separated symbols. The goal is to classify sequences. Elements and targets are represented locally (input vectors with only one non-zero bit). The sequence starts with an E, ends with a B (the "trigger symbol") and otherwise consists of randomly chosen symbols from the set {a, b, c, d} except for two elements at positions t1 and t2 that are either X or Y. The sequence length is randomly chosen between 100 and 110, t1 is randomly chosen between 10 and 20, and t2 is randomly chosen between 50 and 60. There are 4 sequence classes Q, R, S, U which depend on the temporal order of X and Y. The rules are: X, X → Q; X, Y → R; Y, X → S; Y, Y → U.

Task 6b: three relevant, widely separated symbols. Again, the goal is to classify sequences. Elements/targets are represented locally. The sequence starts with an E, ends with a B (the "trigger symbol"), and otherwise consists of randomly chosen symbols from the set {a, b, c, d} except for three elements at positions t1, t2 and t3 that are either X or Y. The sequence length is randomly chosen between 100 and 110, t1 is randomly chosen between 10 and 20, t2 is randomly chosen between 33 and 43, and t3 is randomly chosen between 66 and 76. There are 8 sequence classes Q, R, S, U, V, A, B, C which depend on the temporal order of the Xs and Ys. The rules are: X, X, X → Q; X, X, Y → R; X, Y, X → S; X, Y, Y → U; Y, X, X → V; Y, X, Y → A; Y, Y, X → B; Y, Y, Y → C.

There are as many output units as there are classes. Each class is locally represented by a binary target vector with one non-zero component. With both tasks, error signals occur only at the end of a sequence. The sequence is classified correctly if the final absolute error of all output units is below 0.3.
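The class rules of Task 6a reduce to a lookup on the ordered pair of relevant symbols; a generator sketch (Python, names ours):

    import random

    RULES_6A = {('X', 'X'): 'Q', ('X', 'Y'): 'R', ('Y', 'X'): 'S', ('Y', 'Y'): 'U'}

    def task_6a_sequence():
        """One Task 6a example: (symbol sequence, class label)."""
        length = random.randint(100, 110)
        t1 = random.randint(10, 20)                   # first relevant position
        t2 = random.randint(50, 60)                   # second relevant position
        seq = [random.choice('abcd') for _ in range(length)]
        seq[0], seq[-1] = 'E', 'B'                    # start and trigger symbols
        relevant = (random.choice('XY'), random.choice('XY'))
        seq[t1], seq[t2] = relevant
        return seq, RULES_6A[relevant]

Task 6b works the same way with three relevant positions and an 8-entry rule table.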
Architecture. We use a 3-layer net with 8 input units, 2 (3) cell blocks of size 2 and 4 (8) output units for Task 6a (6b). Again all non-input units have bias weights, and the output layer receives connections from memory cells only. Memory cells and gate units receive inputs from input units, memory cells and gate units (i.e., the hidden layer is fully connected; less connectivity may work as well). The architecture parameters for Task 6a (6b) make it easy to store at least 2 (3) input signals. All activation functions are logistic with output range [0, 1], except for h, whose range is [-1, 1], and g, whose range is [-2, 2].

Training/Testing. The learning rate is 0.5 (0.1) for Experiment 6a (6b). Training is stopped once the average training error falls below 0.1 and the 2000 most recent sequences were classified correctly. All weights are initialized in the range [-0.1, 0.1]. The first input gate bias is initialized with -2.0, the second with -4.0, and (for Experiment 6b) the third with -6.0 (again, we confirmed by additional experiments that the precise values hardly matter).

Results. With a test set consisting of 2560 randomly chosen sequences, the average test set error was always below 0.1, and there were never more than 3 incorrectly classified sequences. Table 9 shows details.

The experiment shows that LSTM is able to extract information conveyed by the temporal order of widely separated inputs. In Task 6a, for instance, the delays between first and second relevant input and between second relevant input and sequence end are at least 30 time steps.
task | # weights | # wrong predictions | Success after
Task 6a | 156 | 1 out of 2560 |  31,390
Task 6b | 308 | 2 out of 2560 | 571,100

Table 9: EXPERIMENT 6: Results for the Temporal Order Problem. "# wrong predictions" is the number of incorrectly classified sequences (error > 0.3 for at least one output unit) from a test set containing 2560 sequences. The rightmost column gives the number of training sequences required to achieve the stopping criterion. The results for Task 6a are means of 20 trials; those for Task 6b of 10 trials.
Typical solutions. In Experiment 6a, how does LSTM distinguish between temporal orders (X, Y) and (Y, X)? One of many possible solutions is to store the first X or Y in cell block 1, and the second X/Y in cell block 2. Before the first X/Y occurs, block 1 can see that it is still empty by means of its recurrent connections. After the first X/Y, block 1 can close its input gate. Once block 1 is filled and closed, this fact will become visible to block 2 (recall that all gate units and all memory cells receive connections from all non-output units).

Typical solutions, however, require only one memory cell block. The block stores the first X or Y; once the second X/Y occurs, it changes its state depending on the first stored symbol. Solution type 1 exploits the connection between memory cell output and input gate unit: the following events cause different input gate activations: "X occurs in conjunction with a filled block"; "X occurs in conjunction with an empty block". Solution type 2 is based on a strong positive connection between memory cell output and memory cell input. The previous occurrence of X (Y) is represented by a positive (negative) internal state. Once the input gate opens for the second time, so does the output gate, and the memory cell output is fed back to its own input. This causes (X, Y) to be represented by a positive internal state, because X contributes to the new internal state twice (via current internal state and cell output feedback). Similarly, (Y, X) gets represented by a negative internal state.
5.7 SUMMARY OF EXPERIMENTAL CONDITIONS

The two tables in this subsection provide an overview of the most important LSTM parameters and architectural details for Experiments 1-6. The conditions of the simple experiments 2a and 2b differ slightly from those of the other, more systematic experiments, due to historical reasons.

Task | p | lag | b | s | in | out | w | c | ogb | igb | bias | h | g | alpha
1-1  |    9 |    9 | 4 | 1 |    7 |   7 |   264 | F | -1,-2,-3,-4 | r | ga | h1 | g2 | 0.1
1-2  |    9 |    9 | 3 | 2 |    7 |   7 |   276 | F | -1,-2,-3 | r | ga | h1 | g2 | 0.1
1-3  |    9 |    9 | 3 | 2 |    7 |   7 |   276 | F | -1,-2,-3 | r | ga | h1 | g2 | 0.2
1-4  |    9 |    9 | 4 | 1 |    7 |   7 |   264 | F | -1,-2,-3,-4 | r | ga | h1 | g2 | 0.5
1-5  |    9 |    9 | 3 | 2 |    7 |   7 |   276 | F | -1,-2,-3 | r | ga | h1 | g2 | 0.5
2a   |  100 |  100 | 1 | 1 |  101 | 101 | 10504 | B | no og | none | none | id | g1 | 1.0
2b   |  100 |  100 | 1 | 1 |  101 | 101 | 10504 | B | no og | none | none | id | g1 | 1.0
2c-1 |   50 |   50 | 2 | 1 |   54 |   2 |   364 | F | none | none | none | h1 | g2 | 0.01
2c-2 |  100 |  100 | 2 | 1 |  104 |   2 |   664 | F | none | none | none | h1 | g2 | 0.01
2c-3 |  200 |  200 | 2 | 1 |  204 |   2 |  1264 | F | none | none | none | h1 | g2 | 0.01
2c-4 |  500 |  500 | 2 | 1 |  504 |   2 |  3064 | F | none | none | none | h1 | g2 | 0.01
2c-5 | 1000 | 1000 | 2 | 1 | 1004 |   2 |  6064 | F | none | none | none | h1 | g2 | 0.01
2c-6 | 1000 | 1000 | 2 | 1 |  504 |   2 |  3064 | F | none | none | none | h1 | g2 | 0.01
2c-7 | 1000 | 1000 | 2 | 1 |  204 |   2 |  1264 | F | none | none | none | h1 | g2 | 0.01
2c-8 | 1000 | 1000 | 2 | 1 |  104 |   2 |   664 | F | none | none | none | h1 | g2 | 0.01
2c-9 | 1000 | 1000 | 2 | 1 |   54 |   2 |   364 | F | none | none | none | h1 | g2 | 0.01
3a   |  100 |  100 | 3 | 1 |    1 |   1 |   102 | F | -2,-4,-6 | -1,-3,-5 | b1 | h1 | g2 | 1.0
3b   |  100 |  100 | 3 | 1 |    1 |   1 |   102 | F | -2,-4,-6 | -1,-3,-5 | b1 | h1 | g2 | 1.0
3c   |  100 |  100 | 3 | 1 |    1 |   1 |   102 | F | -2,-4,-6 | -1,-3,-5 | b1 | h1 | g2 | 0.1
4-1  |  100 |   50 | 2 | 2 |    2 |   1 |    93 | F | r | -3,-6 | all | h1 | g2 | 0.5
4-2  |  500 |  250 | 2 | 2 |    2 |   1 |    93 | F | r | -3,-6 | all | h1 | g2 | 0.5
4-3  | 1000 |  500 | 2 | 2 |    2 |   1 |    93 | F | r | -3,-6 | all | h1 | g2 | 0.5
5    |  100 |   50 | 2 | 2 |    2 |   1 |    93 | F | r | r | all | h1 | g2 | 0.1
6a   |  100 |   40 | 2 | 2 |    8 |   4 |   156 | F | r | -2,-4 | all | h1 | g2 | 0.5
6b   |  100 |   24 | 3 | 2 |    8 |   8 |   308 | F | r | -2,-4,-6 | all | h1 | g2 | 0.1

Table 10: Summary of experimental conditions for LSTM, Part I. 1st column: task number. 2nd column: minimal sequence length p. 3rd column: minimal number of steps between most recent relevant input information and teacher signal. 4th column: number of cell blocks b. 5th column: block size s. 6th column: number of input units in. 7th column: number of output units out. 8th column: number of weights w. 9th column: c describes connectivity: "F" means "output layer receives connections from memory cells; memory cells and gate units receive connections from input units, memory cells and gate units"; "B" means "each layer receives connections from all layers below". 10th column: initial output gate bias ogb, where "r" stands for "randomly chosen from the interval [-0.1, 0.1]" and "no og" means "no output gate used". 11th column: initial input gate bias igb (see 10th column). 12th column: which units have bias weights? "b1" stands for "all hidden units", "ga" for "only gate units", and "all" for "all non-input units". 13th column: the function h, where "id" is the identity function and "h1" is the logistic sigmoid in [-1, 1]. 14th column: the function g, where "g1" is the sigmoid in [0, 1] and "g2" the sigmoid in [-2, 2]. 15th column: learning rate alpha.
Task | select | interval | test set size | stopping criterion | success
1  | t1 | [-0.2, 0.2] | 256 | training & test correctly pred. | see text
2a | t1 | [-0.2, 0.2] | no test set | after 5 million exemplars | ABS(0.25)
2b | t2 | [-0.2, 0.2] | 10000 | after 5 million exemplars | ABS(0.25)
2c | t2 | [-0.2, 0.2] | 10000 | after 5 million exemplars | ABS(0.2)
3a | t3 | [-0.1, 0.1] | 2560 | ST1 and ST2 (see text) | ABS(0.2)
3b | t3 | [-0.1, 0.1] | 2560 | ST1 and ST2 (see text) | ABS(0.2)
3c | t3 | [-0.1, 0.1] | 2560 | ST1 and ST2 (see text) | see text
4  | t3 | [-0.1, 0.1] | 2560 | ST3(0.01) | ABS(0.04)
5  | t3 | [-0.1, 0.1] | 2560 | see text | ABS(0.04)
6a | t3 | [-0.1, 0.1] | 2560 | ST3(0.1) | ABS(0.3)
6b | t3 | [-0.1, 0.1] | 2560 | ST3(0.1) | ABS(0.3)

Table 11: Summary of experimental conditions for LSTM, Part II. 1st column: task number. 2nd column: training exemplar selection, where "t1" stands for "randomly chosen from training set", "t2" for "randomly chosen from 2 classes", and "t3" for "randomly generated on-line". 3rd column: weight initialization interval. 4th column: test set size. 5th column: stopping criterion for training, where "ST3(epsilon)" stands for "average training error below epsilon and the 2000 most recent sequences were processed correctly". 6th column: success (correct classification) criterion, where "ABS(epsilon)" stands for "absolute error of all output units at sequence end is below epsilon".
6 DISCUSSION

Limitations of LSTM.

- The particularly efficient truncated backprop version of the LSTM algorithm will not easily solve problems similar to "strongly delayed XOR problems", where the goal is to compute the XOR of two widely separated inputs that previously occurred somewhere in a noisy sequence. The reason is that storing only one of the inputs will not help to reduce the expected error; the task is non-decomposable in the sense that it is impossible to incrementally reduce the error by first solving an easier subgoal. In theory, this limitation can be circumvented by using the full gradient (perhaps with additional conventional hidden units receiving input from the memory cells). But we do not recommend computing the full gradient for the following reasons: (1) It increases computational complexity. (2) Constant error flow through CECs can be shown only for truncated LSTM. (3) We actually did conduct a few experiments with non-truncated LSTM. There was no significant difference to truncated LSTM, exactly because outside the CECs error flow tends to vanish quickly. For the same reason full BPTT does not outperform truncated BPTT.

- Each memory cell block needs two additional units (input and output gate). In comparison to standard recurrent nets, however, this does not increase the number of weights by more than a factor of 9: each conventional hidden unit is replaced by at most 3 units in the LSTM architecture, increasing the number of weights by a factor of 3^2 in the fully connected case. Note, however, that our experiments use quite comparable weight numbers for the architectures of LSTM and competing approaches.

- Generally speaking, due to its constant error flow through CECs within memory cells, LSTM runs into problems similar to those of feedforward nets seeing the entire input string at once. For instance, there are tasks that can be quickly solved by random weight guessing but not by the truncated LSTM algorithm with small weight initializations, such as the 500-step parity problem (see introduction to Section 5). Here, LSTM's problems are similar to the ones of a feedforward net with 500 inputs, trying to solve 500-bit parity. Indeed LSTM typically behaves much like a feedforward net trained by backprop that sees the entire input. But that's also precisely why it so clearly outperforms previous approaches on many non-trivial tasks with significant search spaces.

- LSTM does not have any problems with the notion of "recency" that go beyond those of other approaches. All gradient-based approaches, however, suffer from a practical inability to precisely count discrete time steps. If it makes a difference whether a certain signal occurred 99 or 100 steps ago, then an additional counting mechanism seems necessary. Easier tasks, however, such as one that only requires to make a difference between, say, 3 and 11 steps, do not pose any problems to LSTM. For instance, by generating an appropriate negative connection between memory cell output and input, LSTM can give more weight to recent inputs and learn decays where necessary.

Advantages of LSTM.

- The constant error backpropagation within memory cells results in LSTM's ability to bridge very long time lags in case of problems similar to those discussed above.

- For long time lag problems such as those discussed in this paper, LSTM can handle noise, distributed representations, and continuous values. In contrast to finite state automata or hidden Markov models, LSTM does not require an a priori choice of a finite number of states. In principle it can deal with unlimited state numbers.

- For problems discussed in this paper LSTM generalizes well, even if the positions of widely separated, relevant inputs in the input sequence do not matter. Unlike previous approaches, ours quickly learns to distinguish between two or more widely separated occurrences of a particular element in an input sequence, without depending on appropriate short time lag training exemplars.

- There appears to be no need for parameter fine tuning. LSTM works well over a broad range of parameters such as learning rate, input gate bias and output gate bias. For instance, to some readers the learning rates used in our experiments may seem large. However, a large learning rate pushes the output gates towards zero, thus automatically countermanding its own negative effects.

- The LSTM algorithm's update complexity per weight and time step is essentially that of BPTT, namely O(1). This is excellent in comparison to other approaches such as RTRL. Unlike full BPTT, however, LSTM is local in both space and time.
7 CONCLUSION

Each memory cell's internal architecture guarantees constant error flow within its constant error carrousel CEC, provided that truncated backprop cuts off error flow trying to leak out of memory cells. This represents the basis for bridging very long time lags. Two gate units learn to open and close access to error flow within each memory cell's CEC. The multiplicative input gate affords protection of the CEC from perturbation by irrelevant inputs. Likewise, the multiplicative output gate protects other units from perturbation by currently irrelevant memory contents.

Future work. To find out about LSTM's practical limitations we intend to apply it to real world data. Application areas will include (1) time series prediction, (2) music composition, and (3) speech processing. It will also be interesting to augment sequence chunkers (Schmidhuber 1992b, 1993) by LSTM to combine the advantages of both.
8 ACKNOWLEDGMENTS

Thanks to Mike Mozer, Wilfried Brauer, Nic Schraudolph, and several anonymous referees for valuable comments and suggestions that helped to improve a previous version of this paper (Hochreiter and Schmidhuber 1995). This work was supported by DFG grant SCHM 942/3-1 from "Deutsche Forschungsgemeinschaft".
APPENDIX

A.1 ALGORITHM DETAILS

In what follows, the index k ranges over output units, i ranges over hidden units, c_j stands for the j-th memory cell block, c_j^v denotes the v-th unit of memory cell block c_j, u, l, m stand for arbitrary units, and t ranges over time steps of a given input sequence.
The gate unit logistic sigmoid (with range [0, 1]) used in the experiments is

    f(x) = \frac{1}{1 + e^{-x}} .    (3)

The function h (with range [-1, 1]) used in the experiments is

    h(x) = \frac{2}{1 + e^{-x}} - 1 .    (4)

The function g (with range [-2, 2]) used in the experiments is

    g(x) = \frac{4}{1 + e^{-x}} - 2 .    (5)
Forward pass.

The net input and the activation of hidden unit i are

    net_i(t) = \sum_u w_{iu} y^u(t-1) , \quad y^i(t) = f_i(net_i(t)) .    (6)

The net input and the activation of the input gate in_j are

    net_{in_j}(t) = \sum_u w_{in_j u} y^u(t-1) , \quad y^{in_j}(t) = f_{in_j}(net_{in_j}(t)) .    (7)

The net input and the activation of the output gate out_j are

    net_{out_j}(t) = \sum_u w_{out_j u} y^u(t-1) , \quad y^{out_j}(t) = f_{out_j}(net_{out_j}(t)) .    (8)

The net input net_{c_j^v}, the internal state s_{c_j^v}, and the output activation y^{c_j^v} of the v-th memory cell of memory cell block c_j are:

    net_{c_j^v}(t) = \sum_u w_{c_j^v u} y^u(t-1) ,    (9)
    s_{c_j^v}(t) = s_{c_j^v}(t-1) + y^{in_j}(t) \, g(net_{c_j^v}(t)) ,
    y^{c_j^v}(t) = y^{out_j}(t) \, h(s_{c_j^v}(t)) .

The net input and the activation of output unit k are

    net_k(t) = \sum_{u:\, u \text{ not a gate}} w_{ku} y^u(t-1) , \quad y^k(t) = f_k(net_k(t)) .
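To make the forward pass concrete, here is a minimal executable sketch of equations (3)-(9) for a single memory cell block of size 1 (plain Python; all names are ours, and the bookkeeping of a full network is omitted):

    import math

    def f(x):                       # gate/output sigmoid, range (0, 1), eq. (3)
        return 1.0 / (1.0 + math.exp(-x))

    def h(x):                       # cell output squashing, range (-1, 1), eq. (4)
        return 2.0 / (1.0 + math.exp(-x)) - 1.0

    def g(x):                       # cell input squashing, range (-2, 2), eq. (5)
        return 4.0 / (1.0 + math.exp(-x)) - 2.0

    def forward_step(y_prev, s_prev, w_in, w_out, w_c):
        """One time step for one size-1 cell block.

        y_prev: activations y^u(t-1) of all source units;
        s_prev: internal state s_c(t-1); w_in, w_out, w_c: weight vectors."""
        net_in = sum(w * y for w, y in zip(w_in, y_prev))    # eq. (7)
        y_in = f(net_in)
        net_out = sum(w * y for w, y in zip(w_out, y_prev))  # eq. (8)
        y_out = f(net_out)
        net_c = sum(w * y for w, y in zip(w_c, y_prev))      # eq. (9)
        s = s_prev + y_in * g(net_c)                         # additive CEC state
        y_c = y_out * h(s)                                   # gated cell output
        return y_c, s, (net_in, net_out, net_c)

Note that the state update's derivative with respect to s_prev is exactly 1, independent of t: this is the constant error carrousel that the backward pass below preserves.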
The backward pass to be described later is based on the following truncated backprop formulae.

Approximate derivatives for truncated backprop. The truncated version (see Section 4) only approximates the partial derivatives, which is reflected by the "\approx_{tr}" signs in the notation below. It truncates error flow once it leaves memory cells or gate units. Truncation ensures that there are no loops across which an error that left some memory cell through its input or input gate can reenter the cell through its output or output gate. This in turn ensures constant error flow through the memory cell's CEC.

In the truncated backprop version, the following derivatives are replaced by zero:

    \frac{\partial net_{in_j}(t)}{\partial y^u(t-1)} \approx_{tr} 0 \; \forall u , \quad
    \frac{\partial net_{out_j}(t)}{\partial y^u(t-1)} \approx_{tr} 0 \; \forall u , \quad
    \frac{\partial net_{c_j}(t)}{\partial y^u(t-1)} \approx_{tr} 0 \; \forall u .

Therefore we get

    \frac{\partial y^{in_j}(t)}{\partial y^u(t-1)} = f'_{in_j}(net_{in_j}(t)) \frac{\partial net_{in_j}(t)}{\partial y^u(t-1)} \approx_{tr} 0 \; \forall u ,

    \frac{\partial y^{out_j}(t)}{\partial y^u(t-1)} = f'_{out_j}(net_{out_j}(t)) \frac{\partial net_{out_j}(t)}{\partial y^u(t-1)} \approx_{tr} 0 \; \forall u ,

and

    \frac{\partial y^{c_j}(t)}{\partial y^u(t-1)} =
      \frac{\partial y^{c_j}(t)}{\partial net_{out_j}(t)} \frac{\partial net_{out_j}(t)}{\partial y^u(t-1)}
      + \frac{\partial y^{c_j}(t)}{\partial net_{in_j}(t)} \frac{\partial net_{in_j}(t)}{\partial y^u(t-1)}
      + \frac{\partial y^{c_j}(t)}{\partial net_{c_j}(t)} \frac{\partial net_{c_j}(t)}{\partial y^u(t-1)}
      \approx_{tr} 0 \; \forall u .

This implies for all w_{lm} not on connections to c_j^v, in_j, out_j (that is, l \notin \{c_j^v, in_j, out_j\}):

    \frac{\partial y^{c_j^v}(t)}{\partial w_{lm}} = \sum_u \frac{\partial y^{c_j^v}(t)}{\partial y^u(t-1)} \frac{\partial y^u(t-1)}{\partial w_{lm}} \approx_{tr} 0 .
The truncated derivatives of output unit k are:

    \frac{\partial y^k(t)}{\partial w_{lm}} = f'_k(net_k(t)) \frac{\partial net_k(t)}{\partial w_{lm}} \approx_{tr}
    f'_k(net_k(t))
    \begin{cases}
      y^m(t-1) & l = k \\
      \sum_{v=1}^{S_j} w_{k c_j^v} \frac{\partial y^{c_j^v}(t-1)}{\partial w_{lm}} & l = in_j \text{ or } l = out_j \text{ or } l = c_j^v \\
      \sum_{i:\, i \text{ hidden unit}} w_{ki} \frac{\partial y^i(t-1)}{\partial w_{lm}} & \text{otherwise}
    \end{cases}    (10)

where \delta is the Kronecker delta (\delta_{ab} = 1 if a = b and 0 otherwise), and S_j is the size of memory cell block c_j. The truncated derivatives of a hidden unit i that is not part of a memory cell are:

    \frac{\partial y^i(t)}{\partial w_{lm}} = f'_i(net_i(t)) \frac{\partial net_i(t)}{\partial w_{lm}} \approx_{tr} \delta_{li} f'_i(net_i(t)) \, y^m(t-1) .    (11)

(Note: here it would be possible to use the full gradient without affecting constant error flow through internal states of memory cells.)
Cell block c_j's truncated derivatives are:

    \frac{\partial y^{in_j}(t)}{\partial w_{lm}} = f'_{in_j}(net_{in_j}(t)) \frac{\partial net_{in_j}(t)}{\partial w_{lm}} \approx_{tr} \delta_{in_j l} f'_{in_j}(net_{in_j}(t)) \, y^m(t-1) .    (12)

    \frac{\partial y^{out_j}(t)}{\partial w_{lm}} = f'_{out_j}(net_{out_j}(t)) \frac{\partial net_{out_j}(t)}{\partial w_{lm}} \approx_{tr} \delta_{out_j l} f'_{out_j}(net_{out_j}(t)) \, y^m(t-1) .    (13)

    \frac{\partial s_{c_j^v}(t)}{\partial w_{lm}} = \frac{\partial s_{c_j^v}(t-1)}{\partial w_{lm}}
      + \frac{\partial y^{in_j}(t)}{\partial w_{lm}} g(net_{c_j^v}(t))
      + y^{in_j}(t) \, g'(net_{c_j^v}(t)) \frac{\partial net_{c_j^v}(t)}{\partial w_{lm}}
    \approx_{tr} (\delta_{in_j l} + \delta_{c_j^v l}) \frac{\partial s_{c_j^v}(t-1)}{\partial w_{lm}}
      + \delta_{in_j l} \frac{\partial y^{in_j}(t)}{\partial w_{lm}} g(net_{c_j^v}(t))
      + \delta_{c_j^v l} \, y^{in_j}(t) \, g'(net_{c_j^v}(t)) \frac{\partial net_{c_j^v}(t)}{\partial w_{lm}}
    = (\delta_{in_j l} + \delta_{c_j^v l}) \frac{\partial s_{c_j^v}(t-1)}{\partial w_{lm}}
      + \delta_{in_j l} f'_{in_j}(net_{in_j}(t)) \, g(net_{c_j^v}(t)) \, y^m(t-1)
      + \delta_{c_j^v l} \, y^{in_j}(t) \, g'(net_{c_j^v}(t)) \, y^m(t-1) .    (14)

    \frac{\partial y^{c_j^v}(t)}{\partial w_{lm}} = \frac{\partial y^{out_j}(t)}{\partial w_{lm}} h(s_{c_j^v}(t))
      + h'(s_{c_j^v}(t)) \frac{\partial s_{c_j^v}(t)}{\partial w_{lm}} y^{out_j}(t)
    \approx_{tr} \delta_{out_j l} \frac{\partial y^{out_j}(t)}{\partial w_{lm}} h(s_{c_j^v}(t))
      + (\delta_{in_j l} + \delta_{c_j^v l}) \, h'(s_{c_j^v}(t)) \frac{\partial s_{c_j^v}(t)}{\partial w_{lm}} y^{out_j}(t) .    (15)

To efficiently update the system at time t, the only (truncated) derivatives that need to be stored at time t-1 are \partial s_{c_j^v}(t-1) / \partial w_{lm}, where l = c_j^v or l = in_j.
Backward pass. We will describe the backward pass only for the particularly efficient "truncated gradient version" of the LSTM algorithm. For simplicity we will use equal signs even where approximations are made according to the truncated backprop equations above.

The squared error at time t is given by

    E(t) = \sum_{k:\, k \text{ output unit}} \left( t^k(t) - y^k(t) \right)^2 ,    (16)

where t^k(t) is output unit k's target at time t. Time t's contribution to w_{lm}'s gradient-based update with learning rate \alpha is

    \Delta w_{lm}(t) = -\alpha \frac{\partial E(t)}{\partial w_{lm}} .    (17)

We define some unit l's error at time step t by

    e_l(t) := -\frac{\partial E(t)}{\partial net_l(t)} .    (18)

Using (almost) standard backprop, we first compute updates for weights to output units (l = k), weights to hidden units (l = i) and weights to output gates (l = out_j). We obtain (compare formulae (10), (11), (13)):

    l = k \text{ (output)}: \quad e_k(t) = f'_k(net_k(t)) \left( t^k(t) - y^k(t) \right) ,    (19)

    l = i \text{ (hidden)}: \quad e_i(t) = f'_i(net_i(t)) \sum_{k:\, k \text{ output unit}} w_{ki} e_k(t) ,    (20)
    l = out_j \text{ (output gates)}: \quad e_{out_j}(t) = f'_{out_j}(net_{out_j}(t)) \left( \sum_{v=1}^{S_j} h(s_{c_j^v}(t)) \sum_{k:\, k \text{ output unit}} w_{k c_j^v} e_k(t) \right) .    (21)

For all possible l, time t's contribution to w_{lm}'s update is

    \Delta w_{lm}(t) = \alpha \, e_l(t) \, y^m(t-1) .    (22)
The remaining updates for weights to input gates (l = in_j) and to cell units (l = c_j^v) are less conventional. We define some internal state s_{c_j^v}'s error:

    e_{s_{c_j^v}}(t) := -\frac{\partial E(t)}{\partial s_{c_j^v}(t)}
    = f_{out_j}(net_{out_j}(t)) \, h'(s_{c_j^v}(t)) \sum_{k:\, k \text{ output unit}} w_{k c_j^v} e_k(t) .    (23)

We obtain for l = in_j or l = c_j^v, v = 1, ..., S_j:

    -\frac{\partial E(t)}{\partial w_{lm}} = \sum_{v=1}^{S_j} e_{s_{c_j^v}}(t) \frac{\partial s_{c_j^v}(t)}{\partial w_{lm}} .    (24)
The derivatives of the internal states with respect to weights and the corresponding weight updates are as follows (compare expression (14)):

l = in_j (input gates):

    \frac{\partial s_{c_j^v}(t)}{\partial w_{in_j m}} = \frac{\partial s_{c_j^v}(t-1)}{\partial w_{in_j m}} + g(net_{c_j^v}(t)) \, f'_{in_j}(net_{in_j}(t)) \, y^m(t-1) ;    (25)

therefore time t's contribution to w_{in_j m}'s update is (compare expression (10)):

    \Delta w_{in_j m}(t) = \alpha \sum_{v=1}^{S_j} e_{s_{c_j^v}}(t) \frac{\partial s_{c_j^v}(t)}{\partial w_{in_j m}} .    (26)

Similarly we get (compare expression (14)):

l = c_j^v (memory cells):

    \frac{\partial s_{c_j^v}(t)}{\partial w_{c_j^v m}} = \frac{\partial s_{c_j^v}(t-1)}{\partial w_{c_j^v m}} + g'(net_{c_j^v}(t)) \, f_{in_j}(net_{in_j}(t)) \, y^m(t-1) ;    (27)

therefore time t's contribution to w_{c_j^v m}'s update is (compare expression (10)):

    \Delta w_{c_j^v m}(t) = \alpha \, e_{s_{c_j^v}}(t) \frac{\partial s_{c_j^v}(t)}{\partial w_{c_j^v m}} .    (28)

All we need to implement for the backward pass are equations (19), (20), (21), (22), (23), (25), (26), (27), (28). Each weight's total update is the sum of the contributions of all time steps.
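Continuing the forward-pass sketch from A.1, one backward step for the same size-1 cell block and a single logistic output unit can be written directly from equations (19)-(28) (again a sketch with our own names; f, g, h are reused, together with the sigmoid identities f' = f(1-f), h' = 2f(1-f), g' = 4f(1-f)):

    def backward_step(target, y_k, net_k, net_in, net_out, net_c, s,
                      w_kc, y_prev, ds_dw_in, ds_dw_c, alpha):
        """Truncated-backprop contributions of one time step.

        ds_dw_in / ds_dw_c are the stored traces of \partial s / \partial w
        from the previous step (eqs. (25) and (27)); updated in place."""
        fk = f(net_k)
        e_k = fk * (1.0 - fk) * (target - y_k)                    # eq. (19)
        fo = f(net_out)
        e_out = fo * (1.0 - fo) * h(s) * w_kc * e_k               # eq. (21), S_j = 1
        h_prime = 2.0 * f(s) * (1.0 - f(s))                       # h'(s)
        e_s = fo * h_prime * w_kc * e_k                           # eq. (23)
        fi = f(net_in)
        g_prime = 4.0 * f(net_c) * (1.0 - f(net_c))               # g'(net_c)
        dw = {'k': [alpha * e_k * ym for ym in y_prev],           # eq. (22), l = k
              'out': [alpha * e_out * ym for ym in y_prev]}       # eq. (22), l = out_j
        for m, ym in enumerate(y_prev):
            ds_dw_in[m] += g(net_c) * fi * (1.0 - fi) * ym        # eq. (25)
            ds_dw_c[m] += g_prime * fi * ym                       # eq. (27)
        dw['in'] = [alpha * e_s * d for d in ds_dw_in]            # eq. (26)
        dw['c'] = [alpha * e_s * d for d in ds_dw_c]              # eq. (28)
        return dw

Only the two trace vectors survive from step to step, which is exactly the O(W) storage claim made below.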
Computational complexity. LSTM's update complexity per time step is

    O(KH + KCS + HI + CSI) = O(W) ,    (29)

where K is the number of output units, C is the number of memory cell blocks, S > 0 is the size of the memory cell blocks, H is the number of hidden units, I is the (maximal) number of units forward-connected to memory cells, gate units and hidden units, and

    W = KH + KCS + CSI + 2CI + HI = O(KH + KCS + CSI + HI)

is the number of weights. Expression (29) is obtained by considering all computations of the backward pass: equation (19) needs K steps; (20) needs KH steps; (21) needs KSC steps; (22) needs K(H + C) steps for output units, HI steps for hidden units, CI steps for output gates; (23) needs KCS steps; (25) needs CSI steps; (26) needs CSI steps; (27) needs CSI steps; (28) needs CSI steps. The total is K + 2KH + KC + 2KSC + HI + CI + 4CSI steps, or O(KH + KSC + HI + CSI) steps. We conclude: the LSTM algorithm's update complexity per time step is just like BPTT's for a fully recurrent net.

At a given time step, only the 2CSI most recent \partial s_{c_j^v} / \partial w_{lm} values from equations (25) and (27) need to be stored. Hence LSTM's storage complexity also is O(W); it does not depend on the input sequence length.
A.2 ERROR FLOW

We compute how much an error signal is scaled while flowing back through a memory cell for q time steps. As a by-product, this analysis reconfirms that the error flow within a memory cell's CEC is indeed constant, provided that truncated backprop cuts off error flow trying to leave memory cells (see also Section 3.2). The analysis also highlights a potential for undesirable long-term drifts of s_{c_j} (see (2) below), as well as the beneficial, countermanding influence of negatively biased input gates (see (2) and (3) below).

Using the truncated backprop learning rule, we obtain

    \frac{\partial s_{c_j}(t-k)}{\partial s_{c_j}(t-k-1)}
    = 1 + \frac{\partial y^{in_j}(t-k)}{\partial s_{c_j}(t-k-1)} g(net_{c_j}(t-k))
        + y^{in_j}(t-k) \, g'(net_{c_j}(t-k)) \frac{\partial net_{c_j}(t-k)}{\partial s_{c_j}(t-k-1)}
    = 1 + \sum_u \frac{\partial y^{in_j}(t-k)}{\partial y^u(t-k-1)} \frac{\partial y^u(t-k-1)}{\partial s_{c_j}(t-k-1)} g(net_{c_j}(t-k))
        + y^{in_j}(t-k) \, g'(net_{c_j}(t-k)) \sum_u \frac{\partial net_{c_j}(t-k)}{\partial y^u(t-k-1)} \frac{\partial y^u(t-k-1)}{\partial s_{c_j}(t-k-1)}
    \approx_{tr} 1 .    (30)

The \approx_{tr} sign indicates equality due to truncated backprop, which replaces by zero the derivatives \partial y^{in_j}(t-k) / \partial y^u(t-k-1) \; \forall u and \partial net_{c_j}(t-k) / \partial y^u(t-k-1) \; \forall u. In what follows, \vartheta_j(t) stands for the error arriving at c_j's output:

    \vartheta_j(t) := \sum_i w_{i c_j} \vartheta_i(t+1) .    (31)
Following the definitions/conventions of Section 3.1, we compute error flow for the truncated backprop learning rule. The error occurring at the output gate is

    \vartheta_{out_j}(t) \approx_{tr} \frac{\partial y^{out_j}(t)}{\partial net_{out_j}(t)} \frac{\partial y^{c_j}(t)}{\partial y^{out_j}(t)} \, \vartheta_j(t) .    (32)

The error occurring at the internal state is

    \vartheta_{s_{c_j}}(t) = \frac{\partial s_{c_j}(t+1)}{\partial s_{c_j}(t)} \vartheta_{s_{c_j}}(t+1) + \frac{\partial y^{c_j}(t)}{\partial s_{c_j}(t)} \vartheta_j(t) .    (33)

Since we use truncated backprop we have \vartheta_j(t) = \sum_i w_{i c_j} \vartheta_i(t+1); therefore we get

    \frac{\partial \vartheta_j(t)}{\partial \vartheta_{s_{c_j}}(t+1)} = \sum_i w_{i c_j} \frac{\partial \vartheta_i(t+1)}{\partial \vartheta_{s_{c_j}}(t+1)} \approx_{tr} 0 .    (34)
The previous equations (33) and (34) imply constant error flow through internal states of memory cells:

    \frac{\partial \vartheta_{s_{c_j}}(t)}{\partial \vartheta_{s_{c_j}}(t+1)} = \frac{\partial s_{c_j}(t+1)}{\partial s_{c_j}(t)} \approx_{tr} 1 .    (35)

The error occurring at the memory cell input is

    \vartheta_{c_j}(t) = \frac{\partial g(net_{c_j}(t))}{\partial net_{c_j}(t)} \frac{\partial s_{c_j}(t)}{\partial g(net_{c_j}(t))} \, \vartheta_{s_{c_j}}(t) .    (36)

The error occurring at the input gate is

    \vartheta_{in_j}(t) \approx_{tr} \frac{\partial y^{in_j}(t)}{\partial net_{in_j}(t)} \frac{\partial s_{c_j}(t)}{\partial y^{in_j}(t)} \, \vartheta_{s_{c_j}}(t) .    (37)
No external error flow. Errors are propagated back from units l to unit v along outgoing connections with weights w_{lv}. This "external error" (note that for conventional units there is nothing but external error) at time t is

    \vartheta^e_v(t) = \frac{\partial y^v(t)}{\partial net_v(t)} \sum_l \frac{\partial net_l(t+1)}{\partial y^v(t)} \vartheta_l(t+1) .    (38)

We obtain

    \frac{\partial \vartheta^e_v(t-1)}{\partial \vartheta_j(t)}
    = \frac{\partial y^v(t-1)}{\partial net_v(t-1)}
      \left( \frac{\partial \vartheta_{out_j}(t)}{\partial \vartheta_j(t)} \frac{\partial net_{out_j}(t)}{\partial y^v(t-1)}
      + \frac{\partial \vartheta_{in_j}(t)}{\partial \vartheta_j(t)} \frac{\partial net_{in_j}(t)}{\partial y^v(t-1)}
      + \frac{\partial \vartheta_{c_j}(t)}{\partial \vartheta_j(t)} \frac{\partial net_{c_j}(t)}{\partial y^v(t-1)} \right)
    \approx_{tr} 0 .    (39)

We observe: the error \vartheta_j arriving at the memory cell output is not backpropagated to units v via external connections to in_j, out_j, c_j.

Error flow within memory cells. We now focus on the error backflow within a memory cell's CEC.
This is actually the only type of error flow that can bridge several time steps. Suppose error \vartheta_j(t) arrives at c_j's output at time t and is propagated back for q steps until it reaches in_j or the memory cell input g(net_{c_j}). It is scaled by a factor of \partial \vartheta_v(t-q) / \partial \vartheta_j(t), where v = in_j or v = c_j. We first compute

    \frac{\partial \vartheta_{s_{c_j}}(t-q)}{\partial \vartheta_j(t)} \approx_{tr}
    \begin{cases}
      \frac{\partial y^{c_j}(t)}{\partial s_{c_j}(t)} & q = 0 \\
      \frac{\partial s_{c_j}(t-q+1)}{\partial s_{c_j}(t-q)} \frac{\partial \vartheta_{s_{c_j}}(t-q+1)}{\partial \vartheta_j(t)} & q > 0 .
    \end{cases}    (40)

Expanding equation (40), we obtain

    \frac{\partial \vartheta_v(t-q)}{\partial \vartheta_j(t)} \approx_{tr}
    \frac{\partial \vartheta_v(t-q)}{\partial \vartheta_{s_{c_j}}(t-q)} \frac{\partial \vartheta_{s_{c_j}}(t-q)}{\partial \vartheta_j(t)}
    \approx_{tr} \frac{\partial \vartheta_v(t-q)}{\partial \vartheta_{s_{c_j}}(t-q)}
    \left( \prod_{m=1}^{q} \frac{\partial s_{c_j}(t-m+1)}{\partial s_{c_j}(t-m)} \right) \frac{\partial y^{c_j}(t)}{\partial s_{c_j}(t)}
    \approx_{tr} y^{out_j}(t) \, h'(s_{c_j}(t))
    \begin{cases}
      g'(net_{c_j}(t-q)) \, y^{in_j}(t-q) & v = c_j \\
      g(net_{c_j}(t-q)) \, f'_{in_j}(net_{in_j}(t-q)) & v = in_j .
    \end{cases}    (41)

Consider the factors in the previous equation's last expression. Obviously, error flow is scaled only at times t (when it enters the cell) and t-q (when it leaves the cell), but not in between (constant error flow through the CEC). We observe:
(1) The output gate's effect is: y^{out_j}(t) scales down those errors that can be reduced early during training without using the memory cell. Likewise, it scales down those errors resulting from using (activating/deactivating) the memory cell at later training stages; without the output gate, the memory cell might for instance suddenly start causing avoidable errors in situations that already seemed under control (because it was easy to reduce the corresponding errors without memory cells). See "output weight conflict" and "abuse problem" in Sections 3/4.
(2) h'(s_{c_j}(t)): if the internal state s_{c_j} has drifted to large positive or negative values since the error entered the cell, then h'(s_{c_j}(t)) may be small (recall that h is a saturating sigmoid; see Section 4). Such drifts of the internal state can be countermanded by negatively biasing the input gate in_j (see Section 4's remedy for state drifts).

(3) y^{in_j}(t-q) and f'_{in_j}(net_{in_j}(t-q)): if the input gate is negatively biased, these factors may scale down LSTM's overall error flow, but not in a manner that depends on the length of the time lag. The flow will still be much more effective than an exponentially (of order q) decaying flow without memory cells.
References

Almeida, L. B. (1987). A learning rule for asynchronous perceptrons with feedback in a combinatorial environment. In IEEE 1st International Conference on Neural Networks, San Diego, volume 2, pages 609-618.

Baldi, P. and Pineda, F. (1991). Contrastive learning and neural oscillator. Neural Computation, 3:526-545.

Bengio, Y. and Frasconi, P. (1994). Credit assignment through time: Alternatives to backpropagation. In Cowan, J. D., Tesauro, G., and Alspector, J., editors, Advances in Neural Information Processing Systems 6, pages 75-82. San Mateo, CA: Morgan Kaufmann.

Bengio, Y., Simard, P., and Frasconi, P. (1994). Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2):157-166.

Cleeremans, A., Servan-Schreiber, D., and McClelland, J. L. (1989). Finite-state automata and simple recurrent networks. Neural Computation, 1:372-381.

de Vries, B. and Principe, J. C. (1991). A theory for neural networks with time delays. In Lippmann, R. P., Moody, J. E., and Touretzky, D. S., editors, Advances in Neural Information Processing Systems 3, pages 162-168. San Mateo, CA: Morgan Kaufmann.

Doya, K. (1992). Bifurcations in the learning of recurrent neural networks. In Proceedings of 1992 IEEE International Symposium on Circuits and Systems, pages 2777-2780.

Doya, K. and Yoshizawa, S. (1989). Adaptive neural oscillator using continuous-time backpropagation learning. Neural Networks, 2:375-385.

Elman, J. L. (1988). Finding structure in time. Technical Report CRL 8801, Center for Research in Language, University of California, San Diego.

Fahlman, S. E. (1991). The recurrent cascade-correlation learning algorithm. In Lippmann, R. P., Moody, J. E., and Touretzky, D. S., editors, Advances in Neural Information Processing Systems 3, pages 190-196. San Mateo, CA: Morgan Kaufmann.

Hochreiter, S. (1991). Untersuchungen zu dynamischen neuronalen Netzen (Studies on dynamic neural networks). Diploma thesis, Institut fur Informatik, Lehrstuhl Prof. Brauer, Technische Universitat Munchen. See www7.informatik.tu-muenchen.de/~hochreit.

Hochreiter, S. and Schmidhuber, J. (1995). Long short-term memory. Technical Report FKI-207-95, Fakultat fur Informatik, Technische Universitat Munchen.

Hochreiter, S. and Schmidhuber, J. (1996). Bridging long time lags by weight guessing and "Long Short-Term Memory". In Silva, F. L., Principe, J. C., and Almeida, L. B., editors, Spatiotemporal models in biological and artificial systems, pages 65-72. IOS Press, Amsterdam, Netherlands. Series: Frontiers in Artificial Intelligence and Applications, Volume 37.

Hochreiter, S. and Schmidhuber, J. (1997). LSTM can solve hard long time lag problems. In Advances in Neural Information Processing Systems 9. MIT Press, Cambridge, MA. Presented at NIPS 96.

Lang, K., Waibel, A., and Hinton, G. E. (1990). A time-delay neural network architecture for isolated word recognition. Neural Networks, 3:23-43.

Miller, C. B. and Giles, C. L. (1993). Experimental comparison of the effect of order in recurrent neural networks. International Journal of Pattern Recognition and Artificial Intelligence, 7(4):849-872.

Mozer, M. C. (1989). A focused back-propagation algorithm for temporal sequence recognition. Complex Systems, 3:349-381.

Mozer, M. C. (1992). Induction of multiscale temporal structure. In Lippman, D. S., Moody, J. E., and Touretzky, D. S., editors, Advances in Neural Information Processing Systems 4, pages 275-282. San Mateo, CA: Morgan Kaufmann.

Pearlmutter, B. A. (1989). Learning state space trajectories in recurrent neural networks. Neural Computation, 1(2):263-269.

Pearlmutter, B. A. (1995). Gradient calculations for dynamic recurrent neural networks: A survey. IEEE Transactions on Neural Networks, 6(5):1212-1228.

Pineda, F. J. (1987). Generalization of back-propagation to recurrent neural networks. Physical Review Letters, 19(59):2229-2232.

Pineda, F. J. (1988). Dynamics and architecture for neural computation. Journal of Complexity, 4:216-245.

Plate, T. A. (1993). Holographic recurrent networks. In S. J. Hanson, J. D. C. and Giles, C. L., editors, Advances in Neural Information Processing Systems 5, pages 34-41. San Mateo, CA: Morgan Kaufmann.

Pollack, J. B. (1991). Language induction by phase transition in dynamical recognizers. In Lippmann, R. P., Moody, J. E., and Touretzky, D. S., editors, Advances in Neural Information Processing Systems 3, pages 619-626. San Mateo, CA: Morgan Kaufmann.

Puskorius, G. V. and Feldkamp, L. A. (1994). Neurocontrol of nonlinear dynamical systems with Kalman filter trained recurrent networks. IEEE Transactions on Neural Networks, 5(2):279-297.

Ring, M. B. (1993). Learning sequential tasks by incrementally adding higher orders. In S. J. Hanson, J. D. C. and Giles, C. L., editors, Advances in Neural Information Processing Systems 5, pages 115-122. Morgan Kaufmann.

Robinson, A. J. and Fallside, F. (1987). The utility driven dynamic error propagation network. Technical Report CUED/F-INFENG/TR.1, Cambridge University Engineering Department.

Schmidhuber, J. (1989). The Neural Bucket Brigade: A local learning algorithm for dynamic feedforward and recurrent networks. Connection Science, 1(4):403-412.

Schmidhuber, J. (1992a). A fixed size storage O(n^3) time complexity learning algorithm for fully recurrent continually running networks. Neural Computation, 4(2):243-248.

Schmidhuber, J. (1992b). Learning complex, extended sequences using the principle of history compression. Neural Computation, 4(2):234-242.

Schmidhuber, J. (1992c). Learning unambiguous reduced sequence descriptions. In Moody, J. E., Hanson, S. J., and Lippman, R. P., editors, Advances in Neural Information Processing Systems 4, pages 291-298. San Mateo, CA: Morgan Kaufmann.

Schmidhuber, J. (1993). Netzwerkarchitekturen, Zielfunktionen und Kettenregel (Network architectures, objective functions, and chain rule). Habilitation thesis, Institut fur Informatik, Technische Universitat Munchen.

Schmidhuber, J. and Hochreiter, S. (1996). Guessing can outperform many long time lag algorithms. Technical Report IDSIA-19-96, IDSIA.

Silva, G. X., Amaral, J. D., Langlois, T., and Almeida, L. B. (1996). Faster training of recurrent networks. In Silva, F. L., Principe, J. C., and Almeida, L. B., editors, Spatiotemporal models in biological and artificial systems, pages 168-175. IOS Press, Amsterdam, Netherlands. Series: Frontiers in Artificial Intelligence and Applications, Volume 37.

Smith, A. W. and Zipser, D. (1989). Learning sequential structures with the real-time recurrent learning algorithm. International Journal of Neural Systems, 1(2):125-131.

Sun, G., Chen, H., and Lee, Y. (1993). Time warping invariant neural networks. In S. J. Hanson, J. D. C. and Giles, C. L., editors, Advances in Neural Information Processing Systems 5, pages 180-187. San Mateo, CA: Morgan Kaufmann.

Watrous, R. L. and Kuhn, G. M. (1992). Induction of finite-state languages using second-order recurrent networks. Neural Computation, 4:406-414.

Werbos, P. J. (1988). Generalization of backpropagation with application to a recurrent gas market model. Neural Networks, 1.

Williams, R. J. (1989). Complexity of exact gradient computation algorithms for recurrent neural networks. Technical Report NU-CCS-89-27, Boston: Northeastern University, College of Computer Science.

Williams, R. J. and Peng, J. (1990). An efficient gradient-based algorithm for on-line training of recurrent network trajectories. Neural Computation, 4:491-501.

Williams, R. J. and Zipser, D. (1992). Gradient-based learning algorithms for recurrent networks and their computational complexity. In Back-propagation: Theory, Architectures and Applications. Hillsdale, NJ: Erlbaum.