The implementations build on the reconfigurable hardware accelerator RAPTOR2000, which is based on FPGAs. The investigated neural network architectures are neural associative memories, self-organizing feature maps, and basis function networks. Key implementation issues are considered, in particular the resource efficiency and performance of the presented realizations.
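To make the first of the named architecture classes concrete, the following is a minimal software sketch of a binary (Willshaw-style) neural associative memory. It is purely illustrative: the function names, pattern sizes, and threshold rule are assumptions for this sketch and are not taken from the RAPTOR2000 implementation.

```python
import numpy as np

def train(pairs, n_in, n_out):
    """Store binary (x, y) pattern pairs in a binary weight matrix
    via clipped Hebbian learning: w[i, j] = 1 if y[i] and x[j] co-occur."""
    W = np.zeros((n_out, n_in), dtype=np.uint8)
    for x, y in pairs:
        W |= np.outer(y, x).astype(np.uint8)
    return W

def recall(W, x, threshold):
    """Retrieve an output pattern: a unit fires when the number of
    active, connected inputs reaches the threshold."""
    return (W @ x >= threshold).astype(np.uint8)

# Illustrative patterns (sizes chosen arbitrarily for the sketch).
x = np.array([1, 0, 1, 0, 1, 0], dtype=np.uint8)
y = np.array([0, 1, 1, 0], dtype=np.uint8)
W = train([(x, y)], n_in=6, n_out=4)
print(recall(W, x, threshold=int(x.sum())))  # recovers the stored y
```

The binary weights and simple threshold comparison are what make this architecture class attractive for FPGA realization: storage and recall reduce to bitwise operations and population counts, which map efficiently onto lookup tables.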