The ability to remember previous tasks while building upon prior knowledge to acquire new skills is a vital aspect of human learning. Artificial intelligence algorithms, however, have traditionally struggled in this respect: they master a specific task but cannot transfer that knowledge to new tasks, or they forget previous skills when learning new ones. We examine ways in which the weights in neural networks can be constrained so that the networks acquire new skills while remembering previous tasks, allowing them to learn multiple tasks sequentially. We test our approach by building a network that learns to play two different Atari 2600 games. Although some approaches lead to varying degrees of forgetting, others yield successful results, even exceeding human performance on some games.
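The weight-constraint idea mentioned above can be illustrated with a minimal sketch. This is an assumption about the general family of methods (a quadratic penalty anchoring weights to their values after a previous task, weighted by a per-weight importance estimate), not the paper's exact formulation; the function and variable names here are hypothetical.

```python
import numpy as np

def penalized_loss(task_loss, weights, old_weights, importance, lam=0.4):
    """Loss for the new task plus a quadratic penalty that discourages
    changing weights that were important for the previous task.

    Hypothetical sketch: `old_weights` are the weights after the previous
    task, `importance` is a per-weight importance estimate, and `lam`
    trades off new-task performance against remembering the old task."""
    penalty = 0.5 * lam * np.sum(importance * (weights - old_weights) ** 2)
    return task_loss + penalty

# Toy usage: for the same deviation, a weight marked important (1.0)
# incurs a penalty, while an unimportant one (0.0) moves freely.
w_old = np.array([1.0, -2.0])
w_new = np.array([1.5, -2.5])
imp = np.array([1.0, 0.0])
total = penalized_loss(0.0, w_new, w_old, imp)
```

With a large `lam`, important weights stay close to their old values (preserving the old skill); with `lam = 0`, training reduces to ordinary fine-tuning and forgetting can occur.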