When training a deep learning model, the dataset is of great importance for ensuring that the model learns relevant features of the data and is able to generalize to new data. However, it is typically difficult to produce a dataset that is free of bias toward specific features. Deep learning models used in histopathology tend to overfit to the stain appearance of the training data: if a model is trained on data from a single lab, it usually fails to generalize to data from other labs. The standard technique for overcoming this problem is color augmentation of the training data, which artificially generates more stain variations for the network to learn from. In this work we instead test a so-called domain-adversarial neural network, which is designed to prevent the model from becoming biased toward features that are in reality irrelevant, such as the origin of an image. To evaluate the technique, we use four datasets from different hospitals for Gleason grading of prostate cancer. We achieve state-of-the-art results for these datasets, and on two of our three test datasets the approach outperforms color augmentation.
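For illustration, the sketch below shows the core idea behind a domain-adversarial network in PyTorch. It is a minimal, assumed setup in the style of Ganin et al.'s DANN, not the exact architecture used in this work: a gradient reversal layer feeds a domain classifier, so the feature extractor is pushed toward features that do not reveal the image's lab of origin. The feature extractor, layer sizes, and class/domain counts are hypothetical placeholders.

```python
import torch
import torch.nn as nn


class GradientReversal(torch.autograd.Function):
    """Identity on the forward pass; flips and scales the gradient on backward."""

    @staticmethod
    def forward(ctx, x, lambda_):
        ctx.lambda_ = lambda_
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reversing the gradient trains the feature extractor to *confuse*
        # the domain classifier, discouraging domain-specific (e.g. stain) features.
        return -ctx.lambda_ * grad_output, None


class DomainAdversarialNet(nn.Module):
    # num_classes/num_domains are illustrative (e.g. Gleason grade groups, four labs).
    def __init__(self, num_classes=5, num_domains=4, feat_dim=256):
        super().__init__()
        # Hypothetical feature extractor; a real model would use a CNN backbone.
        self.features = nn.Sequential(nn.Flatten(), nn.LazyLinear(feat_dim), nn.ReLU())
        self.label_head = nn.Linear(feat_dim, num_classes)   # diagnostic label
        self.domain_head = nn.Linear(feat_dim, num_domains)  # lab/hospital of origin

    def forward(self, x, lambda_=1.0):
        f = self.features(x)
        class_logits = self.label_head(f)
        domain_logits = self.domain_head(GradientReversal.apply(f, lambda_))
        return class_logits, domain_logits


# Minimal usage sketch: both losses are summed, but the reversed gradient means the
# domain loss acts adversarially on the shared features.
model = DomainAdversarialNet()
criterion = nn.CrossEntropyLoss()
images = torch.randn(8, 3, 64, 64)            # dummy batch
labels = torch.randint(0, 5, (8,))            # dummy class labels
domains = torch.randint(0, 4, (8,))           # dummy domain labels
class_logits, domain_logits = model(images, lambda_=1.0)
loss = criterion(class_logits, labels) + criterion(domain_logits, domains)
loss.backward()
```

In this kind of setup the weight `lambda_` controls how strongly the domain classifier's reversed gradient influences the shared features and is typically ramped up during training.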