3D model creation forms a large part of the development process for 3D graphical environments such as games and simulations. If an unsupervised approach could generate high-quality textured models, turnaround times in these areas could be greatly reduced. Advances in generative deep learning have shown that neural networks can capture even complex 3D structures and output novel generations learned from abundant model data. However, existing methods do not incorporate colour channels, an important factor when the generations are to be used in an immersive environment. This paper proposes an extension of the original voxel-based 3D generative adversarial network (GAN) that learns to include colour in its generated samples by adapting the channels of the voxel inputs, followed by the application of marching cubes to translate the voxel-based models into a naive coloured mesh. The method uses unsupervised learning but requires a target data set of 3D textured models. The technique was tested on a sparse collection of open-access textured models: a data set of 24 variant fish models. The outputs of the trained generative model show promising results, learning both the shape and a variety of unique texture patterns.
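
The voxel-to-mesh step mentioned above can be illustrated with a minimal sketch (not the authors' implementation): it assumes a generated sample is a (D, H, W, 4) array with occupancy in the last channel and RGB colour in the first three, runs marching cubes on the occupancy channel, and colours each vertex naively from its nearest voxel. The array shape, channel layout, and threshold are illustrative assumptions.

```python
# Minimal sketch: naive coloured mesh from an RGBA voxel grid (assumed layout).
import numpy as np
from skimage import measure

def voxels_to_coloured_mesh(voxels, threshold=0.5):
    # Assumed layout: channels 0-2 are RGB, channel 3 is occupancy, all in [0, 1].
    occupancy = voxels[..., 3]
    colours = voxels[..., :3]

    # Marching cubes on the occupancy channel yields vertices in voxel coordinates.
    verts, faces, normals, _ = measure.marching_cubes(occupancy, level=threshold)

    # Naive colouring: each vertex takes the colour of its nearest voxel.
    idx = np.clip(np.rint(verts).astype(int), 0, np.array(occupancy.shape) - 1)
    vert_colours = colours[idx[:, 0], idx[:, 1], idx[:, 2]]
    return verts, faces, normals, vert_colours

# Example usage with a random grid standing in for a GAN sample.
sample = np.random.rand(32, 32, 32, 4)
verts, faces, normals, vert_colours = voxels_to_coloured_mesh(sample)
```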