@beniz I just prefer not to be locked into CUDA, and AMD cards are also a decent bit cheaper, which is nice.
Plus, there are two big things on the horizon that make me want to keep the option of AMD cards open.
1) Native FP16 support on Polaris consumer cards. Pascal supports native FP16 too, but it's crippled on all consumer cards, including the Titan X. Even without compute acceleration, FP16 storage effectively doubles memory capacity and bandwidth.
2) Vega cards, coming in Q4 2016/Q1 2017, are rumored to support FP16 as well, and there's a decent chance they might support 2:1 compute acceleration. More importantly, it is confirmed that they will use HBM2, which gives a very large bandwidth boost (up to 1TB/s) and would let us get work done quite a bit faster.
@beniz Actually, DD is probably the biggest thing I need. Being able to set up a server and then work from anywhere I have internet access with my laptop is a big deal, and one of the reasons I started using DD. I'm also thinking about creating a web demo with DD that anyone can use; DD is the only open source deep learning REST API I could find.
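To illustrate why the REST API matters for a web demo: a DD prediction is just a JSON body POSTed to the server's `/predict` endpoint, so any web frontend can call it. A minimal sketch of building such a request body (the service name and image URL here are made-up placeholders, not from this thread):

```python
import json

# Hypothetical DeepDetect /predict request body. "imageserv" would be a
# service created earlier via PUT /services/imageserv; the URL is a placeholder.
payload = {
    "service": "imageserv",                   # name of an existing DD service
    "parameters": {
        "output": {"best": 3}                 # return the top-3 predicted classes
    },
    "data": ["https://example.com/cat.jpg"],  # inputs consumed by the service's connector
}

body = json.dumps(payload)
print(body)
# This JSON would be POSTed to http://<dd-server>:8080/predict
```

Since it's plain HTTP+JSON, the laptop only needs a browser or a few lines of client code, while the GPU box does the heavy lifting.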