Uncanny AI considers how trust can be engendered in users when Artificial Intelligence (AI) is used to provide services directly to them. It builds upon the notion of the Turing Red Flag Law, whereby AI systems would be required to be designed so that their operation is legible to users. Whilst such a law would appear a rather blunt instrument, it does highlight a need for research that explores what legible (as opposed to transparent) AI systems would be. Through the creation of a series of speculative design artefacts, which relate to a variety of contexts of use and potential user groups, the project explores how AI is understood and what approaches can be used to make the different activities within AI more legible.