AWS plans to deploy Cerebras' Wafer-Scale Engine chip for AI inference workloads; AWS will still offer slower, cheaper computing using its Trainium processors

Amazon Web Services says the partnership will allow it to offer lightning-fast inference computing