Will Enigma be able to support backpropagation and error propagation to enable reinforcement learning for artificial intelligence?
There’s no reason it shouldn’t be able to, though depending on how it’s done it might be rather computationally expensive. However, there do exist some pretty strong projects, such as OpenMined, that could be ported over.
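As a rough illustration of why MPC-based learning is feasible but costly: with additive secret sharing, parties can aggregate their local gradients without any single node seeing an individual contribution. The sketch below is plain Python over a toy field, not Enigma's or OpenMined's actual API; addition over shares is cheap and local, while multiplications (needed for full backpropagation) would require extra interaction between nodes, which is where the expense comes from.

```python
import random

P = 2**61 - 1  # prime modulus for the share arithmetic (illustrative choice)

def share(x, n):
    """Split integer x into n additive shares mod P; any n-1 shares look random."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def reconstruct(shares):
    """Recombine additive shares into the original value mod P."""
    return sum(shares) % P

# Each party holds a private local gradient; the network should learn only the sum.
local_grads = [3, 5, -2]
all_shares = [share(g % P, n=3) for g in local_grads]

# Each node locally sums the one share it received from every party...
node_sums = [sum(col) % P for col in zip(*all_shares)]

# ...and recombining the node sums reveals only the aggregate gradient.
total = reconstruct(node_sums)
if total > P // 2:  # map back from the field to a signed integer
    total -= P
print(total)  # 6
```

Note that no node ever held 3, 5, or -2 in the clear; each saw only uniformly random shares.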
Would it be beneficial for a project like OpenMined to build on Enigma, or would it be more cost-effective for them to build their own sMPC network? I couldn’t confirm from their website whether they plan to establish their own sMPC network or port their software onto an already established one.
As of now it looks like they have yet to build their MPC/FHE stack, so that could be a prime opportunity for Enigma to slot in?
I’m not sure how many nodes are needed for full sMPC, but if someone used Enigma’s infrastructure for something like OpenMined’s project, that would mean continuous use of the PoS masternodes and would incentivize others to establish more nodes. This could be really useful for social media platforms and governments.
This would indeed be a very interesting collaboration. We are friends but haven’t spoken in a while. OpenMined is interested in using GPUs for computations. I believe they are using C#, so collaborating will be challenging in the next couple of releases, where the Enigma network will offer EVM and WASM options.
Reinforcing (no pun intended) the above: there’s no reason this shouldn’t be possible. It would potentially be computationally expensive, which is why I believe offering both TEE- and MPC-powered implementations is compelling for developers (and is what we’re working on).
Re: the number of nodes needed in an MPC network: two or more works, but as with any decentralized network, you want more participants for added security and resiliency.
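The resiliency point can be made concrete with a threshold scheme such as Shamir's secret sharing (a standard MPC building block, not necessarily what Enigma uses): with a 3-of-5 split, any three nodes can reconstruct the secret, so two nodes going offline costs nothing. A minimal sketch over a small prime field:

```python
import random

P = 2087  # small prime field, fine for a demo

def shamir_share(secret, t, n):
    """Split secret into n shares; any t of them reconstruct it."""
    # Random degree-(t-1) polynomial with the secret as constant term.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over any t shares."""
    total = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

shares = shamir_share(123, t=3, n=5)
# Any 3 of the 5 nodes suffice, so two nodes can disappear:
print(reconstruct(shares[:3]))   # 123
print(reconstruct(shares[-3:]))  # 123
```

With plain two-party additive sharing there is no such slack (losing either party loses the secret), which is exactly why more participants buy you resiliency and not just security.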
Based on the responses from you and Leor, would it be economically feasible for an organization to use Enigma infrastructure for Machine Learning? I know that’s really a multifaceted question, so if it can’t be determined with the amount of information at hand, I totally understand.
With TEEs? Yes. With MPC? It depends on the scale/latency/type of ML they need, but definitely for some loads/use-cases.