
How We Ensure Your Data Privacy at MindOS

The popularity of Large Language Models (LLMs) has led to countless systems built on top of them. The sheer amount of data these systems handle, however, presents serious privacy challenges. For this reason, AI companies need to step up and take responsibility for the safety and security of their users' data.
Here at MindOS, the company behind Mebot and MindOS Studio, we have made user privacy our top priority. To that end, we have developed a privacy framework designed natively for large models. It ensures that while we provide the best possible service to users like you, we also safeguard your privacy with the utmost care.
This approach to privacy is part of what sets MindOS apart. Read on to learn how we combine our own security technology with elements of other widely used LLM systems, and how we continue to develop MindOS in a way that ensures any data you send us is handled with care and confidentiality.

Keeping Your Data Secure

In building MindOS, we have developed our own user security and privacy system. Its purpose is to offer comprehensive protection for your data, guided by the following tenets:

1. MindOS Will Not Train AI With Your Data Without Permission

Unless you explicitly tell us we may use your personal data to improve your experience, we will not do so.
We understand this may slow how quickly our system can deliver a unique, personalized experience; it may need to make more assumptions early on as it learns your habits and goals. Even so, this step is essential if we are to adhere to the principles of user privacy protection.
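An opt-in policy like this can be pictured as a simple gate in front of any training pipeline. The sketch below is purely illustrative; the class and field names (UserRecord, TrainingQueue, training_opt_in) are hypothetical, not MindOS's actual API.

```python
# Illustrative sketch: data only enters a training pipeline when the user
# has explicitly opted in. All names here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class UserRecord:
    user_id: str
    message: str
    training_opt_in: bool = False  # off by default: no consent, no training

@dataclass
class TrainingQueue:
    items: list = field(default_factory=list)

    def submit(self, record: UserRecord) -> bool:
        """Accept data for training only when the user has opted in."""
        if not record.training_opt_in:
            return False  # dropped: the data never enters the pipeline
        self.items.append(record.message)
        return True

queue = TrainingQueue()
queue.submit(UserRecord("u1", "private note"))        # rejected, opt-in is False
queue.submit(UserRecord("u2", "shared note", True))   # accepted, opt-in is True
```

The key design choice is that consent defaults to off, so forgetting to set the flag can only ever keep data out of training, never leak it in.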

2. MindOS Will Encrypt All User Data During Transfer and Storage

Encryption is one of several data-handling methods we use to keep everything you say away from prying eyes.
We split our encryption efforts into two main areas that together fulfill this obligation:


Transfer Encryption: We encrypt all communication between your MindOS app client and our cloud systems. This prevents third parties from accessing the messages you send or receive.
Cloud Encryption: We encrypt every byte we receive from you while it is stored in the cloud, using the most advanced algorithms available to us. Data is decrypted only when we need to process it to provide you with a useful experience.


In addition, when handling your personal information, we use hardware-based security features such as Trusted Execution Environments (TEEs), which help us maintain a high level of data security at all times.
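On the transfer-encryption side, the idea can be sketched with Python's standard `ssl` module: a client-side TLS configuration that refuses unencrypted, unverified, or legacy-protocol connections. This is a generic illustration of the principle, not MindOS's actual client code.

```python
# Sketch of strict transfer encryption: a TLS context that rejects
# unverified peers and outdated protocol versions.
import ssl

def make_strict_tls_context() -> ssl.SSLContext:
    ctx = ssl.create_default_context()             # loads trusted CA roots
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse legacy protocols
    ctx.check_hostname = True                      # server must match its certificate
    ctx.verify_mode = ssl.CERT_REQUIRED            # unverified peers are refused
    return ctx

ctx = make_strict_tls_context()
```

A context like this would then wrap every socket the app client opens to the cloud, so no message can travel in plaintext.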

3. MindOS Ensures You Have Complete Control Over Your Own Data

We have designed a permissions system that enables you to control access to your data with a high degree of precision. Offering flexible data management options like these allows you to set access according to your unique needs. This not only secures your data to the level that is best for you but also ensures you always know what your security options are.
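A fine-grained permissions system of this kind can be modeled as a set of grantable, revocable scopes. The scope names below ("profile:read", "chat:train") and the class itself are hypothetical examples, not MindOS's real permission vocabulary.

```python
# Minimal sketch of per-user, per-scope data permissions with
# explicit grant and revoke operations.
from dataclasses import dataclass, field

@dataclass
class DataPermissions:
    granted: set = field(default_factory=set)

    def grant(self, scope: str) -> None:
        self.granted.add(scope)

    def revoke(self, scope: str) -> None:
        self.granted.discard(scope)

    def allows(self, scope: str) -> bool:
        return scope in self.granted

perms = DataPermissions()
perms.grant("profile:read")            # allow reading your profile
assert perms.allows("profile:read")
assert not perms.allows("chat:train")  # training access stays off until granted
perms.revoke("profile:read")           # revocable at any time
```

Because every scope starts out ungranted and can be revoked later, the user, not the system, decides how much access exists at any moment.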

4. MindOS Continues to Protect Your Data As We Use It

Last of all, we have developed what we are calling the "Pre-LLM Privacy Gating Layer" (PPGL). 
This extra security step is essential for applications we develop on top of large models: it ensures user data remains secure when used in any of MindOS's other internal systems, and it prevents privacy breaches by any LLM service providers we work with now or in the future.
Before sending information to other systems, we perform comprehensive privacy processing. The PPGL filters out data that does not meet our privacy standards and replaces it with anonymized content before it leaves our boundary. We then translate the anonymized content back into the relevant information before returning the results to you.
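The round trip described above, anonymize before sending, restore after receiving, can be sketched in a few lines. This toy version only detects email addresses with a regex; the real PPGL is far more sophisticated, and the token format and function names here are illustrative assumptions.

```python
# Hedged sketch of a pre-LLM privacy gate: swap simple PII (emails) for
# placeholder tokens before text leaves our boundary, then restore the
# originals in the model's reply.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def anonymize(text: str):
    """Replace each email with a token; return sanitized text plus the mapping."""
    mapping = {}
    def repl(match):
        token = f"<PII_{len(mapping)}>"
        mapping[token] = match.group(0)
        return token
    return EMAIL.sub(repl, text), mapping

def deanonymize(text: str, mapping: dict) -> str:
    """Restore the original values in the (hypothetical) LLM response."""
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text

safe, mapping = anonymize("Contact alice@example.com for access.")
# `safe` contains no email; only `safe` would ever be sent to the provider.
restored = deanonymize(safe, mapping)
```

Note that the mapping never leaves the trusted boundary, so the external provider only ever sees the placeholder tokens.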
The following diagram summarizes how we have planned our security process:


MindOS offers robust AI data protection through several layers of security


MindOS's Commitment

Protecting user privacy in the era of large models is not an easy task. Still, we are committed to exploring the most secure and efficient privacy protection options available, all while pursuing our goal of offering a uniquely personalized AI experience.
We understand the importance of ensuring that the services you use match your unique needs, and what that means in an increasingly diverse world. So join us in this new era of AI and check out the products that uphold the privacy protection you deserve.
