The Process of Making an App, Part 4

Compressing Code Files

I discovered a serious problem: since my app is built with HTML5 (H5) web technologies and packaged with the Capacitor framework, the source code is never compiled into a binary. Anyone who inspects the package contents can read it directly. I don't want to open-source this app, so I decided to find a way to keep my original source out of view. JavaScript, being a frontend language, can't simply be compiled into a binary, so I thought of obfuscating my JS files instead: replace all variable and function names with short ones like a and b, and compress thousands of lines of code into a single line. This not only makes the code much harder for others to understand, it also shrinks the files considerably once comments and long names are stripped, which in turn cuts the time it takes to load the JS.
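
As a tiny illustration (made-up code, not taken from my app), obfuscation turns something readable like this into a single mangled line:

// Readable source
function addCheckIn(record) {
  const timestamp = Date.now();
  return saveRecord({ ...record, timestamp });
}

// Roughly what the minifier emits: one line, short names, no comments
// function a(b){return c({...b,timestamp:Date.now()})}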

However, that raised another problem: minified code can't be restored to its original form, so how do I keep a readable copy to keep working on? My solution was a script that automatically moves the current source code into a temporary folder before building, compresses the files in the main folder, runs the build, and then, once the build is finished, moves the files from the temporary folder back to replace the compressed code in the main folder. The main implementation is:

"build": "npm run copy-ionicons && npm run exclude-backend && npm run obfuscate-all && npx cap sync && npm run restore-backend && npm run restore-assets",
"build:android": "npm run copy-ionicons && npm run exclude-backend && npm run obfuscate-all && npx cap sync android && npm run restore-backend && npm run restore-assets && npx cap build android",
"build:ios": "npm run copy-ionicons && npm run exclude-backend && npm run obfuscate-all && npx cap sync ios && npm run restore-backend && npm run restore-assets && npx cap build ios"

My logic for JS compression is:

const { minify } = require('terser'); // assuming Terser; the option shape below matches its minify() API

const result = await minify(code, {
  compress: {
    drop_console: false,  // keep console.* output for debugging in production
    drop_debugger: true,  // strip debugger statements
    pure_funcs: [],
    passes: 2,            // run the compressor twice for slightly smaller output
  },
  mangle: {
    toplevel: false,      // leave top-level names alone
    properties: false,    // property renaming is risky, so skip it
    reserved: ['Capacitor', 'window', 'document', 'navigator'], // never rename these
  },
  format: {
    comments: false,      // drop all comments
  },
  sourceMap: false,       // no source map shipped, so output can't be mapped back easily
});

Besides JS, I also applied similar obfuscation and minification principles to HTML and CSS.
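
The HTML/CSS pass does the same kind of thing: strip comments and whitespace and shrink the output. The sketch below uses html-minifier-terser and clean-css as example tools, not necessarily the exact ones in my build:

const { minify: minifyHtml } = require('html-minifier-terser');
const CleanCSS = require('clean-css');

async function compressHtml(html) {
  return minifyHtml(html, {
    collapseWhitespace: true, // remove newlines and indentation
    removeComments: true,     // drop <!-- comments -->
    minifyCSS: true,          // also compress inline <style> blocks
    minifyJS: true,           // and inline <script> blocks
  });
}

function compressCss(css) {
  return new CleanCSS({ level: 2 }).minify(css).styles;
}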

Now, while I can't claim my app is impossible to crack (even compiled binaries can be reverse-engineered), I have significantly raised the bar for anyone trying to read my source code. As a bonus, the smaller files also load and parse faster.

Making the AI Assistant More Intelligent

Recently, I discovered that the AI module in my app was a bit "low-intelligence." It didn't know today's date, so it couldn't answer questions like "Did I check in today?" Its understanding of context was also poor—it would often forget what was said earlier in the conversation. Additionally, since I was using DeepSeek's regular model (which lacks reasoning capabilities), some of its suggestions weren't accurate enough.

The first problem was relatively easy to solve. I directly added the current date dynamically to the system prompt, such as "Today is 2026-01-11," so the AI would no longer be "time-blind." I also included instructions in the prompt on how to respond when users ask questions like "Did I check in today?"—it should check the database records and answer accordingly.
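
A minimal sketch of that prompt construction (the names here are illustrative):

function buildSystemPrompt() {
  const today = new Date().toISOString().slice(0, 10); // e.g. "2026-01-11" (UTC date)
  return [
    `Today is ${today}.`,
    'If the user asks something like "Did I check in today?",',
    'answer from the check-in records provided in the context, not from guesswork.',
  ].join(' ');
}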

The second problem initially puzzled me. I had already designed a context management system for the AI: each user had a session stored in the server's runtime memory, and each request would concatenate the entire session into the prompt sent to the AI. This mechanism should have worked, but in practice, it was very unstable. Sometimes the AI wouldn't remember anything, sometimes it would remember the previous sentence but forget the next one—it seemed like the model was randomly "amnesiac."

Later, I added logging to the backend to print out the actual prompts sent to the AI each time, and I discovered that the root cause wasn't in the model but on my side. After a few rounds of conversation, the content sent to the AI would only contain the user's current message—all previous dialogue had vanished. Since runtime memory itself is unreliable, I changed my approach: I stored all conversations in the database in JSON format, and retrieved the complete conversation history from the database each time before sending it to the AI. After this change, the context immediately became stable, and the AI stopped mysteriously forgetting what was said before.
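
A rough sketch of the new flow (db.query and callModel stand in for my actual database client and model call):

async function chat(userId, userMessage) {
  // Load this user's conversation, stored as a JSON array of messages
  const row = await db.query('SELECT history FROM ai_conversations WHERE user_id = ?', [userId]);
  const history = row ? JSON.parse(row.history) : [];

  // Rebuild the full prompt: system message + every past turn + the new message
  const messages = [
    { role: 'system', content: buildSystemPrompt() },
    ...history,
    { role: 'user', content: userMessage },
  ];
  const reply = await callModel(messages);

  // Persist the updated history so the next request sees the whole conversation
  history.push({ role: 'user', content: userMessage }, { role: 'assistant', content: reply });
  await db.query(
    'UPDATE ai_conversations SET history = ?, updated_at = NOW() WHERE user_id = ?',
    [JSON.stringify(history), userId]
  );
  return reply;
}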

Since conversations were now stored in the database, I naturally added another feature: allowing users to review their conversation history. I added userId and timestamp fields to the table, along with indexes, so queries by user and time would be efficient. The feature came together naturally.
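
In the same spirit, the review feature is basically one indexed query (sketched here; my actual schema details may vary):

// One-time schema tweak so lookups by user and time stay fast
await db.query(
  'CREATE INDEX idx_ai_conversations_user_time ON ai_conversations (user_id, updated_at)'
);

// Review endpoint: fetch a user's conversations, newest first
async function getHistory(userId, limit = 20) {
  return db.query(
    'SELECT history, updated_at FROM ai_conversations WHERE user_id = ? ORDER BY updated_at DESC LIMIT ?',
    [userId, limit]
  );
}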

As for the last problem, I checked DeepSeek's API documentation and found that it also offers a Reasoner model, which is significantly stronger at logical analysis and complex problems, so I tried integrating it. I quickly discovered, however, that while the Reasoner is smarter, it responds noticeably more slowly, which makes it unsuitable for every scenario. So I added a toggle in the interface that lets users choose between the regular model and the Reasoner model: use the regular one for speed, switch to Reasoner for more rigorous analysis.
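
The toggle itself is just a flag that picks the model name. Sketched against DeepSeek's OpenAI-compatible chat completions endpoint (the exact request shape in my backend may differ a little):

async function callModel(messages, useReasoner = false) {
  const res = await fetch('https://api.deepseek.com/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.DEEPSEEK_API_KEY}`,
    },
    body: JSON.stringify({
      // deepseek-reasoner thinks longer but analyzes more rigorously;
      // deepseek-chat is the fast default
      model: useReasoner ? 'deepseek-reasoner' : 'deepseek-chat',
      messages,
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}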

Now, this AI module not only knows what date it is and won't randomly forget things, but it also has an optional "thinking brain."