How I Built a Multi-OS AI Rig to Auto-Optimize My Python Code

As a developer, I’ve always been obsessed with the “feedback loop.” How fast can I write code, test it, find the bottlenecks, and fix them? Usually, this is a manual, brain-draining process. But recently, I decided to build something a bit more… “over-engineered.”

I’ve built a cross-platform optimization rig that uses a Mac workstation for development and a high-powered Windows server running Ollama to automatically rewrite and optimize my Python scripts while I watch.

If you’re into automation, LLMs, or just want to see how to make two different operating systems work together to make you a better coder, here is how I did it.


The “Why”: Why Two Machines?

My daily workstation is a Mac. It’s where I have my IDEs, my terminal, and my workflow. However, running large-scale LLMs (like qwen3-coder-next) locally can turn a laptop into a space heater very quickly.

I had a beefy Windows machine sitting in the corner with a powerful GPU. By installing Ollama on that Windows server, I can offload the “heavy thinking” to the server while my Mac handles the orchestration.


The “Brain”: My Python Optimizer Script

The script I wrote (which I’m calling the Astropy Code Optimizer v2.9) is the glue. It lives on my Mac and performs a sophisticated “dance” with the Windows server. Here is the workflow I designed:

1. The Extraction Phase

I don’t just send the whole file to the AI. That’s messy. My script uses the ast (Abstract Syntax Tree) module to surgically extract a specific function. For example, if I’m working on a complex astronomy calculation like get_altaz, the script finds exactly where it starts and ends.

It also detects dependencies. It looks for astropy, numpy, or standard library imports. This is crucial because the AI needs to know what tools are available to optimize the code effectively.
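A minimal sketch of the extraction step (the function name and structure here are my illustration, not the actual v2.9 internals): ast.get_source_segment recovers the exact text of the target function, and the same walk collects the import statements the AI will need as context.

```python
import ast

def extract_function(source: str, name: str):
    """Return (function_source, import_lines) for one function in a module."""
    tree = ast.parse(source)
    func_src, imports = None, []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef) and node.name == name:
            # get_source_segment recovers the exact text, start to end
            func_src = ast.get_source_segment(source, node)
        elif isinstance(node, (ast.Import, ast.ImportFrom)):
            imports.append(ast.get_source_segment(source, node))
    return func_src, imports
```

Working on the syntax tree rather than on raw text means decorators, nested defs, and odd indentation don't confuse the extraction.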

2. The Sandbox Environment

Before the AI touches anything, I need a “truth.” My script automatically generates a temp_target_function.py. This isn’t just a copy; it’s a full test harness. It generates dummy data, sets up real-world observatory locations (like Greenwich or Mauna Kea), and prepares the function to be benchmarked.
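The harness generator boils down to templating: paste the extracted function into a standalone script with dummy inputs and a timer. This is my guess at what v2.9 writes out (the template, the Greenwich site lookup, and the call signature are all assumptions):

```python
import pathlib
import textwrap

# Template for the generated harness file; the extracted function source
# and its imports are pasted in, and a timer wraps one representative call.
HARNESS_TEMPLATE = textwrap.dedent("""\
    # temp_target_function.py -- auto-generated benchmark harness
    import time
    from astropy.time import Time
    from astropy.coordinates import EarthLocation

    {imports}

    {function_source}

    if __name__ == "__main__":
        obs_time = Time("2024-01-15T12:00:00")
        site = EarthLocation.of_site("greenwich")  # real-world observatory
        start = time.perf_counter()
        get_altaz(obs_time, "mars", site)
        print(f"elapsed: {{time.perf_counter() - start:.6f}}s")
    """)

def write_harness(function_source: str, imports: str,
                  path: str = "temp_target_function.py") -> str:
    """Write the benchmark harness to disk and return its contents."""
    script = HARNESS_TEMPLATE.format(imports=imports,
                                     function_source=function_source)
    pathlib.Path(path).write_text(script)
    return script
```

Because every candidate runs inside the same harness, timings are comparable across rounds.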

3. The AI Loop (The Windows Handshake)

This is where it gets cool. My Mac sends a request over the local network to 192.168.0.162 (my Windows IP).

I use the following logic to talk to the server:

  • Prompting: I tell the AI: “You are an expert Python optimizer. Here is a function and its dependencies. Make it faster, more Pythonic, and keep the logic identical.”

  • Validation: When the Windows server sends back the new code, the Mac script immediately checks it for syntax errors using ast.parse. If the AI hallucinated or forgot an indent, my script catches it.
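The two bullets above fit in a few lines of code. The endpoint address and prompt wording are from this article; the function names are mine, and I've assumed Ollama's standard non-streaming /api/generate response shape:

```python
import ast
import requests

OLLAMA_URL = "http://192.168.0.162:11434/api/generate"

def is_valid_python(code: str) -> bool:
    """Reject replies where the AI hallucinated or forgot an indent."""
    try:
        ast.parse(code)
        return True
    except SyntaxError:
        return False

def request_optimization(function_source: str, deps: str,
                         model: str = "qwen3-coder-next"):
    prompt = (
        "You are an expert Python optimizer. Here is a function and its "
        "dependencies. Make it faster, more Pythonic, and keep the logic "
        f"identical.\n\nDependencies:\n{deps}\n\nFunction:\n{function_source}"
    )
    resp = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    candidate = resp.json()["response"]  # Ollama's non-streaming reply field
    return candidate if is_valid_python(candidate) else None
```

Returning None on a syntax failure lets the round loop simply skip a bad reply and try again.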

4. Competitive Testing with Pytest

Optimization is useless if the code is fast but wrong. My rig automatically generates test_astronomy.py and runs it using pytest.

  • Does the new function still return the right coordinates?

  • Is the altitude between -90 and 90?

  • Is it actually faster than the version I wrote?

Only if the tests pass and the execution time is lower does the script consider the round a success.
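The generated test_astronomy.py looks roughly like this (a hand-written approximation; the real generated file also wires in the observatory fixtures):

```python
# test_astronomy.py -- approximation of the auto-generated checks
import math

def is_valid_coordinate(alt: float, az: float) -> bool:
    """Invariants every candidate must satisfy before it can win a round."""
    return (-90.0 <= alt <= 90.0          # altitude is physically bounded
            and 0.0 <= az < 360.0         # azimuth wraps at 360
            and not (math.isnan(alt) or math.isnan(az)))

def test_greenwich_case():
    # Values taken from the verification run shown below
    assert is_valid_coordinate(13.41, 194.59)

def test_rejects_impossible_altitude():
    assert not is_valid_coordinate(123.0, 10.0)
```

Range checks like these are cheap insurance: an "optimization" that silently returns garbage coordinates fails the round instead of shipping.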

The Stack

If you want to replicate this rig, here is what I’m using:

Component       Technology
-------------   ----------------------------------
Dev Machine     MacBook Pro (Orchestrator)
AI Server       Windows 11 with NVIDIA GPU
LLM Engine      Ollama (running qwen3-coder-next)
Communication   Python requests (REST API)
Testing         pytest & the ast module

What I Learned

The most surprising part of this project was seeing how much the “Context” matters. By detecting that I was using the astropy library, my script could tell the AI: “Hey, use vectorized NumPy operations instead of loops.”

I’ve found that the AI on the Windows server is remarkably good at identifying “lazy” Python—places where I used a list comprehension where a generator would be better, or where I’m recalculating a constant inside a loop.
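To illustrate the kind of rewrite it suggests (a made-up example, not code from my actual project):

```python
import math

# Before: rebuilds the conversion factor on every iteration and
# materializes a full list just to sum it
def total_degrees_slow(radians):
    return sum([r * (180.0 / math.pi) for r in radians])

# After: the constant is hoisted out, and sum() consumes a generator,
# so no intermediate list is ever built
RAD_TO_DEG = 180.0 / math.pi

def total_degrees_fast(radians):
    return sum(r * RAD_TO_DEG for r in radians)
```

Individually these are micro-optimizations, but inside a hot loop over thousands of coordinates they add up.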

Conclusion

I no longer spend hours refactoring math-heavy functions. I just point my script at a target, set OPTIMIZATION_ROUNDS = 10, and let my Windows server “think” while I grab a coffee. When I come back, I usually have a file named optimized_astronomy.py that runs 20-30% faster than my first draft.
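The whole round loop reduces to something like this (a simplified sketch; propose, passes_tests, and benchmark stand in for the Ollama call, the pytest run, and the timing harness described above):

```python
def run_rounds(original_src: str, baseline: float, rounds: int,
               propose, passes_tests, benchmark):
    """Keep the fastest candidate that still passes every test."""
    best_src, best_time = original_src, baseline
    for _ in range(rounds):
        candidate = propose(best_src)            # ask the Ollama server
        if candidate is None or not passes_tests(candidate):
            continue                             # wrong or broken: discard
        elapsed = benchmark(candidate)
        if elapsed < best_time:                  # only real speed-ups count
            best_src, best_time = candidate, elapsed
    return best_src, best_time
```

Feeding the current best version back into the next round is what lets later rounds shave off the remaining fractions of a percent.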

================================================

ASTROPY CODE OPTIMIZER
Optimizer version 2.9 FINAL
================================================
Target file: astronomy_program.py
Target function: get_altaz
Optimization rounds: 10
Ollama server: 192.168.0.162:11434

================================================

📋 Checking Requirements…

================================================
✅ Pytest is installed
✅ NumPy is installed
✅ Astropy is installed
✅ Requests is installed
✅ Connected to Ollama at 192.168.0.162

📂 EXTRACTING FUNCTION

================================================
📦 Found ‘get_altaz’ (lines 635-642)
📚 Astropy imports detected: 3
📚 Standard library imports: 8
🌌 Astropy objects detected: 5

🔧 CREATING TEST ENVIRONMENT

================================================

🔍 RUNNING VERIFICATION

================================================

🔍 VERIFYING: get_altaz

================================================

📊 Test Cases:
----------------------------------------

Test Case 1:
time = 2024-01-15T12:00:00
planet_name = mars
location = None
Altitude: 13.37°
Azimuth: 194.71°
✅ Valid coordinate

Test Case 2:
time = 2024-01-15T12:00:00
planet_name = mars
location = Greenwich
Altitude: 13.41°
Azimuth: 194.59°
✅ Valid coordinate

Test Case 3:
time = 2024-01-15T12:00:00
planet_name = mars
location = London
Altitude: 13.41°
Azimuth: 194.47°
✅ Valid coordinate

Test Case 4:
time = 2024-01-15T12:00:00
planet_name = mars
location = MaunaKea
Altitude: -52.71°
Azimuth: 103.70°
✅ Valid coordinate

Test Case 5:
time = 2024-01-15T12:00:00
planet_name = mars
location = Paranal
Altitude: 40.32°
Azimuth: 101.24°
✅ Valid coordinate

================================================
✅ All verification tests passed!

================================================

TESTING WITH REAL OBSERVATORIES

================================================

📅 Time: 2024-01-15T12:00:00.000
----------------------------------------

📍 Greenwich:
mars : Alt= 13.41°, Az=194.59°
jupiter : Alt= 3.86°, Az= 74.77°

📍 London:
mars : Alt= 13.41°, Az=194.47°
jupiter : Alt= 3.79°, Az= 74.68°

📍 MaunaKea:
mars : Alt=-52.71°, Az=103.70°
jupiter : Alt= -9.40°, Az=287.00°

📍 Paranal:
mars : Alt= 40.32°, Az=101.24°
jupiter : Alt=-74.61°, Az=140.38°

📅 Time: 2024-01-15T20:00:00.000
----------------------------------------

📍 Greenwich:
mars : Alt=-46.34°, Az=292.05°
jupiter : Alt= 47.55°, Az=210.75°

📍 London:
mars : Alt=-46.26°, Az=291.95°
jupiter : Alt= 47.57°, Az=210.56°

📍 MaunaKea:
mars : Alt= 42.21°, Az=155.18°
jupiter : Alt=-35.00°, Az= 57.81°

📍 Paranal:
mars : Alt= 31.16°, Az=255.94°
jupiter : Alt= 28.94°, Az= 58.32°

📅 Time: 2024-06-21T12:00:00.000
----------------------------------------

📍 Greenwich:
mars : Alt= 33.11°, Az=248.26°
jupiter : Alt= 53.18°, Az=222.81°

📍 London:
mars : Alt= 33.17°, Az=248.12°
jupiter : Alt= 53.21°, Az=222.61°

📍 MaunaKea:
mars : Alt= -6.64°, Az= 72.99°
jupiter : Alt=-26.26°, Az= 53.44°

📍 Paranal:
mars : Alt= 48.29°, Az= 25.67°
jupiter : Alt= 27.31°, Az= 47.69°

🔍 ANALYZING FUNCTION

================================================

📊 Function Analysis:
Name: get_altaz
Parameters: 3
- time
- planet_name
- location = None

🧪 GENERATING TESTS

================================================
✅ Generated tests in test_astronomy.py

✅ TESTING ORIGINAL FUNCTION

================================================

⏱️ BASELINE PERFORMANCE: 0.192583 seconds

🌙 STARTING 10 OPTIMIZATION ROUNDS

================================================

🔄 ROUND 1/10
----------------------------------------
🤔 Thinking… Done!
🧪 Running tests… Done!
✅ SUCCESS: 0.013107s (improved by 93.2%)

🔄 ROUND 2/10
----------------------------------------
🤔 Thinking… Done!
🧪 Running tests… Done!
✅ SUCCESS: 0.012943s (improved by 1.3%)

🔄 ROUND 3/10
----------------------------------------
🤔 Thinking… Done!
🧪 Running tests… Done!
✅ SUCCESS: 0.012555s (improved by 3.0%)

🔄 ROUND 4/10
----------------------------------------
🤔 Thinking… Done!
🧪 Running tests… Done!
✅ SUCCESS: 0.012514s (improved by 0.3%)

🔄 ROUND 5/10
----------------------------------------
🤔 Thinking… Done!
🧪 Running tests… Done!
✅ SUCCESS: 0.012465s (improved by 0.4%)

🔄 ROUND 6/10
----------------------------------------
🤔 Thinking…

If you’re an IT pro with an extra machine lying around, stop letting that GPU sit idle. Build a bridge between your machines and let the AI do the tedious work for you.
