I did not use Claude Code, but Codex. I am fetching space weather from NOAA SWPC; trajectory, distance, speed, and comms delay are computed from NASA's published Artemis II mission plan parameters, not pulled live from NASA telemetry. The current discrepancy is likely caused by the orbital phase and reference model being used: the tracker shows about 192,000 km, while NASA's AROW shows about 80,000 miles, which is roughly 129,000 km, so it is off by around 60,000 km.
The difference can happen because the spacecraft is in an elliptical orbit, and different trackers may use different assumptions, interpolation methods, or reference points for the trajectory.
Hey, I've responded before, but I need to update the visualization: I am fetching space weather from NOAA SWPC.
Trajectory, distance, speed, and comms delay are computed from NASA's published Artemis II mission plan parameters, not pulled live from NASA telemetry.
Also, the current discrepancy is likely caused by the orbital phase and reference model being used.
Right now the tracker shows about 192,000 km, while NASA's AROW shows about 80,000 miles, which is roughly 129,000 km. So yes, that is off by around 60,000 km. The difference can happen because the spacecraft is in an elliptical orbit, and different trackers may use different assumptions, interpolation methods, or reference points for the trajectory.
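For reference, the unit conversion and the one-way light-time delays implied by each distance work out like this (a quick sketch; the 80,000-mile figure is the AROW reading quoted above):

```python
MILES_TO_KM = 1.609344          # international mile in kilometers
LIGHT_SPEED_KM_S = 299_792.458  # speed of light in km/s

tracker_km = 192_000            # this tracker's current distance
arow_miles = 80_000             # NASA AROW's reading

arow_km = arow_miles * MILES_TO_KM   # about 128,748 km
diff_km = tracker_km - arow_km       # about 63,000 km apart

# One-way comms delay implied by each distance
delay_tracker = tracker_km / LIGHT_SPEED_KM_S  # about 0.64 s
delay_arow = arow_km / LIGHT_SPEED_KM_S        # about 0.43 s
```

So even a ~60,000 km disagreement only shifts the displayed comms delay by about a fifth of a second.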
In theory, it seemed perfect for flexible manufacturing: same machine, same material, endless outputs. But in practice, it hit limits in speed, material properties, and post-processing. You still can’t print a high-tolerance metal part at scale and cost-effectively replace traditional machining. It’s amazing for prototyping or niche parts.
Hey, that is a very good question; I have answered it before. I hope you don't mind if I simply copy-paste my previous answer:
Technically you can use the original Codex CLI with a local LLM - if your inference provider implements the OpenAI Chat Completions API, with function calling, etc. included.
But based on what I had in mind - the idea that small models can be really useful if optimized for very specific use cases - I figured the current architecture of Codex CLI wasn't the best fit for that. So instead of forking it, I started from scratch.
Here's the rough thinking behind it:
1. You still have to manually set up and run your own inference server (e.g., with ollama, lmstudio, vllm, etc.).
2. You need to ensure that the model you choose works well with Codex's pre-defined prompt setup and configuration.
3. Prompting patterns for small open-source models (like phi-4-mini) often need to be very different - they don't generalize as well.
4. The function calling format (or structured output) might not even be supported by your local inference provider.
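To make point 4 concrete, here is roughly what a Chat Completions-style request with function calling looks like; `run_shell` and the field values are illustrative, not Codex CLI's actual tool schema:

```python
# Sketch of an OpenAI Chat Completions-style request body with a tool
# definition. A local provider has to accept (and act on) the "tools"
# field for an agent CLI to work; some local servers reject it outright.
payload = {
    "model": "phi4",  # whatever name your local server registers
    "messages": [{"role": "user", "content": "List the files in this repo"}],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "run_shell",  # illustrative tool, not Codex's schema
                "description": "Run a shell command and return its stdout",
                "parameters": {
                    "type": "object",
                    "properties": {"command": {"type": "string"}},
                    "required": ["command"],
                },
            },
        }
    ],
}
```

If the provider ignores or rejects the `tools` field, the agent loop has nothing to execute, which is exactly the failure mode behind errors like "does not support tools".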
Codex CLI's implementation and prompts seem tailored for a specific class of hosted, large-scale models (e.g. GPT, Gemini, Grok). But if you want to get good results with small, local models, everything - prompting, reasoning chains, output structure - often needs to be different.
So I built this with a few assumptions in mind:
- Write the tool specifically to run _locally_ out of the box, no inference API server required.
- Use the model directly (currently phi-4-mini via llama-cpp-python).
- Optimize the prompt and execution logic _per model_ to get the best performance.
Instead of forcing small models into a system meant for large, general-purpose APIs, I wanted to explore a local-first, model-specific alternative that's easy to install and extend — and free to run.
Thanks for bringing that up - it's exactly why I approached it this way from the start.
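The per-model optimization above could be sketched like this (the template strings are assumptions for illustration; check each model's card for its real chat format):

```python
# Per-model prompt templates: small models often need their exact chat
# format, while a generic fallback covers unknown models.
TEMPLATES = {
    "phi-4-mini": "<|user|>{task}<|end|><|assistant|>",  # assumed format
    "default": "User: {task}\nAssistant:",
}

def build_prompt(model: str, task: str) -> str:
    """Pick the template tuned for this model, else the generic fallback."""
    template = TEMPLATES.get(model, TEMPLATES["default"])
    return template.format(task=task)
```

The same registry idea extends to per-model execution logic (stop tokens, retry strategies, output parsers) rather than one prompt setup for everything.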
That’s one of the reasons I went with phi-4-mini - surprisingly high quality for its size and speed. It handled multi-step reasoning, math, structured data extraction, and code pretty well, all on modest hardware. Quantized Phi-1.5 / Phi-2 also run on a Raspberry Pi, as others have demonstrated.
OpenAI rejected the request. Error details: Status: 400, Code: unknown, Type: api_error, Message: 400
registry.ollama.ai/library/phi4:latest does not support tools. Please verify your settings and try again.