Merge 92a7f0e6b8 into 7dfa55926b
This commit is contained in:
commit 84fded1c48
@@ -260,6 +260,7 @@ When using cloud-based AI services, the data you input is often collected and st
- [llama.cpp](https://github.com/ggml-org/llama.cpp) - Inference of Facebook's LLaMA model in pure C/C++ so it can run locally on a CPU.
- [LocalAI](https://github.com/go-skynet/LocalAI) - Self-hosted, community-driven, local OpenAI-compatible API written in Go. Can be used as a drop-in replacement for OpenAI, running on CPU with consumer-grade hardware.
- [ollama](https://github.com/jmorganca/ollama) - Get up and running with Llama 2 and other large language models locally.
- [OpenObscure](https://github.com/openobscure/openobscure) - On-device privacy firewall for AI agents. Encrypts PII with FF1 format-preserving encryption before it reaches the LLM, with image redaction, voice PII detection, and cognitive firewall for response manipulation.
- [PasteGuard](https://github.com/sgasser/pasteguard) - Privacy proxy for LLM APIs that masks PII and secrets before they reach cloud providers. Self-hosted, OpenAI-compatible, and restores original data in responses.
- [Shimmy](https://github.com/Michael-A-Kuykendall/shimmy) - Privacy-focused AI inference server with OpenAI API compatibility, zero cloud dependencies, and local model processing.
- [Tinfoil](https://tinfoil.sh/) - Verifiably private AI Chat and OpenAI-compatible inference in the cloud. Uses NVIDIA confidential computing and open source code pinned to a transparency log for end-to-end verifiability.
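Several of the tools above (PasteGuard, OpenObscure) work on the same principle: mask PII before the text reaches a cloud LLM, then restore the original values in the response. Below is a minimal sketch of that mask/restore round trip, assuming a simple regex detector and a `<PII_n>` placeholder scheme — both are illustrative assumptions, not the actual implementation of any listed tool.

```python
import re

# Naive email detector; real proxies detect many PII classes (names,
# phone numbers, secrets) with far more robust methods.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def mask(text):
    """Replace each email with a placeholder; return masked text and the mapping."""
    mapping = {}
    def repl(match):
        token = f"<PII_{len(mapping)}>"
        mapping[token] = match.group(0)
        return token
    return EMAIL_RE.sub(repl, text), mapping

def restore(text, mapping):
    """Substitute the original values back into the model's response."""
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text

masked, mapping = mask("Contact alice@example.com about the invoice.")
# masked == "Contact <PII_0> about the invoice." -- only the placeholder
# would be sent to the cloud provider.
response = f"Sure, I will email {list(mapping)[0]} today."
print(restore(response, mapping))
# prints "Sure, I will email alice@example.com today."
```

Because the placeholder survives the round trip through the model verbatim, the proxy can restore the real value locally and the cloud provider never sees it.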