tabs
- OneTab
- MySQL vs. PostgreSQL user creation comparison - Doubao
- Fixing a duplicate ID issue
- Pitfall guide for deploying Elasticsearch 8.10.3 with Docker
- Docker best practices: deploying a single-node Elasticsearch with Docker - Tencent Cloud Developer Community - Tencent Cloud
- Runoob - Learning not just technology, but dreams!
- GitHub
- OneTab
- vllm - Dify
- Python model loading error - Doubao
- Containerized LLM inference on the Intel B580 GPU: DeepSeek R1 Distill Qwen 7B as an example (Part 1) - Zhihu
- Plasma 6 compatibility · Issue #73 · Zren/material-decoration
- V2EX › Sign in
- Adding a Linux scheduled task - V2EX
- Quitting upstream Linux kernel development - Google Search
- An older front-end dev switching to Java: I've finished Spring Boot, only know how to use microservices without understanding the internals, and built one very simple small project. Can I go straight to interviews? What else should I learn, what do Java interviews ask nowadays, and what should I prepare? Thanks, everyone - V2EX
- Hacker News
- Understanding Reasoning LLMs - by Sebastian Raschka, PhD
- Qwen/Qwen2.5-7B-Instruct-GGUF · why do you have split gguf's? ' invalid magic characters'
- Troubleshooting — vLLM
- Integration with HuggingFace — vLLM
- Releases · vllm-project/vllm
- Qwen2.5 - a Qwen Collection
- Qwen (Qwen)
- Qwen2.5 - a Qwen Collection
- Qwen/Qwen2.5-0.5B-Instruct-GGUF at main
- Qwen2.5 - a Qwen Collection
- q2_k vllm huggingface docker - Google Search
- Using Docker — vLLM
- Qwen/Qwen2.5-7B-Instruct-GGUF · why do you have split gguf's? ' invalid magic characters'
- Allow importing multi-file GGUF models · Issue #5245 · ollama/ollama
- Qwen/Qwen2.5-7B-Instruct · Hugging Face
- Qwen (Qwen)
- Using Docker — vLLM
- Integration with HuggingFace — vLLM
- Python Multiprocessing — vLLM
- Alibaba Cloud console home
- Cloud server management console
- pyenv/pyenv: Simple Python version management
- raise ValueError(f No supported config format found in {model} ) ValueError: No supported config format found in Qwen/Qwen2.5-7B-Instruct-GGUF - Google Search
- vllm/vllm/transformers_utils/config.py at main · vllm-project/vllm
- devops/docs: Ops department documentation - Gogs
- Ollama
- qwen2.5
- V2EX
- User Agent Switcher and Manager :: WebExtension.ORG
- guiodic/material-decoration: Material-ish window decoration theme for KWin, with LIM, based on zzag's original design.
- New Tab
- [Linux] vnStat network traffic monitoring tool tutorial - 靖技場
- vmstat traffic analysis tool - CSDN Blog
- CSDN - Professional Developer Community
- Linux vmstat command explained in practice - ggjucheng - cnblogs
- Model config format error
- using vllm GGUF models - Google Search
- GGUF | vLLM Chinese docs site
- Nice-Tab | Options
- Adding debug print statements - Doubao
- Ollama model execution error
- vLLM (Part 1): the PagedAttention algorithm - Zhihu
- The crown jewel of vLLM: a plain-language look at the PagedAttention CUDA implementation - Zhihu
- New Tab
- Qwen/Qwen2.5-0.5B-Instruct-AWQ · Hugging Face
- Qwen/Qwen2.5-0.5B-Instruct-GGUF · Hugging Face
- Quickstart - Qwen
- Ollama - Qwen
- oobabooga/text-generation-webui: A Gradio web UI for Large Language Models with support for multiple inference backends.
- Qwen/Qwen2.5-7B-Instruct-AWQ · Hugging Face
- Qwen/Qwen2.5-0.5B-Instruct-GGUF at main
- Allow importing multi-file GGUF models · Issue #5245 · ollama/ollama
- ollama/ollama: Get up and running with Llama 3.3, DeepSeek-R1, Phi-4, Gemma 2, and other large language models.
- Issue search results
- Using both CPU + GPU for Parallel Models · Issue #5659 · ollama/ollama
- Support loading concurrent model(s) on CPU when GPU is full · Issue #6950 · ollama/ollama
- Model Pooling and Instance Management · Issue #3902 · ollama/ollama
- Nvidia fallback memory · Issue #7584 · ollama/ollama
- Ollama ps to report actual number of layers instead of percentage. · Issue #7602 · ollama/ollama
- Memory Allocation on VRAM when model size is bigger than the size of VRAM · Issue #6864 · ollama/ollama
- cuda error out of memory · Issue #6382 · ollama/ollama
- ollama/docs/import.md at main · ollama/ollama
- Tongyi Qianwen 2.5-7B-Instruct-AWQ quantized · Model Library
- Alibaba Cloud console home
- Cloud server management console
- New Tab
- Error: no Modelfile or safetensors files found - Google Search
- Error: no safetensors or torch files found · Issue #4538 · ollama/ollama
- `model.safetensors` missing in model file not found error in default case · Issue #30601 · huggingface/transformers
- devops/docs: Ops department documentation - Gogs
- dev/xuqiu - Gogs
- ollama vllm speed - Google Search
- [Performance]: How to Improve Performance Under Concurrency · Issue #9722 · vllm-project/vllm
- Ollama vs VLLM: Which Tool Handles AI Models Better? | by Naman Tripathi | Medium
- Ollama or vllm for serving : r/LocalLLaMA
- reddit.com/r/LocalLLaMA/comments/1cew9fb/is_ollama_supposed_to_run_on_your_gpu/
- ollama/docs/faq.md at main · ollama/ollama
- ollama/docs/gpu.md at main · ollama/ollama
- ollama/docs/development.md at main · ollama/ollama
- Nvidia fallback memory · Issue #7584 · ollama/ollama
- How to run Ollama only on a dedicated GPU? (Instead of all GPUs) · Issue #1813 · ollama/ollama
- New Tab
- FileNotFoundError: [Errno 2] No such file or directory: '/root/.cache/modelscope/hub/._____temp/Qwen/Qwen2.5-7B-Instruct-AWQ/tokenizer.json' - Google Search
- Error downloading the model for training: folder not found · Issue #5065 · hiyouga/LLaMA-Factory
- I face "FileNotFoundError: [Errno 2] No such file or directory" error - 🤗Hub - Hugging Face Forums
- Model-path-related errors when downloading the base model via ModelScope · Issue #4778 · hiyouga/LLaMA-Factory
- FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\admin/.cache\\huggingface\\hub\\models--bert-base-uncased\\refs\\main' - CSDN Library
- vllm docker - Google Search
- Image Layer Details - vllm/vllm-openai:v0.7.2 | Docker Hub
- vllm/Dockerfile at main · vllm-project/vllm
- Deploying with Docker — vLLM
- vllm/vllm-openai Tags | Docker Hub
- Nice-Tab Admin Page
- https://bbs.archlinuxcn.org/viewtopic.php?id=4611 - Trilium Web Clipper
- Unread
- V2EX
- Flask cheat sheet & Quick Reference
- New Tab
- Nice-Tab Admin Page
- Tencent Yuanbao - Work with ease, live a little more
- Doubao
- MySQL timeout settings issue
- v5.35.0
- Hacker News
- V2EX
- Seeing some GitHub project authors go to great lengths to coax users into giving stars - V2EX
- Why can Xiaomi's products be so stable? - V2EX
- telegram upload tool nas - Google Search
- A handy tool for batch uploading/downloading Telegram files · Dejavu's Blog
- Nekmo/telegram-upload: Upload and download files from Telegram up to 4 GiB using your account