Microsoft Upgrades Its AI Suite Overnight! Support Announced for the Llama 2 Large Model; GPT-4-Powered Office Rents for $30/Month


Moonshot AI's newly released Kimi K2.5 model brings a major upgrade to its Kimi Agent office capabilities, entering what the company calls the "Mastering Office" era. The AI can now deeply process Excel, Word, PDF, and PPT files and directly generate professional-grade results. The core upgrade is a dramatic leap in efficiency: mechanical tasks that used to take hours or even days, such as extracting key information from a several-hundred-page PDF, now finish in just a few minutes.
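Moonshot has not disclosed how Kimi's document pipeline actually works. Purely as an illustration of the kind of long-PDF extraction task described above, here is a minimal map-reduce sketch in Python: the `pypdf` usage is real, but the `ask_llm` helper, the chunk size, and the prompts are placeholders standing in for whatever chat-model API you have access to.

```python
# Minimal sketch of long-PDF key-point extraction (not Kimi's actual pipeline).
# Assumes: pypdf is installed; ask_llm() is a placeholder for any chat-completion API.
from pypdf import PdfReader

CHUNK_CHARS = 8000  # illustrative chunk size, kept well under a typical context window


def ask_llm(prompt: str) -> str:
    """Placeholder: call your chat model of choice here."""
    raise NotImplementedError


def extract_key_points(pdf_path: str) -> str:
    reader = PdfReader(pdf_path)
    text = "\n".join(page.extract_text() or "" for page in reader.pages)
    chunks = [text[i:i + CHUNK_CHARS] for i in range(0, len(text), CHUNK_CHARS)]
    # Map step: summarize each chunk independently.
    partials = [ask_llm(f"List the key facts in this excerpt:\n\n{c}") for c in chunks]
    # Reduce step: merge the partial summaries into one result.
    return ask_llm("Merge these notes into a single summary:\n\n" + "\n\n".join(partials))
```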
An AI fine-tuned on just two books can mimic an author's style, outperforming human imitators in evaluations by 159 participants, including experts.
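The study's exact training recipe is not given here. As a rough, hedged sketch of what small-corpus style fine-tuning commonly looks like, the snippet below applies LoRA with Hugging Face `transformers` and `peft`; the base model ("gpt2"), the `books.txt` file, and all hyperparameters are illustrative assumptions, not the paper's setup.

```python
# Sketch of small-corpus style fine-tuning with LoRA (not the study's actual recipe).
# Assumes: transformers, peft, datasets installed; "gpt2" stands in for the real base model.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tok = AutoTokenizer.from_pretrained("gpt2")
tok.pad_token = tok.eos_token
model = get_peft_model(
    AutoModelForCausalLM.from_pretrained("gpt2"),
    LoraConfig(r=8, target_modules=["c_attn"], task_type="CAUSAL_LM"),
)

# books.txt: the two books' text, one passage per line (an assumption for this sketch).
ds = load_dataset("text", data_files="books.txt")["train"]
ds = ds.filter(lambda b: b["text"].strip())  # drop blank lines
ds = ds.map(lambda b: tok(b["text"], truncation=True, max_length=512),
            batched=True, remove_columns=["text"])

Trainer(model=model,
        args=TrainingArguments("style-lora", num_train_epochs=3,
                               per_device_train_batch_size=2),
        train_dataset=ds,
        data_collator=DataCollatorForLanguageModeling(tok, mlm=False)).train()
```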
OpenAI recently sparked controversy over undisclosed model switching. Paid users reported that their GPT-4 and GPT-5 conversations were automatically rerouted, without prior notice, to lower-compute safety-filtered models reportedly named gpt-5-chat-safety and gpt-5-a-t-mini, with response quality dropping sharply when sensitive content was involved. Users argue that the practice infringes on their right to choose and to be informed, highlighting insufficient platform transparency.
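OpenAI has not published how this routing works; the model names above come from user reports. Purely as a conceptual sketch of what server-side safety routing could look like, the Python below swaps the requested model for a restricted one when a prompt is flagged as sensitive. The keyword check, threshold, and model IDs are all illustrative assumptions.

```python
# Conceptual sketch of server-side safety routing (OpenAI's actual logic is not public).
# "gpt-5-chat-safety" is a name from user reports, not a documented API endpoint.


def sensitivity_score(prompt: str) -> float:
    """Placeholder: a real system would use a trained classifier here."""
    flagged = ("self-harm", "violence", "medical")
    return float(any(word in prompt.lower() for word in flagged))


def route(prompt: str, requested_model: str = "gpt-5") -> str:
    # Silently substitute a restricted model when the prompt looks sensitive --
    # the behavior users objected to, since the swap happens without notice.
    if sensitivity_score(prompt) >= 0.5:
        return "gpt-5-chat-safety"
    return requested_model
```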
["IBM's research shows that it is very easy to deceive large language models into generating malicious code or providing false security advice.","Hackers only need some basic knowledge of English and an understanding of the model's training data to easily deceive AI chatbots.","Different AI models have different sensitivities to deception, and GPT-3.5 and GPT-4 are relatively easy to deceive."]
A recent joint study by Google, Carnegie Mellon University, and MultiOn explores the use of synthetic data in training large language models. According to Epoch AI, a research institution focused on AI development, the stock of high-quality text training data currently available totals around 300 trillion tokens. With the rapid advancement of large models like ChatGPT, however, demand for training data is growing exponentially and is projected to exhaust existing resources by 2026, making synthetic data increasingly crucial.
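The study's own generation method is not detailed here. As a minimal self-instruct-style sketch of how synthetic training data is often produced, the loop below asks a model to invent new instructions from seed tasks and then answer them, yielding instruction/output pairs. The `generate` helper and the seed tasks are placeholder assumptions, not the paper's pipeline.

```python
# Minimal self-instruct-style synthetic data loop (a sketch, not the study's method).
# Assumes: generate() wraps any instruction-tuned model; seed tasks are illustrative.
import json
import random

SEED_TASKS = [
    "Summarize the following paragraph in one sentence.",
    "Translate the sentence into French.",
    "Write a Python function that reverses a string.",
]


def generate(prompt: str) -> str:
    """Placeholder: call an instruction-tuned LLM here."""
    raise NotImplementedError


def synthesize(n: int) -> list[dict]:
    examples = []
    for _ in range(n):
        seeds = random.sample(SEED_TASKS, k=2)
        # Ask the model to invent a new instruction in the style of the seeds,
        # then answer it; the pair becomes one synthetic training example.
        instruction = generate("Propose a new task similar to:\n- " + "\n- ".join(seeds))
        examples.append({"instruction": instruction, "output": generate(instruction)})
    return examples


# Each dict can be written as one JSONL line for later fine-tuning, e.g.:
# print("\n".join(json.dumps(e) for e in synthesize(100)))
```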