t_wの輪郭


あれ

2025/6/2 22:59:00

Lately I can't read a web page with any focus unless I make the font huge.
When too many elements are visible at once, I can't concentrate.

It may also be that my brain has adapted to the smartphone.


English in particular I genuinely can't read without zooming in.
My brain just slides off the text.
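
If I were to script this instead of fiddling with the zoom every time, something like the rough sketch below would do: bump the base font size and hide the noisy elements. This is only a sketch assuming a userscript/bookmarklet setting; the "main, article" selector and the 1.6x scale are my own guesses, not anything a given page guarantees.

// Rough "reading mode" sketch for a userscript; selectors and scale are assumptions.
function enlargeForReading(scale: number = 1.6): void {
  // Bump the root font size so the whole page scales up together.
  document.documentElement.style.fontSize = `${16 * scale}px`;

  // Hide obvious non-text chrome so fewer elements compete for attention.
  document.querySelectorAll<HTMLElement>("nav, aside, footer").forEach((el) => {
    el.style.display = "none";
  });

  // Narrow the main column and loosen the leading; long lines are what my eyes slide off.
  const main = document.querySelector<HTMLElement>("main, article");
  if (main) {
    main.style.maxWidth = "40rem";
    main.style.margin = "0 auto";
    main.style.lineHeight = "1.9";
  }
}

enlargeForReading();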
