mediocreatbest @lemmy.sdf.org
[StackExchange] If a PCI device is completely non-responsive, you can remove it from the PCI bus entirely and then rescan, hopefully re-initializing the device so it works again.
unix.stackexchange.com Reset a PCI Device in Linux

Is there a generic way to reset a PCI device in Linux from the command line? That is, cause the PCI bus to issue a reset command.


echo 1 | sudo tee /sys/bus/pci/devices/<pci-id-of-device>/remove and then echo 1 | sudo tee /sys/bus/pci/rescan
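For reference, the same remove/rescan cycle from Python (run as root; the device address 0000:01:00.0 is a placeholder, find yours with lspci -D):

    from pathlib import Path

    dev = "0000:01:00.0"  # placeholder PCI address

    # Removing the device detaches its driver and deletes its sysfs entry.
    Path(f"/sys/bus/pci/devices/{dev}/remove").write_text("1")

    # Rescanning the bus re-enumerates and re-initializes devices.
    Path("/sys/bus/pci/rescan").write_text("1")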

mediocreatbest @lemmy.sdf.org
[GitHub] bduggan/raku-jupyter-kernel allows you to run Raku (né Perl 6) within a Jupyter Notebook environment. In terms of onboarding, this seems to be one of the easiest ways to start using Raku.
github.com GitHub - bduggan/raku-jupyter-kernel: Raku Kernel for Jupyter notebooks

Raku Kernel for Jupyter notebooks. Contribute to bduggan/raku-jupyter-kernel development by creating an account on GitHub.

mediocreatbest @lemmy.sdf.org
[Paper] Optimizing Deep Learning Models For Raspberry Pi. Custom CNN (on MNIST data): inference time from 114 ms down to 3.75 ms. ResNet50 (on "flowers" data): from 1.1 s to 1.0 s (lowest) or 1.6 s (highest).

I'm a little unsure whether I interpreted the results correctly. It seems like the things TF Lite natively supports (apparently, their custom CNN trained on MNIST) get really fast, while everything else is a little hit-or-miss.
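For context, this is roughly the TF Lite conversion step those speedups hinge on; a minimal sketch assuming a small Keras model, not the paper's exact pipeline:

    import tensorflow as tf

    # A toy MNIST-sized model standing in for the paper's custom CNN.
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10),
    ])

    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    # DEFAULT enables post-training quantization where supported.
    converter.optimizations = [tf.lite.Optimize.DEFAULT]

    with open("model.tflite", "wb") as f:
        f.write(converter.convert())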

mediocreatbest @lemmy.sdf.org
TinyNeuralNetwork is a library to compress machine learning models through pruning, quantization, and more. Can also convert PyTorch models to TF Lite models.
github.com GitHub - alibaba/TinyNeuralNetwork: TinyNeuralNetwork is an efficient and easy-to-use deep learning model compression framework.

TinyNeuralNetwork is an efficient and easy-to-use deep learning model compression framework. - alibaba/TinyNeuralNetwork

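A sketch of the PyTorch-to-TF-Lite conversion, based on the TFLiteConverter shown in the project's README (the exact arguments are my assumption):

    import torch
    import torchvision
    from tinynn.converter import TFLiteConverter

    # Any traceable PyTorch model; ResNet18 as an example.
    model = torchvision.models.resnet18()
    model.eval()

    dummy_input = torch.randn(1, 3, 224, 224)  # input used for tracing
    converter = TFLiteConverter(model, dummy_input, tflite_path="resnet18.tflite")
    converter.convert()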
mediocreatbest @lemmy.sdf.org
Overview of machine learning frameworks that are supported on Raspberry Pi: OpenCV, TF Lite, Tencent ncnn, Tencent TNN, Alibaba MNN, Paddle Lite, ARMnn, MXNet + Gluon, PyTorch, and Caffe.
Deep learning software for Raspberry Pi and alternatives - Q-engineering
mediocreatbest @lemmy.sdf.org
Arm NN is an optimized library of tensor operators for machine learning models. It supports TF Lite / ONNX models and runs on Raspberry Pi 4 / armv7.
github.com GitHub - ARM-software/armnn: Arm NN ML Software. The code here is a read-only mirror of https://review.mlplatform.org/admin/repos/ml/armnn

Arm NN ML Software. The code here is a read-only mirror of https://review.mlplatform.org/admin/repos/ml/armnn - ARM-software/armnn

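A sketch of using Arm NN from Python through its TF Lite delegate; the library name and option keys follow Arm NN's docs as I remember them, so treat them as assumptions:

    import tflite_runtime.interpreter as tflite

    # Load the Arm NN delegate, preferring the NEON-accelerated CPU backend.
    armnn_delegate = tflite.load_delegate(
        library="libarmnnDelegate.so",
        options={"backends": "CpuAcc,CpuRef", "logging-severity": "info"},
    )

    interpreter = tflite.Interpreter(
        model_path="model.tflite",
        experimental_delegates=[armnn_delegate],
    )
    interpreter.allocate_tensors()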
mediocreatbest @lemmy.sdf.org
TextSynth is a hosted service for generating text completions using language models. Free and paid tiers. Could be useful for playing with LLMs without a powerful computer (pricing discussion in the body text).

I have linked the pricing page because I think that's the most important aspect of a service like this.

The price isn't too expensive, but it isn't particularly cheap either.

For generating 1 million tokens (roughly the length of the King James Bible), comparing OpenAI's models against TextSynth's, you're looking at:

  • OpenAI's gpt-3.5-turbo ("ChatGPT-3.5") is $2 / 1m tokens
  • TextSynth's M2M100 1.2B (cheapest) is $3 / 1m tokens
  • OpenAI's gpt-4 ("ChatGPT-4") is $4 / 1m tokens
  • TextSynth's GPT-NeoX 20B (most expensive) is $35 / 1m tokens
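Back-of-the-envelope check of those numbers in Python (rates copied from above, assumed flat):

    tokens = 1_000_000  # roughly the King James Bible
    rates_per_million = {
        "OpenAI gpt-3.5-turbo": 2.0,
        "TextSynth M2M100 1.2B": 3.0,
        "OpenAI gpt-4": 4.0,
        "TextSynth GPT-NeoX 20B": 35.0,
    }
    for name, rate in rates_per_million.items():
        print(f"{name}: ${tokens / 1e6 * rate:.2f} per million tokens")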
mediocreatbest @lemmy.sdf.org
LaMini-LM is a collection of small language models that are practical to run on local hardware without lots of resources. Models range from 250MB to 6.3GB.
github.com GitHub - mbzuai-nlp/LaMini-LM: LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions

LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions - mbzuai-nlp/LaMini-LM

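A sketch of running one of the smaller models locally via Hugging Face transformers; the checkpoint name is my assumption from the model list:

    from transformers import pipeline

    # LaMini-Flan-T5-248M is one of the ~250MB-class models.
    generator = pipeline("text2text-generation", model="MBZUAI/LaMini-Flan-T5-248M")

    result = generator("What are the three primary colors?", max_length=64)
    print(result[0]["generated_text"])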
mediocreatbest @lemmy.sdf.org
jncraton/languagemodels is a simple Python library for running LLMs locally. Supports instruction and embedding use cases. Chooses models according to available RAM.
github.com GitHub - jncraton/languagemodels: Explore large language models in 512MB of RAM

Explore large language models in 512MB of RAM. Contribute to jncraton/languagemodels development by creating an account on GitHub.

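A sketch of the API as the README presents it (function names are assumptions if they've since changed):

    import languagemodels as lm

    # Instruction following; the library picks a model that fits available RAM.
    print(lm.do("What color is the sky?"))

    # Embedding-backed document retrieval.
    lm.store_doc("Mars is the fourth planet from the Sun.")
    print(lm.get_doc_context("Which planet is Mars?"))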
mediocreatbest @lemmy.sdf.org
An Altoids tin turned into a watercolor kit, using Sculpey modeling clay to create a custom tray for the paints.
www.instructables.com Pocket-sized Watercolor Altoids Tin

Pocket-sized Watercolor Altoids Tin: Now that I have made this little kit I can't stop using it! I just started with Instructables, so excuse me if I make any mistakes... :) You will need: Altoids regular tin Altoids Smalls Sculpey clay color of your choice Watercolor tube paints Any …

mediocreatbest @lemmy.sdf.org
Taming AI Bots: prevent LLMs from entering "bad" states by continuously asking the LLM itself ("is this good? bad?") and steering generation away from the bad states.
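The mechanism sounds roughly like rejection sampling against a self-critique; a hypothetical sketch (not the paper's algorithm; generate and judge are stand-ins):

    import random

    def generate(prompt: str) -> str:
        # Stand-in for an LLM call producing a candidate continuation.
        return random.choice(["a helpful reply", "an unhinged reply"])

    def judge(text: str) -> bool:
        # Stand-in for asking the model itself "is this good or bad?"
        return "unhinged" not in text

    def guided_reply(prompt: str, max_tries: int = 5) -> str:
        # Resample until the self-critique accepts, or give up gracefully.
        for _ in range(max_tries):
            candidate = generate(prompt)
            if judge(candidate):
                return candidate
        return "I'd rather not continue this conversation."

    print(guided_reply("hello"))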
mediocreatbest @lemmy.sdf.org
"Prompt Gisting:" Train two models such that given inputs "Translate French<G1><G2>" and "<G1>G2>The cat," then G1 and G2 represent the entire instruction.

Abstract: "Prompting is now the primary way to utilize the multitask capabilities of language models (LMs), but prompts occupy valuable space in the input context window, and re-encoding the same prompt is computationally inefficient. Finetuning and distillation methods allow for specialization of LMs without prompting, but require retraining the model for each task. To avoid this trade-off entirely, we present gisting, which trains an LM to compress prompts into smaller sets of "gist" tokens which can be reused for compute efficiency. Gist models can be easily trained as part of instruction finetuning via a restricted attention mask that encourages prompt compression. On decoder (LLaMA-7B) and encoder-decoder (FLAN-T5-XXL) LMs, gisting enables up to 26x compression of prompts, resulting in up to 40% FLOPs reductions, 4.2% wall time speedups, storage savings, and minimal loss in output quality. "

mediocreatbest @lemmy.sdf.org
An LLM prompt that acts as a special kind of summarizer, compressing an idea into as short a text (a "tweet") as possible. Includes a decompressor.

The prompt: "compress the following text in a way that fits in a tweet (ideally) and such that you (GPT-4) can reconstruct the intention of the human who wrote text as close as possible to the original intention. This is for yourself. It does not need to be human readable or understandable. Abuse of language mixing, abbreviations, symbols (unicode and emoji), or any other encodings or internal representations is all permissible, as long as it, if pasted in a new inference cycle, will yield near-identical results as the original text:"

mediocreatbest @lemmy.sdf.org
Posts 13
Comments 2