YouTube Video
Click to view this content.
- https://github.com/HigherOrderCO/Bend
- https://higherorderco.com/
Tagging an image is simply associating a string value with an image pushed to a container registry, as a human-readable identifier. Unlike an image ID or image digest SHA, an image tag is only loosely associated, and can be remapped later to another image in the same registry repo, e.g. latest. Untagging simply removes the tag from the registry, but not necessarily the associated image itself.
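For instance, a tag can be remapped server-side in ECR without pulling any layers, by re-putting an existing image's manifest under the tag (the repository and tag names below are hypothetical):

```shell
# Point the mutable "latest" tag at the image currently tagged "v2.0"
MANIFEST=$(aws ecr batch-get-image \
  --repository-name my-repo \
  --image-ids imageTag=v2.0 \
  --query 'images[].imageManifest' --output text)
aws ecr put-image \
  --repository-name my-repo \
  --image-tag latest \
  --image-manifest "$MANIFEST"
# Whatever image "latest" pointed at before is now untagged (unless it holds
# other tags), but it still lives in the repo until a lifecycle policy or an
# explicit batch-delete-image removes it.
```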
I had to go full Rube Goldberg to clean up old image tags from closed PRs, while still leaving deletion of untagged images to the ECR repo's own lifecycle policy. Never go full Rube Goldberg:
- https://stackoverflow.com/questions/70065254/remove-ecr-image-tag-despite-imagereferencedbymanifestlist-error
- https://github.com/aws/containers-roadmap/issues/1567
```yaml
name: ECR Retention Policy

on:
  pull_request:
    types:
      - closed
  workflow_call:
  workflow_dispatch:

jobs:
  clean-unused-ecr:
    name: Delete unused container images
    runs-on: runs-on,runner=2cpu-linux-x64,run-id=${{ github.run_id }},image=ecr_login_image
    steps:
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-region: ${{ env.RUNS_ON_AWS_REGION }}
      - name: AWS ECR Login
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v2
      - name: AWS ECR Info
        shell: bash
        run: |
          echo "ECR_REGISTRY=${{ steps.login-ecr.outputs.registry }}" >> $GITHUB_ENV
          echo "ECR_REPO=$(basename ${{ github.repository }})" >> $GITHUB_ENV
      - name: Docker meta
        id: docker_meta
        uses: docker/metadata-action@v5
        with:
          images: ${{ env.ECR_REGISTRY }}/${{ env.ECR_REPO }}
          flavor: suffix=-
          tags: type=raw,value=${{ github.head_ref || github.ref_name }}
      # NOTE: This is convoluted because AWS ECR has no simple way to untag an
      # image without deleting it, given we want to leave deletion of untagged
      # images to the ECR repo's own lifecycle policy.
      # https://stackoverflow.com/questions/70065254/remove-ecr-image-tag-despite-imagereferencedbymanifestlist-error
      # https://github.com/aws/containers-roadmap/issues/1567
      - name: AWS ECR Cleanup
        shell: bash
        run: |
          REPO_EXISTS=$(aws ecr describe-repositories --repository-names $ECR_REPO 2>&1 || true)
          if echo "${REPO_EXISTS}" | grep -q 'RepositoryNotFoundException'; then
            echo "Repository not found, skipping cleanup."
            exit 0
          fi
          IMAGE_TAGS=$(aws ecr list-images --repository-name $ECR_REPO --query 'imageIds[*].imageTag' --output text)

          # Push a throwaway image under a temporary tag
          docker pull busybox
          docker tag busybox $ECR_REGISTRY/$ECR_REPO:_
          docker push $ECR_REGISTRY/$ECR_REPO:_

          TEMP_IMAGE=$(aws ecr batch-get-image \
            --repository-name $ECR_REPO \
            --image-ids imageTag=_)
          TEMP_MANIFEST=$(echo $TEMP_IMAGE | jq -r '.images[].imageManifest')
          TEMP_DIGEST=$(echo $TEMP_IMAGE | jq -r '.images[].imageId.imageDigest')

          # Repoint every tag matching this PR's branch prefix at the throwaway
          # image, which leaves the original images untagged
          TAG_PREFIX=$(echo ${{ fromJSON(steps.docker_meta.outputs.json).tags[0] }} | cut -d: -f2)
          for TAG in $IMAGE_TAGS; do
            if [[ $TAG == $TAG_PREFIX* ]]; then
              docker tag busybox $ECR_REGISTRY/$ECR_REPO:$TAG
              docker push $ECR_REGISTRY/$ECR_REPO:$TAG
              echo "Untagged image $TAG"
            fi
          done

          # Delete the temporary image by digest
          aws ecr batch-delete-image \
            --repository-name $ECR_REPO \
            --image-ids imageDigest=$TEMP_DIGEST
```
Wow, the `COPY` directive got a lot more powerful. I've been waiting for the `--parents` flag for years, while the `--exclude` argument is also a nice touch. Didn't know of the `/./` pivot point before, but that's handy.
Before this, I'd just been using an intermediary leaf stage within a multi-stage build to copy the build context and filter the dependency lock files of the entire super project into a matching parent structure that I could then deterministically copy from.
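That intermediary-stage workaround can be sketched roughly like this (a hypothetical Rust monorepo; the stage names and file patterns are illustrative, not the actual setup):

```dockerfile
# Leaf stage: copy the whole build context, then strip everything except the
# dependency manifests/lock files, preserving the parent directory structure
FROM debian:stable-slim AS lockfiles
COPY . /src
RUN find /src -type f ! -name 'Cargo.toml' ! -name 'Cargo.lock' -delete \
 && find /src -type d -empty -delete

FROM rust:1 AS build
WORKDIR /app
# This layer is only invalidated when a manifest or lock file actually
# changes, so the dependency fetch below stays cached deterministically
COPY --from=lockfiles /src/ ./
RUN cargo fetch
COPY . .
RUN cargo build --release
```

With the new flags, the leaf stage largely collapses into something like `COPY --parents ./**/Cargo.toml ./**/Cargo.lock ./` (behind the labs Dockerfile syntax at the time `--parents` landed).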
Ah man, I'm with a project that already uses a polyrepo setup and am starting an integration repo using submodules to coordinate the dev environment and unify with CI/CD. Submodules have been great for introspection and versioning, rather than relying on some opaque configuration file to check out all the different polyrepos at build time. I can click the submodule links on GitHub and get redirected right to the referenced commit, while many IDEs can also already associate the respective git tag for each submodule when opening from the super project.
I was kind of bummed to hear that worktrees don't have full support for submodules. I haven't used worktrees with this super project yet, but what did you find about the incompatibility with submodules? Are certain porcelain commands just not supported, or do certain behaviors not work as expected? Have you tried the global git config to enable recursing over submodules by default?
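For what it's worth, the worktree mechanics themselves seem solid; it's submodule population that needs an extra nudge. A throwaway sketch (paths under /tmp are arbitrary):

```shell
set -e
# Create a throwaway superproject to demo linked worktrees
rm -rf /tmp/super /tmp/super-feature
git init -q /tmp/super
cd /tmp/super
git config user.email you@example.com
git config user.name you
echo demo > README.md
git add README.md
git commit -qm "init"

# A linked worktree shares the .git object store but gets
# its own checkout and branch
git worktree add -q /tmp/super-feature -b feature

# In a real superproject, submodules are NOT populated automatically in the
# new worktree; they need an explicit init there:
#   git -C /tmp/super-feature submodule update --init --recursive
# And to make most porcelain commands recurse into submodules by default:
#   git config --global submodule.recurse true

git -C /tmp/super-feature branch --show-current  # prints "feature"
```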
YouTube Video
cross-posted from: https://programming.dev/post/12247721
> 🔥 🚢 overviews the recent supply chain attack on the XZ library.
I fell for it. It took me a minute into the game to figure out what was up and double-check today's date.
YouTube Video
cross-posted from: https://programming.dev/post/12228684
> April fool's!
cross-posted from: https://programming.dev/post/11870760
Does the live ISO created by this process include the dependencies or kernel modules upon live boot? E.g., could I use this to create an ISO image that includes, or pre-bakes, any custom or necessary drivers for Nvidia GPUs or finicky Wi-Fi cards when used/booted as just a live USB? That could really help when you'd otherwise have a chicken-and-egg problem after a hard drive failure, with no live USB to safely boot with working networking or display output.
I'm going to try and set one up for the rest of my project team. Looks like a neat way to simplify install setup.
YouTube Video
Wow! Didn't know it'd be that simple.
- https://nixos.wiki/wiki/Creating_a_NixOS_live_CD
YouTube Video
Note: video sponsored by Docker
I'm using a recent 42" LG OLED TV as a large affordable PC monitor in order to support 4K@120Hz+HDR@10bit, which is great for gaming or content creation that can appreciate the screen real estate. Anything in the proper PC monitor market similarly sized or even slightly smaller costs way more per screen area at feature parity.
Unfortunately such TVs rarely include anything other than HDMI for digital video input, despite the growing trend of connecting gaming PCs in the living room, e.g. with fiber optic HDMI cables. I actually went with a GPU with more than one HDMI output so I could display to both TVs in the house simultaneously.
Also, having an API as well as a remote to control my monitor is kind of nice. Enough folks are using LG TVs as monitors in this midsize range that there are even open source projects to entirely mimic conventional display behaviors:
I also kind of like using the TV as a simple KVM with fewer cables. For example with audio, I can independently control volume and mux output to either speakers or multiple Bluetooth devices from the TV, without having to fiddle around with re-pairing Bluetooth peripherals to each PC or gaming console. That's particularly nice when swapping from playing games on the PC to watching movies on a Chromecast with a friend over two pairs of headphones, while still keeping the house quiet for the family. That kind of KVM functionality and connectivity is still kind of a premium feature for modestly priced PC monitors. Of course, others find their own use cases for hacking the TV remote APIs:
> For three years there has been a bug report around 4K@120Hz being unavailable via HDMI 2.1 on the AMD Linux driver.
The wait continues...
YouTube Video
cross-posted from: https://programming.dev/post/10723262
> Didn't know about this case history with Nintendo, nor the name for the common exploit used:
> - Game Genie
>   - https://en.wikipedia.org/wiki/Game_Genie
> - Fusée Gelée exploit
>   - https://switch.homebrew.guide/gettingstarted/choosinganexploit.html#fusee-gelee
>   - https://medium.com/@SoyLatteChen/inside-fus%C3%A9e-gel%C3%A9e-the-unpatchable-entrypoint-for-nintendo-switch-hacking-26f42026ada0
YouTube Video
Didn't know about this case history with Nintendo, nor the name for the common exploit used:
- Game Genie
  - https://en.wikipedia.org/wiki/Game_Genie
- Fusée Gelée exploit
  - https://switch.homebrew.guide/gettingstarted/choosinganexploit.html#fusee-gelee
  - https://medium.com/@SoyLatteChen/inside-fus%C3%A9e-gel%C3%A9e-the-unpatchable-entrypoint-for-nintendo-switch-hacking-26f42026ada0
YouTube Video
Nice! Thanks for the clarification.
I was more curious about horizontal/vertical scroll snapping of text: if the underlying vim rendering is still limited to terminal-style whole text lines and fixed-width character cells, then it's less of a concern what exactly the GUI front end is.
Are you using the PWA, self-hosted or via Codespaces/another VPS? With which web browser?
I tried hosting code-server via Termux for a while, but running under proot felt too slow, even if the PWA UI ran silky smooth.
Perhaps when my warranty runs out I'll root the device to switch to using a proper chroot instead.
Do you use it combined with terminal emulators?
Wouldn't that result in vertical scroll snapping to textual lines, and horizontal scroll snapping to character widths?
A personal preference I suppose for navigation, but a bit jumpy to read from while moving rapidly.
Only just got a 120Hz monitor recently, so reading scrolling text now is so much easier and faster than before. Looking forward to any IDE that can match that kind of framerate performance as well.
Too bad I don't own a Mac to be able to test out the current release of Zed as an IDE. However, I'm not sure about the growing trend of rasterizing the entire GUI, as compared to conventional text rendering methods or GUI libs with established accessibility support.
YouTube Video
cross-posted from: https://programming.dev/post/10375143
> https://zed.dev
YouTube Video
https://zed.dev
The smell of fresh pine sawdust filled the air, with more floating up as I sanded the last rough corner of the stool. My toddler was happily sanding her own block off to the side. Woodworking was a new hobby I'd picked up. My old ones, coding, reading, writing, had
Having recently picked up woodworking after building my own office desk, this hit rather close to home.
Related HN discussion:
- https://news.ycombinator.com/item?id=39337923
YouTube Video
cross-posted from: https://programming.dev/post/9437130
> - Visualizer:
>   - https://toml-to-json.orchard.blog/
> - Code:
>   - https://github.com/orcharddweller/tom...
> - TOML spec:
>   - https://toml.io/en/v1.0.0
YouTube Video
- Visualizer:
  - https://toml-to-json.orchard.blog/
- Code:
  - https://github.com/orcharddweller/tom...
- TOML spec:
  - https://toml.io/en/v1.0.0
You could get a fiber optic display/HDMI cable, a fiber optic USB cable, and a USB hub, then just move the desktop tower into another room and run the cables through the walls or ceilings to your display setup. Might only be $100 or so cheaper than a used business thin client, but at least you could still do 4K 120Hz HDR 12-bit over some distance without compromise. E.g.:
Looks like Moonlight does have their app up on the App Store for iOS, and Sunshine has binaries for most operating systems. Personally, instead of Sunshine's server, I still use Nvidia's GeForce Experience software to stream games, as it takes less effort to configure. Of course, Nvidia may not be applicable if you're using integrated or AMD graphics instead.
Although, with Nvidia recently deprecating support for its Shield device, Sunshine provides support for the same protocol that Moonlight was originally developed against, but it's also open source. I've not used multi-monitor streaming with GeForce Experience, something Sunshine would be much more flexible in configuring.
As for connectivity, I'm unsure if iOS supports the same USB network tethering feature that Android has. I'd imagine at least the iPhone would, as that's a core feature/option for mobile hotspot connectivity, but maybe that's nixed from iPadOS? Alternatively, you could get yourself a USB-C hub or dock with an ethernet adapter and pass-through power delivery, so you can connect your iPad to a wired network and charge simultaneously.
Or you could just use Wi-Fi, but with wireless networks dropping and retrying packets, that'll impact latency or bitrate quality when casting displays. Although for something mostly static like Discord windows, that's probably less of an issue. Windows 11, and maybe 10, also has a hotspot mode, where you could share your wired network via your PC's wireless radio as an ad hoc Wi-Fi SSID. That could reduce latency and improve signal reception, but you'd have to start the hotspot setting every session, or whenever the device disconnects from Windows' hotspot for more than 15 minutes or so.
You could try other remote display streaming software as well, like Parsec. However, they have an online account login requirement with their freemium model, so I prefer the open source client Moonlight instead. That said, Parsec is a lot easier to use when streaming from outside your home, or when remotely single-screen co-oping with friends, without having to configure firewalls or domain names.
If you already have a similarly sized tablet, you could just buy a dummy HDMI plug for a few dollars to add a second virtual display, and then simply cast that screen to the mobile device.
There are pretty nice Android tablets now with 2.5K 120Hz HDR OLED screens. You can just connect one directly to the computer via USB, enable USB network tethering, then use something like the Moonlight client app with the Sunshine screen casting server. With the wired connection, and a high bit rate such as 150 Mbps, you can get single digit millisecond latency and hardly tell the difference from a native HDMI display.
Tablets like those might be on the high end, but at least you'd have a nice secondary display that's a bit more multifunctional. Or just go with a cheaper LCD based tablet or an old iPad, if color accuracy, refresh rate, or resolution isn't a priority.
A while back, I tried looking into what it would take to modify Android to disable Bluetooth microphones for wireless headsets, allowing call audio to be streamed via regular AAC or aptX, and the call microphone to be captured from the phone's internal mic. This would prevent the bit rate for call audio and microphone being effectively halved when using the ancient HFP/HSP Bluetooth codecs, instead allowing for the same call quality as when using a wired headset. This would help when multitasking with different audio sources, such as listening to music while hanging out on Discord, without the music being distorted by the lower bit rate of HFP/HSP. This would also benefit regular VoLTE, as regular call audio quality already exceeds that of legacy Bluetooth headset profiles.
Although, I didn't manage to tease apart the mechanics of the audio policy configuration files used by the source Android project, given the sparse documentation and vague commit history.
- https://source.android.com/docs/core/audio/implement-policy
- https://android.googlesource.com/platform/frameworks/av/+/dc46286/services/audiopolicy/config/audio_policy_configuration.xml#147
I'd certainly be fine with the awkwardness of holding up and speaking to my phone as if it were in speaker mode, while listening to the call over wireless headphones, in order to improve or double the audio quality. Always wondered what these audio policies fall back to when a Bluetooth device doesn't have a headset profile, but it's almost impossible to find high quality consumer grade Bluetooth headphones without a microphone nowadays.
For the call setting under Bluetooth audio devices, I really wish they would break out or separate the settings for using the audio device as a source or sink for call audio, sort of like how you can disable the HSP/HFP Bluetooth profiles for audio devices in Linux or Windows.
Is this about a new movie or session?
The post is just a thumbnail image.
I'm a robotics researcher. My interests include cybersecurity, repeatable & reproducible research, as well as open source robotics and Rust programming.