moving everything to active or retired vs incubating and graduated
All checks were successful
Reese's Arch Toolbox / build-and-push-arch-toolbox (push) Successful in 14s
15
active/ansible/ansible.md
Normal file
@@ -0,0 +1,15 @@
# Ansible Inventory

This project lives outside the typical project structure as a means of tracking inventory.

## Install

```bash
pacman -S ansible ansible-core python-kubernetes
```

## Setup

Verify connectivity to a host group (here, `fedora`) against the inventory:

```bash
ansible fedora -m ping -i ansible/inventory.yaml
```
16
active/ansible/inventory.yaml
Normal file
@@ -0,0 +1,16 @@
fedora:
  hosts:
    3dserver:
    podman:
    kubernetes:
    docker:
    yellow:
    borg:

ubuntu:
  hosts:
    unifi-external:

caddy:
  hosts:
    3dserver:
12
active/aws_cli/aws_cli.md
Normal file
@@ -0,0 +1,12 @@
# AWS CLI

## Install

```bash
# Run as root
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" && \
unzip awscliv2.zip && \
./aws/install && \
rm -f ./awscliv2.zip && \
rm -rf ./aws
```
79
active/aws_iam/aws_iam.md
Normal file
@@ -0,0 +1,79 @@
# AWS Credentials

Note: this requires the AWS CLI. See [AWS CLI](/active/aws_cli/aws_cli.md).

- [AWS Credentials](#aws-credentials)
  - [Route53 Credential Generation](#route53-credential-generation)
  - [AWS Certbot Route53 Policies](#aws-certbot-route53-policies)
  - [Email Credentials](#email-credentials)

## Route53 Credential Generation

```bash
export AWS_USERNAME=
aws iam create-user --user-name $AWS_USERNAME

# Allow updating reeseapps
aws iam attach-user-policy --user-name $AWS_USERNAME --policy-arn $(cat active/aws_iam/secrets/update-reeseapps-iam-policy-arn)

# Allow updating reeselink
aws iam attach-user-policy --user-name $AWS_USERNAME --policy-arn $(cat active/aws_iam/secrets/update-reeselink-iam-policy-arn)

# Create credentials (run aws configure on the machine that needs these to input them manually)
aws iam create-access-key --user-name $AWS_USERNAME
```

## AWS Certbot Route53 Policies

Example policy:

active/aws_iam/secrets/route53_reeselink.json

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "route53:ListHostedZones",
        "route53:GetChange"
      ],
      "Resource": [
        "*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "route53:ChangeResourceRecordSets",
        "route53:ListResourceRecordSets"
      ],
      "Resource": [
        "arn:aws:route53:::hostedzone/<zone_id>"
      ]
    }
  ]
}
```

```bash
# Allow updating route53 records for reeselink.com
aws iam create-policy --policy-name update-reeselink --policy-document file://active/aws_iam/secrets/route53_reeselink_policy.json

# Allow updating route53 records for reeseapps.com
aws iam create-policy --policy-name update-reeseapps --policy-document file://active/aws_iam/secrets/route53_reeseapps_policy.json
```
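The per-zone policy above can also be generated with a small script instead of hand-editing JSON; a minimal sketch (the zone ID is a placeholder, and `certbot_route53_policy` is a hypothetical helper, not part of any AWS tooling):

```python
import json

def certbot_route53_policy(zone_id: str) -> dict:
    """Build the two-statement certbot Route53 policy for one hosted zone."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                # Zone listing and change-status lookups need account-wide access.
                "Effect": "Allow",
                "Action": ["route53:ListHostedZones", "route53:GetChange"],
                "Resource": ["*"],
            },
            {
                # Record changes are scoped to the single zone certbot manages.
                "Effect": "Allow",
                "Action": [
                    "route53:ChangeResourceRecordSets",
                    "route53:ListResourceRecordSets",
                ],
                "Resource": [f"arn:aws:route53:::hostedzone/{zone_id}"],
            },
        ],
    }

policy_json = json.dumps(certbot_route53_policy("Z0000000EXAMPLE"), indent=2)
print(policy_json)
```

Write the output to the secrets file and pass it to `aws iam create-policy` as above.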
## Email Credentials

<https://docs.aws.amazon.com/ses/latest/dg/smtp-credentials.html>

You can technically do this through the CLI; see the link above.

1. Log into the AWS console
2. Navigate to SES
3. Click "SMTP Settings"
4. Click "Create SMTP Credentials"
5. Name it "ses-smtp-user.something"
6. Copy the username and password
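Per the AWS doc linked above, the SMTP password is derived from an IAM secret access key with a SigV4-style HMAC chain; a sketch of that derivation (the secret key below is the standard AWS example key, not a real credential):

```python
import base64
import hashlib
import hmac

def ses_smtp_password(secret_access_key: str, region: str) -> str:
    """Derive an SES SMTP password from an IAM secret access key."""
    def sign(key: bytes, msg: str) -> bytes:
        return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

    # Constants ("11111111", "aws4_request", "SendRawEmail", version byte 0x04)
    # come from the AWS SES SMTP credential documentation linked above.
    signature = sign(("AWS4" + secret_access_key).encode("utf-8"), "11111111")
    for part in (region, "ses", "aws4_request", "SendRawEmail"):
        signature = sign(signature, part)
    return base64.b64encode(bytes([0x04]) + signature).decode("utf-8")

print(ses_smtp_password("wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY", "us-east-1"))
```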
30
active/aws_route53/aws-route53.md
Normal file
@@ -0,0 +1,30 @@
# Network Management

- [Network Management](#network-management)
  - [Reeseapps vs Reeselink](#reeseapps-vs-reeselink)
  - [Reeselink Addresses](#reeselink-addresses)
  - [Reeseapps Addresses](#reeseapps-addresses)

## Reeseapps vs Reeselink

.reeseapps domains are for hosted services that do something. They are usually accessible via the
web and are usually public. Web apps, Minecraft servers, other game servers, etc. are all reeseapps
domains.

.reeselink domains are for linking machines together. They are for SSH, Cockpit, NFS, SMB, iSCSI,
and other machine-to-machine connections. They can be public or private and are mostly for
convenience.

## Reeselink Addresses

See `example-record-file.json` for example contents of `file://active/aws_route53/secrets/reeselink.json`.

```bash
aws route53 change-resource-record-sets --hosted-zone-id $(cat active/aws_route53/secrets/reeselink-zoneid) --change-batch file://active/aws_route53/secrets/reeselink.json
```

## Reeseapps Addresses

```bash
aws route53 change-resource-record-sets --hosted-zone-id $(cat active/aws_route53/secrets/reeseapps-zoneid) --change-batch file://active/aws_route53/secrets/reeseapps.json
```
31
active/aws_route53/example-record-file.json
Normal file
@@ -0,0 +1,31 @@
{
  "Comment": "CREATE/UPSERT/DELETE a record",
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "something.myhost.com",
        "Type": "AAAA",
        "TTL": 300,
        "ResourceRecords": [
          {
            "Value": "abcd:abcd:abcd:abcd:abcd:abcd:abcd:abcd"
          }
        ]
      }
    },
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "another.myhost.com",
        "Type": "AAAA",
        "TTL": 300,
        "ResourceRecords": [
          {
            "Value": "c0de:c0de:c0de:c0de:c0de:c0de:c0de:c0de"
          }
        ]
      }
    }
  ]
}
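A change batch like the example above can also be generated programmatically when there are many records to update; a minimal sketch (host names and addresses are placeholders):

```python
import json

def upsert_aaaa(name: str, address: str, ttl: int = 300) -> dict:
    """One UPSERT change for an AAAA record, matching the file format above."""
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": name,
            "Type": "AAAA",
            "TTL": ttl,
            "ResourceRecords": [{"Value": address}],
        },
    }

batch = {
    "Comment": "UPSERT AAAA records",
    "Changes": [
        upsert_aaaa("something.myhost.com", "abcd::1"),
        upsert_aaaa("another.myhost.com", "c0de::1"),
    ],
}
print(json.dumps(batch, indent=2))
```

Write the output to a file and pass it to `aws route53 change-resource-record-sets` with `--change-batch file://...`.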
3
active/aws_ses/aws_ses.md
Normal file
@@ -0,0 +1,3 @@
# AWS SES

AWS Simple Email Service
25
active/device_bambu_p1s/bambu_p1s.md
Normal file
@@ -0,0 +1,25 @@
# Bambu P1S Notes

## Bambu P1S

### WiFi

If you want to set up WiFi without logging in, follow this guide:

<https://wiki.bambulab.com/en/p1/manual/p1-sd-card-network-configuration-guide>

### Connecting via LAN mode

- Make sure you open ports 1990 and 2021 (TCP + UDP) in your firewall application.

### Troubleshooting

#### Under-Extrusion

I've had an issue where prints will look "clumpy" or "rough". This was resolved after a complete
hotend swap. I did some testing on the bad nozzle before replacing it and found it could still
extrude decently, but I had to raise the temperature 50-60°C above 220°C to get the plastic flowing
well. I don't think the ceramic heater was broken since the nozzle was clearly heating just fine.
This leads me to believe the thermistor went bad or became disconnected from the nozzle somehow. My
guess is Bambu didn't use enough thermal compound at the factory and it dried out and cracked,
creating a layer of air between the thermistor and the nozzle.
38
active/device_bangle.js/bangle.js.md
Normal file
@@ -0,0 +1,38 @@
# Bangle JS

## Browser

<https://www.espruino.com/Quick+Start+BLE#with-web-bluetooth>

## Android

<https://github.com/espruino/BangleApps/wiki#phone-integration>

## Apps

- [Android](https://banglejs.com/apps/?id=android)
- [Home Assistant](https://banglejs.com/apps/?id=ha)
- [Spotify](https://banglejs.com/apps/?id=spotrem)
- [Find Phone](https://banglejs.com/apps/?id=findphone)
- [Weather](https://banglejs.com/apps/?id=weather)

## Sleep Data

Export in JavaScript format.

Use `=(A7/(1000*60*60*24)+25569)+(-4/24)` to convert a JavaScript millisecond timestamp to a
spreadsheet date (25569 is the spreadsheet serial for 1970-01-01; the `-4/24` term applies a UTC-4
offset).

Use the following table to convert `sleep` values to human-readable form:

| key | value       |
| --- | ----------- |
| 0   | unknown     |
| 1   | not worn    |
| 2   | awake       |
| 3   | light sleep |
| 4   | deep sleep  |

Use `=VLOOKUP(<raw sleep value>,<cell with 0>:<cell with deep sleep>,2)` to convert a number to a
human-readable value.

Use `=(<end date>-<start date>)*1440` to calculate the minutes between each timestamp.
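The same conversions can be done outside a spreadsheet; a sketch in Python (the sample timestamps are made up, and the UTC-4 offset is an assumption matching the `-4/24` term above):

```python
from datetime import datetime, timedelta, timezone

# Sleep codes from the table above.
SLEEP_STATES = {0: "unknown", 1: "not worn", 2: "awake", 3: "light sleep", 4: "deep sleep"}

def js_timestamp_to_datetime(ms: int, utc_offset_hours: int = -4) -> datetime:
    """Convert a JavaScript millisecond timestamp to a local datetime."""
    tz = timezone(timedelta(hours=utc_offset_hours))
    return datetime.fromtimestamp(ms / 1000, tz)

def minutes_between(start_ms: int, end_ms: int) -> float:
    """Minutes between two JavaScript timestamps (the *1440 formula above)."""
    return (end_ms - start_ms) / (1000 * 60)

start = js_timestamp_to_datetime(1700000000000)
print(start.isoformat(), SLEEP_STATES[3], minutes_between(1700000000000, 1700000900000))
```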
240
active/device_framework_16/dgpu_wake_from_sleep_post.md
Normal file
@@ -0,0 +1,240 @@
## Laptop Specs

Operating System: Fedora Linux 41
KDE Plasma Version: 6.3.4
KDE Frameworks Version: 6.12.0
Qt Version: 6.8.2
Kernel Version: 6.13.9-200.fc41.x86_64 (64-bit)
Graphics Platform: Wayland
Processors: 16 × AMD Ryzen 9 7940HS w/ Radeon 780M Graphics
Memory: 98.9 GB of RAM
Graphics Processor: AMD Radeon 780M
Manufacturer: Framework
Product Name: Laptop 16 (AMD Ryzen 7040 Series)
System Version: AJ

## The Problem

Hey everyone, I've had no luck searching for this issue online. I might be looking in the wrong places, so please point me to existing posts if there's already a topic in flight on this, or another, forum.

My Framework 16's dGPU disconnects after waking from sleep. It won't show up in monitoring software (Mission Center from Flatpak) or MangoHud in games. It's also not being detected by Steam games or ollama - both will only detect and use the integrated GPU. I'm pretty sure it still shows up in `lspci` (shown below in the dGPU-disconnected state).

No BIOS settings have made a difference. This happens both on battery and while charging with the Framework charger and a 240-watt Delta charger. Power profiles don't make a difference either.

I have Secure Boot on. I don't use hibernate. I have a couple of custom udev rules that prevent USB devices from waking the laptop from sleep (shown below).

Looking for anything to try, thanks for the help!

## Details

### lspci
```bash
00:00.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Phoenix Root Complex
00:00.2 IOMMU: Advanced Micro Devices, Inc. [AMD] Phoenix IOMMU
00:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Phoenix Dummy Host Bridge
00:01.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Phoenix GPP Bridge
00:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Phoenix Dummy Host Bridge
00:02.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Phoenix GPP Bridge
00:02.4 PCI bridge: Advanced Micro Devices, Inc. [AMD] Phoenix GPP Bridge
00:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Phoenix Dummy Host Bridge
00:03.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 19h USB4/Thunderbolt PCIe tunnel
00:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Phoenix Dummy Host Bridge
00:04.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 19h USB4/Thunderbolt PCIe tunnel
00:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Phoenix Dummy Host Bridge
00:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Phoenix Internal GPP Bridge to Bus [C:A]
00:08.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Phoenix Internal GPP Bridge to Bus [C:A]
00:08.3 PCI bridge: Advanced Micro Devices, Inc. [AMD] Phoenix Internal GPP Bridge to Bus [C:A]
00:14.0 SMBus: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller (rev 71)
00:14.3 ISA bridge: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge (rev 51)
00:18.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Phoenix Data Fabric; Function 0
00:18.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Phoenix Data Fabric; Function 1
00:18.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Phoenix Data Fabric; Function 2
00:18.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Phoenix Data Fabric; Function 3
00:18.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Phoenix Data Fabric; Function 4
00:18.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Phoenix Data Fabric; Function 5
00:18.6 Host bridge: Advanced Micro Devices, Inc. [AMD] Phoenix Data Fabric; Function 6
00:18.7 Host bridge: Advanced Micro Devices, Inc. [AMD] Phoenix Data Fabric; Function 7
01:00.0 PCI bridge: Advanced Micro Devices, Inc. [AMD/ATI] Navi 10 XL Upstream Port of PCI Express Switch (rev 12)
02:00.0 PCI bridge: Advanced Micro Devices, Inc. [AMD/ATI] Navi 10 XL Downstream Port of PCI Express Switch (rev 12)
03:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Navi 33 [Radeon RX 7600/7600 XT/7600M XT/7600S/7700S / PRO W7600] (rev c1)
03:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Navi 31 HDMI/DP Audio
04:00.0 Network controller: Intel Corporation Wi-Fi 6E(802.11ax) AX210/AX1675* 2x2 [Typhoon Peak] (rev 1a)
05:00.0 Non-Volatile memory controller: Seagate Technology PLC E18 PCIe SSD (rev 01)
c4:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Phoenix1 (rev c1)
c4:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Rembrandt Radeon High Definition Audio Controller
c4:00.2 Encryption controller: Advanced Micro Devices, Inc. [AMD] Phoenix CCP/PSP 3.0 Device
c4:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Device 15b9
c4:00.4 USB controller: Advanced Micro Devices, Inc. [AMD] Device 15ba
c4:00.5 Multimedia controller: Advanced Micro Devices, Inc. [AMD] ACP/ACP3X/ACP6x Audio Coprocessor (rev 63)
c4:00.6 Audio device: Advanced Micro Devices, Inc. [AMD] Family 17h/19h/1ah HD Audio Controller
c5:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Phoenix Dummy Function
c5:00.1 Signal processing controller: Advanced Micro Devices, Inc. [AMD] AMD IPU Device
c6:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Phoenix Dummy Function
c6:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Device 15c0
c6:00.4 USB controller: Advanced Micro Devices, Inc. [AMD] Device 15c1
c6:00.5 USB controller: Advanced Micro Devices, Inc. [AMD] Pink Sardine USB4/Thunderbolt NHI controller #1
c6:00.6 USB controller: Advanced Micro Devices, Inc. [AMD] Pink Sardine USB4/Thunderbolt NHI controller #2
```
### udev Rules

```conf
ACTION=="add", SUBSYSTEM=="acpi", DRIVERS=="button", ATTRS{hid}=="PNP0C0D", ATTR{power/wakeup}="disabled"
ACTION=="add", SUBSYSTEM=="serio", DRIVERS=="atkbd", ATTR{power/wakeup}="disabled"
ACTION=="add", SUBSYSTEM=="i2c", DRIVERS=="i2c_hid_acpi", ATTRS{name}=="PIXA3854:00", ATTR{power/wakeup}="disabled"
ACTION=="add", SUBSYSTEM=="usb", DRIVERS=="usb", ATTR{power/wakeup}="disabled"
```
### dmesg

```bash
[27962.377892] CPU: 11 UID: 0 PID: 187288 Comm: kworker/11:6 Tainted: G W 6.13.9-200.fc41.x86_64 #1
[27962.377896] Tainted: [W]=WARN
[27962.377898] Hardware name: Framework Laptop 16 (AMD Ryzen 7040 Series)/FRANMZCP09, BIOS 03.05 11/13/2024
[27962.377901] Workqueue: pm pm_runtime_work
[27962.377906] RIP: 0010:amdgpu_irq_put+0x46/0x70 [amdgpu]
[27962.378233] Code: c0 74 33 48 8b 4e 10 48 83 39 00 74 29 89 d1 48 8d 04 88 8b 08 85 c9 74 11 f0 ff 08 74 07 31 c0 e9 0a a3 c6 fa e9 1a fd ff ff <0f> 0b b8 ea ff ff ff e9 f9 a2 c6 fa b8 ea ff ff ff e9 ef a2 c6 fa
[27962.378237] RSP: 0018:ffffb598a9667c98 EFLAGS: 00010246
[27962.378241] RAX: ffff9a9fe45a0ea8 RBX: ffff9a9fdb880000 RCX: 0000000000000000
[27962.378244] RDX: 0000000000000000 RSI: ffff9a9fdb8a5560 RDI: ffff9a9fdb880000
[27962.378246] RBP: ffff9a9fdb8c55d0 R08: 0000000000000000 R09: 0000000001195b5a
[27962.378249] R10: ffffb598a9667c48 R11: 0000000000000000 R12: 0000000000000006
[27962.378251] R13: ffff9a9fdb880000 R14: 0000000000000001 R15: ffff9aa5dcb2b040
[27962.378254] FS: 0000000000000000(0000) GS:ffff9ab67ff80000(0000) knlGS:0000000000000000
[27962.378257] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[27962.378260] CR2: 00001f4402539014 CR3: 000000161c82c000 CR4: 0000000000f50ef0
[27962.378263] PKRU: 55555554
[27962.378265] Call Trace:
[27962.378268] <TASK>
[27962.378270] ? srso_alias_return_thunk+0x5/0xfbef5
[27962.378274] ? show_trace_log_lvl+0x255/0x2f0
[27962.378280] ? show_trace_log_lvl+0x255/0x2f0
[27962.378289] ? gfx_v11_0_hw_fini+0x41/0xf0 [amdgpu]
[27962.378597] ? amdgpu_irq_put+0x46/0x70 [amdgpu]
[27962.378906] ? __warn.cold+0x93/0xfa
[27962.378912] ? amdgpu_irq_put+0x46/0x70 [amdgpu]
[27962.379216] ? report_bug+0xff/0x140
[27962.379222] ? handle_bug+0x58/0x90
[27962.379226] ? exc_invalid_op+0x17/0x70
[27962.379230] ? asm_exc_invalid_op+0x1a/0x20
[27962.379239] ? amdgpu_irq_put+0x46/0x70 [amdgpu]
[27962.379549] ? srso_alias_return_thunk+0x5/0xfbef5
[27962.379553] gfx_v11_0_hw_fini+0x41/0xf0 [amdgpu]
[27962.379879] gfx_v11_0_suspend+0xe/0x20 [amdgpu]
[27962.380210] amdgpu_ip_block_suspend+0x24/0x40 [amdgpu]
[27962.380523] amdgpu_device_ip_suspend_phase2+0x125/0x340 [amdgpu]
[27962.380830] amdgpu_device_suspend+0xcf/0x170 [amdgpu]
[27962.381154] amdgpu_pmops_runtime_suspend+0xb9/0x1a0 [amdgpu]
[27962.381488] pci_pm_runtime_suspend+0x67/0x1a0
[27962.381494] ? __pfx_pci_pm_runtime_suspend+0x10/0x10
[27962.381499] __rpm_callback+0x41/0x170
[27962.381503] ? __pfx_pci_pm_runtime_suspend+0x10/0x10
[27962.381508] rpm_callback+0x55/0x60
[27962.381512] ? __pfx_pci_pm_runtime_suspend+0x10/0x10
[27962.381516] rpm_suspend+0xe6/0x5f0
[27962.381520] ? srso_alias_return_thunk+0x5/0xfbef5
[27962.381523] ? finish_task_switch.isra.0+0x99/0x2c0
[27962.381531] pm_runtime_work+0x98/0xb0
[27962.381535] process_one_work+0x176/0x330
[27962.381541] worker_thread+0x252/0x390
[27962.381547] ? __pfx_worker_thread+0x10/0x10
[27962.381551] kthread+0xcf/0x100
[27962.381557] ? __pfx_kthread+0x10/0x10
[27962.381562] ret_from_fork+0x31/0x50
[27962.381568] ? __pfx_kthread+0x10/0x10
[27962.381573] ret_from_fork_asm+0x1a/0x30
[27962.381583] </TASK>
[27962.381587] ---[ end trace 0000000000000000 ]---
[27965.093916] amdgpu 0000:03:00.0: amdgpu: MES failed to respond to msg=REMOVE_QUEUE
[27965.093923] [drm:amdgpu_mes_unmap_legacy_queue [amdgpu]] *ERROR* failed to unmap legacy queue
[27967.860541] amdgpu 0000:03:00.0: amdgpu: MES failed to respond to msg=REMOVE_QUEUE
[27967.860554] [drm:amdgpu_mes_unmap_legacy_queue [amdgpu]] *ERROR* failed to unmap legacy queue
[27970.792292] amdgpu 0000:03:00.0: amdgpu: MES failed to respond to msg=REMOVE_QUEUE
[27970.792306] [drm:amdgpu_mes_unmap_legacy_queue [amdgpu]] *ERROR* failed to unmap legacy queue
[27973.717776] amdgpu 0000:03:00.0: amdgpu: MES failed to respond to msg=REMOVE_QUEUE
[27973.717790] [drm:amdgpu_mes_unmap_legacy_queue [amdgpu]] *ERROR* failed to unmap legacy queue
[27976.642661] amdgpu 0000:03:00.0: amdgpu: MES failed to respond to msg=REMOVE_QUEUE
[27976.642670] [drm:amdgpu_mes_unmap_legacy_queue [amdgpu]] *ERROR* failed to unmap legacy queue
[27979.503724] amdgpu 0000:03:00.0: amdgpu: MES failed to respond to msg=REMOVE_QUEUE
[27979.503737] [drm:amdgpu_mes_unmap_legacy_queue [amdgpu]] *ERROR* failed to unmap legacy queue
[27982.355092] amdgpu 0000:03:00.0: amdgpu: MES failed to respond to msg=REMOVE_QUEUE
[27982.355102] [drm:amdgpu_mes_unmap_legacy_queue [amdgpu]] *ERROR* failed to unmap legacy queue
[27985.020353] amdgpu 0000:03:00.0: amdgpu: MES failed to respond to msg=REMOVE_QUEUE
[27985.020365] [drm:amdgpu_mes_unmap_legacy_queue [amdgpu]] *ERROR* failed to unmap legacy queue
[27987.689301] amdgpu 0000:03:00.0: amdgpu: MES failed to respond to msg=REMOVE_QUEUE
[27987.689308] [drm:amdgpu_mes_unmap_legacy_queue [amdgpu]] *ERROR* failed to unmap legacy queue
[27987.916382] [drm:gfx_v11_0_hw_fini [amdgpu]] *ERROR* failed to halt cp gfx
[27987.916872] ------------[ cut here ]------------
[27987.916873] WARNING: CPU: 11 PID: 187288 at drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c:631 amdgpu_irq_put+0x46/0x70 [amdgpu]
[27987.917063] Modules linked in: ib_core uinput overlay rfcomm snd_seq_dummy snd_hrtimer nft_reject_ipv6 nft_masq nft_reject_ipv4 act_csum cls_u32 sch_htb nf_nat_tftp nf_conntrack_tftp bridge stp llc nf_conntrack_netbios_ns nf_conntrack_broadcast nft_fib_inet nft_fib_ipv4 nft_fib_ipv6 nft_fib nft_reject_inet nf_reject_ipv4 nf_reject_ipv6 nft_reject nft_ct nft_chain_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 ip_set nf_tables qrtr uhid bnep snd_hda_codec_realtek snd_hda_codec_generic snd_hda_scodec_component snd_hda_codec_hdmi sunrpc binfmt_misc vfat fat snd_sof_amd_acp70 snd_sof_amd_acp63 snd_sof_amd_vangogh snd_sof_amd_rembrandt snd_sof_amd_renoir snd_sof_amd_acp snd_sof_pci iwlmvm snd_sof_xtensa_dsp snd_sof cdc_mbim cdc_wdm cdc_ncm cdc_ether usbnet snd_sof_utils mac80211 snd_pci_ps snd_soc_acpi_amd_match snd_amd_sdw_acpi soundwire_amd soundwire_generic_allocation soundwire_bus snd_soc_sdca amd_atl intel_rapl_msr intel_rapl_common snd_soc_core libarc4 snd_hda_intel edac_mce_amd snd_intel_dspcfg snd_compress
[27987.917127] cros_usbpd_charger leds_cros_ec gpio_cros_ec cros_charge_control led_class_multicolor cros_ec_chardev ac97_bus cros_ec_hwmon cros_ec_sysfs cros_usbpd_logger cros_usbpd_notify snd_intel_sdw_acpi snd_pcm_dmaengine snd_hda_codec kvm_amd hid_sensor_als snd_rpl_pci_acp6x snd_acp_pci hid_sensor_trigger cros_ec_dev snd_hda_core hid_sensor_iio_common snd_acp_legacy_common kvm btusb iwlwifi industrialio_triggered_buffer snd_pci_acp6x btrtl btintel snd_hwdep kfifo_buf btbcm snd_seq btmtk cros_ec_lpcs spd5118 cros_ec snd_seq_device industrialio rapl cfg80211 wmi_bmof bluetooth snd_pcm snd_pci_acp5x pcspkr snd_rn_pci_acp3x snd_timer r8152 snd_acp_config thunderbolt mii k10temp snd_soc_acpi i2c_piix4 snd rfkill i2c_smbus snd_pci_acp3x amd_pmf soundcore amdtee joydev amd_sfh tee platform_profile amd_pmc loop nfnetlink zram lz4hc_compress lz4_compress dm_crypt hid_logitech_hidpp hid_logitech_dj typec_displayport amdgpu amdxcp drm_exec gpu_sched drm_panel_backlight_quirks drm_buddy drm_ttm_helper ttm nvme i2c_algo_bit
[27987.917196] drm_suballoc_helper crct10dif_pclmul crc32_pclmul drm_display_helper crc32c_intel nvme_core polyval_clmulni polyval_generic ghash_clmulni_intel video hid_multitouch sha512_ssse3 ucsi_acpi hid_sensor_hub typec_ucsi sha256_ssse3 sha1_ssse3 cec sp5100_tco typec nvme_auth wmi i2c_hid_acpi i2c_hid fuse i2c_dev
[27987.917222] CPU: 11 UID: 0 PID: 187288 Comm: kworker/11:6 Tainted: G W 6.13.9-200.fc41.x86_64 #1
[27987.917226] Tainted: [W]=WARN
[27987.917227] Hardware name: Framework Laptop 16 (AMD Ryzen 7040 Series)/FRANMZCP09, BIOS 03.05 11/13/2024
[27987.917229] Workqueue: pm pm_runtime_work
[27987.917235] RIP: 0010:amdgpu_irq_put+0x46/0x70 [amdgpu]
[27987.917403] Code: c0 74 33 48 8b 4e 10 48 83 39 00 74 29 89 d1 48 8d 04 88 8b 08 85 c9 74 11 f0 ff 08 74 07 31 c0 e9 0a a3 c6 fa e9 1a fd ff ff <0f> 0b b8 ea ff ff ff e9 f9 a2 c6 fa b8 ea ff ff ff e9 ef a2 c6 fa
[27987.917405] RSP: 0018:ffffb598a9667c68 EFLAGS: 00010246
[27987.917408] RAX: ffff9a9fe45a0508 RBX: ffff9a9fc7c27000 RCX: 0000000000000000
[27987.917410] RDX: 0000000000000000 RSI: ffff9a9fc7c27008 RDI: ffff9a9fdb880000
[27987.917411] RBP: ffff9a9fdb880000 R08: ffffffffbc873d28 R09: ffff9a9fc04020e8
[27987.917412] R10: ffffffffffffffff R11: 0000000000000000 R12: 0000000000000004
[27987.917413] R13: ffff9a9fdb880000 R14: 0000000000000001 R15: ffff9aa5dcb2b040
[27987.917415] FS: 0000000000000000(0000) GS:ffff9ab67ff80000(0000) knlGS:0000000000000000
[27987.917416] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[27987.917418] CR2: 00001f4402539014 CR3: 000000161c82c000 CR4: 0000000000f50ef0
[27987.917419] PKRU: 55555554
[27987.917420] Call Trace:
[27987.917423] <TASK>
[27987.917425] ? srso_alias_return_thunk+0x5/0xfbef5
[27987.917429] ? show_trace_log_lvl+0x255/0x2f0
[27987.917435] ? show_trace_log_lvl+0x255/0x2f0
[27987.917439] ? smu_smc_hw_cleanup+0x68/0xa0 [amdgpu]
[27987.917633] ? amdgpu_irq_put+0x46/0x70 [amdgpu]
[27987.917785] ? __warn.cold+0x93/0xfa
[27987.917789] ? amdgpu_irq_put+0x46/0x70 [amdgpu]
[27987.917930] ? report_bug+0xff/0x140
[27987.917934] ? handle_bug+0x58/0x90
[27987.917937] ? exc_invalid_op+0x17/0x70
[27987.917939] ? asm_exc_invalid_op+0x1a/0x20
[27987.917945] ? amdgpu_irq_put+0x46/0x70 [amdgpu]
[27987.918084] ? srso_alias_return_thunk+0x5/0xfbef5
[27987.918087] smu_smc_hw_cleanup+0x68/0xa0 [amdgpu]
[27987.918231] smu_suspend+0x77/0xe0 [amdgpu]
[27987.918344] amdgpu_ip_block_suspend+0x24/0x40 [amdgpu]
[27987.918461] amdgpu_device_ip_suspend_phase2+0x125/0x340 [amdgpu]
[27987.918568] amdgpu_device_suspend+0xcf/0x170 [amdgpu]
[27987.918674] amdgpu_pmops_runtime_suspend+0xb9/0x1a0 [amdgpu]
[27987.918778] pci_pm_runtime_suspend+0x67/0x1a0
[27987.918782] ? __pfx_pci_pm_runtime_suspend+0x10/0x10
[27987.918784] __rpm_callback+0x41/0x170
[27987.918787] ? __pfx_pci_pm_runtime_suspend+0x10/0x10
[27987.918790] rpm_callback+0x55/0x60
[27987.918792] ? __pfx_pci_pm_runtime_suspend+0x10/0x10
[27987.918794] rpm_suspend+0xe6/0x5f0
[27987.918795] ? srso_alias_return_thunk+0x5/0xfbef5
[27987.918797] ? finish_task_switch.isra.0+0x99/0x2c0
[27987.918802] pm_runtime_work+0x98/0xb0
[27987.918805] process_one_work+0x176/0x330
[27987.918808] worker_thread+0x252/0x390
[27987.918811] ? __pfx_worker_thread+0x10/0x10
[27987.918813] kthread+0xcf/0x100
[27987.918816] ? __pfx_kthread+0x10/0x10
[27987.918818] ret_from_fork+0x31/0x50
[27987.918821] ? __pfx_kthread+0x10/0x10
[27987.918823] ret_from_fork_asm+0x1a/0x30
[27987.918828] </TASK>
[27987.918829] ---[ end trace 0000000000000000 ]---
[27987.918833] amdgpu 0000:03:00.0: amdgpu: Fail to disable thermal alert!
[27987.918836] amdgpu 0000:03:00.0: amdgpu: suspend of IP block <smu> failed -22
[27987.918851] amdgpu 0000:03:00.0: amdgpu: SMU: response:0xFFFFFFFF for index:46 param:0x00000000 message:PrepareMp1ForUnload?
[27987.918853] amdgpu 0000:03:00.0: amdgpu: [PrepareMp1] Failed!
[27987.918855] [drm:amdgpu_device_ip_suspend_phase2 [amdgpu]] *ERROR* SMC failed to set mp1 state 2, -121
[28270.864511] amdgpu 0000:03:00.0: amdgpu: PSP is resuming...
[28271.140289] amdgpu 0000:03:00.0: amdgpu: PSP create ring failed!
[28271.140299] amdgpu 0000:03:00.0: amdgpu: PSP resume failed
[28271.140303] amdgpu 0000:03:00.0: amdgpu: resume of IP block <psp> failed -62
[28271.140309] amdgpu 0000:03:00.0: amdgpu: amdgpu_device_ip_resume failed (-62).
```
1
active/device_framework_16/framework_16.md
Normal file
@@ -0,0 +1 @@
# Framework Laptop 16
338
active/device_home_assistant/home_assistant.md
Normal file
@@ -0,0 +1,338 @@
# Home Assistant

- [Home Assistant](#home-assistant)
  - [Setup and Configuration](#setup-and-configuration)
    - [Schlage Door Lock](#schlage-door-lock)
    - [Philips Hue Lights](#philips-hue-lights)
    - [Shelly](#shelly)
    - [Relative Humidity Calculator](#relative-humidity-calculator)
    - [Font Colors](#font-colors)
    - [Light Indicator for Voice Assistant](#light-indicator-for-voice-assistant)
    - [Blank Button (Spacer)](#blank-button-spacer)
    - [Roku Remote](#roku-remote)

## Setup and Configuration

### Schlage Door Lock

1. Install Z-Wave
2. Install the Z-Wave JS module
3. Add device -> How do you want to add your device -> Legacy Secure
4. Disconnect and reconnect the battery on the lock
5. Press and hold the Z-Wave button until the light turns solid red; release and it should flash red
   1. (OR) Enter the programming pin on the lock -> 0 (this may take a few attempts; don't click the pair button)

If the lock ever disconnects you can safely delete it from Home Assistant and re-interview. It will
set back up with the correct entity IDs, and automations/dashboards will work just fine.

### Philips Hue Lights

1. I configure all Philips Hue lights through Zigbee directly connected to HA

Hue lights support `color_temp` in mireds; here are some mired-kelvin conversions:

| Kelvin | Mired |
| ------ | ----- |
| 6000   | 167   |
| 4000   | 250   |
| 2600   | 385   |
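Mired is just the reciprocal megakelvin, so the table above can be extended with a one-liner:

```python
def kelvin_to_mired(kelvin: float) -> int:
    """Mired = 1,000,000 / kelvin, rounded to the nearest integer."""
    return round(1_000_000 / kelvin)

def mired_to_kelvin(mired: float) -> int:
    """The conversion is its own inverse."""
    return round(1_000_000 / mired)

for k in (6000, 4000, 2600):
    print(k, "K ->", kelvin_to_mired(k), "mired")
```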
### Shelly

1. Outbound WebSocket `wss://homeassistant.reeseapps.com/api/shelly/ws`

Shelly devices can act as "passive" or "active" Bluetooth scanners. Both of these configurations
allow Home Assistant to proxy Bluetooth connections through Shelly devices, significantly extending
the range of your Home Assistant's Bluetooth capabilities. Active scanning uses more power but
is quicker to pick up and transmit device information. Note that "gateway mode" is not required;
just enable Bluetooth and RPC, or select "active" from the configuration menu for the Shelly
device.

### Relative Humidity Calculator

<https://www.wikihow.com/Calculate-Humidity>

You can calculate the relative humidity of the outdoor air if warmed to indoor temperatures like so:

```jinja
{% set dew_point = state_attr("weather.forecast_home", "dew_point") %}
{% set air_temp_f = state_attr("climate.ecobee_thermostat", "current_temperature") %}
{% set air_temp = (5/9)*(air_temp_f-32) %}
{% set sat_vap_press = 6.11 * 10**((7.5*air_temp) / (237.3+air_temp)) %}
{% set act_vap_press = 6.11 * 10**((7.5*dew_point) / (237.3+dew_point)) %}
{% set rel_hum = 100*(act_vap_press / sat_vap_press) %}

{{ dew_point }}
{{ air_temp }}
{{ sat_vap_press }}
{{ act_vap_press }}
{{ rel_hum }}
```
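The same calculation outside a template, using the Magnus-style saturation vapor pressure formula from the template above (the sample dew point and temperature are made up):

```python
def relative_humidity(dew_point_c: float, air_temp_c: float) -> float:
    """Relative humidity (%) from dew point and air temperature, both in Celsius."""
    def vapor_pressure(temp_c: float) -> float:
        # Magnus-style approximation, same constants as the template above.
        return 6.11 * 10 ** ((7.5 * temp_c) / (237.3 + temp_c))

    return 100 * vapor_pressure(dew_point_c) / vapor_pressure(air_temp_c)

# Outdoor air with a 10 degC dew point warmed to 20 degC indoors:
print(round(relative_humidity(10.0, 20.0), 1))
```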
### Font Colors

```html
<font color = {{ "green" if state_attr("climate.ecobee_thermostat", "current_humidity") > low_humidity and state_attr("climate.ecobee_thermostat", "current_humidity") < high_humidity else "red" }}>
  HVAC Humidity: {{ state_attr("climate.ecobee_thermostat", "current_humidity") }}%
</font>
```
### Light Indicator for Voice Assistant

```yaml
alias: Flash Lights on Bedroom Voice Assistant Start
description: ""
triggers:
  - type: turned_on
    device_id: d50fa1ae499e88bf37225c7e82ed189b
    entity_id: 7ab2896ca3a55efd2e0ee9bba91fdf68
    domain: binary_sensor
    metadata:
      secondary: false
    trigger: device
conditions: []
actions:
  - action: scene.create
    metadata: {}
    data:
      scene_id: bedroombeforescene
      snapshot_entities:
        - light.main_bedroom_lamps
  - action: light.turn_on
    metadata: {}
    data:
      transition: 0.25
      brightness_step_pct: 5
    target:
      entity_id: light.main_bedroom_lamps
  - delay:
      hours: 0
      minutes: 0
      seconds: 0
      milliseconds: 250
  - action: scene.turn_on
    data:
      entity_id: scene.bedroombeforescene
      transition: 0.25
mode: single
```

### Blank Button (Spacer)

```yaml
- type: button
  tap_action:
    action: none
  show_state: false
  show_name: false
  show_icon: false
  hold_action:
    action: none
```

### Roku Remote

```yaml
type: vertical-stack
cards:
  - type: entity
    entity: select.left_living_room_application
  - square: true
    type: grid
    cards:
      - show_name: true
        show_icon: true
        type: button
        tap_action:
          action: perform-action
          perform_action: remote.send_command
          target:
            entity_id: remote.left_living_room
          data:
            command: power
        entity: remote.left_living_room
        icon: mdi:power
        name: power
        show_state: false
        hold_action:
          action: none
      - show_name: true
        show_icon: true
        type: button
        tap_action:
          action: perform-action
          perform_action: remote.send_command
          target:
            entity_id: remote.left_living_room
          data:
            command: volume_down
        entity: remote.left_living_room
        icon: mdi:volume-minus
        name: volume down / hold mute
        show_state: false
        hold_action:
          action: perform-action
          perform_action: remote.send_command
          target:
            entity_id: remote.left_living_room
          data:
            command: volume_mute
      - show_name: true
        show_icon: true
        type: button
        tap_action:
          action: perform-action
          perform_action: remote.send_command
          target:
            entity_id: remote.left_living_room
          data:
            command: volume_up
        entity: remote.left_living_room
        icon: mdi:volume-plus
        name: volume up / hold mute
        show_state: false
        hold_action:
          action: perform-action
          perform_action: remote.send_command
          target:
            entity_id: remote.left_living_room
          data:
            command: volume_mute
      - show_name: true
        show_icon: true
        type: button
        tap_action:
          action: perform-action
          perform_action: remote.send_command
          target:
            entity_id: remote.left_living_room
          data:
            command: back
        entity: remote.left_living_room
        icon: mdi:undo
        name: back
        show_state: false
        hold_action:
          action: none
      - show_name: true
        show_icon: true
        type: button
        tap_action:
          action: perform-action
          perform_action: remote.send_command
          target:
            entity_id: remote.left_living_room
          data:
            command: up
        entity: remote.left_living_room
        icon: mdi:arrow-up-bold
        name: up
        hold_action:
          action: none
      - show_name: true
        show_icon: true
        type: button
        tap_action:
          action: perform-action
          perform_action: remote.send_command
          target:
            entity_id: remote.left_living_room
          data:
            command: home
        entity: remote.left_living_room
        icon: mdi:home
        name: home
        hold_action:
          action: none
      - show_name: true
        show_icon: true
        type: button
        tap_action:
          action: perform-action
          perform_action: remote.send_command
          target:
            entity_id: remote.left_living_room
          data:
            command: left
        entity: remote.left_living_room
        icon: mdi:arrow-left-bold
        name: left
        hold_action:
          action: none
      - show_name: true
        show_icon: true
        type: button
        tap_action:
          action: perform-action
          perform_action: remote.send_command
          target:
            entity_id: remote.left_living_room
          data:
            command: select
        entity: remote.left_living_room
        icon: mdi:select-all
        name: select
        hold_action:
          action: none
      - show_name: true
        show_icon: true
        type: button
        tap_action:
          action: perform-action
          perform_action: remote.send_command
          target:
            entity_id: remote.left_living_room
          data:
            command: right
        entity: remote.left_living_room
        icon: mdi:arrow-right-bold
        name: right
        hold_action:
          action: none
      - show_name: true
        show_icon: true
        type: button
        tap_action:
          action: perform-action
          perform_action: remote.send_command
          target:
            entity_id: remote.left_living_room
          data:
            command: play
        entity: remote.left_living_room
        icon: mdi:play-pause
        name: play/pause
        hold_action:
          action: none
      - show_name: true
        show_icon: true
        type: button
        tap_action:
          action: perform-action
          perform_action: remote.send_command
          target:
            entity_id: remote.left_living_room
          data:
            command: down
        entity: remote.left_living_room
        icon: mdi:arrow-down-bold
        name: down
        hold_action:
          action: none
      - show_name: true
        show_icon: true
        type: button
        tap_action:
          action: perform-action
          perform_action: remote.send_command
          target:
            entity_id: remote.left_living_room
          data:
            command: info
        entity: remote.left_living_room
        icon: mdi:wrench
        name: settings
        hold_action:
          action: none
title: Left Living Room TV
```

18
active/device_shelly/shelly.md
Normal file
@@ -0,0 +1,18 @@
# Shelly Devices

## Shelly Plug US

1. Connect to WiFi
2. Set password for AP
3. Disable AP
4. Set password for device authentication
5. Set "Restore last known state" for the output/relay
6. Set Device Name
7. Enable Bluetooth Gateway
8. Update Firmware

### Reset

Resetting is finicky. Plug the device in, then press and hold the power button until the
red light flashes quickly (not slowly; that's a reboot). You'll probably have to do it multiple
times because the devices seem to reboot halfway through the reset process.
7
active/device_yubikey/yubikey.md
Normal file
@@ -0,0 +1,7 @@
# Yubikey

## Configuration

1. You will likely need the [udev
   rules](https://support.yubico.com/hc/en-us/articles/360013708900-Using-Your-YubiKey-with-Linux)
   to use the AppImage configuration tool on Linux, even if your udev version is above 244.
3
active/kubernetes/kubernetes.md
Normal file
@@ -0,0 +1,3 @@
# Kubernetes

See [k3s](/active/systemd_k3s/k3s.md)
29
active/kubernetes_bitwarden/bitwarden.md
Normal file
@@ -0,0 +1,29 @@
# Bitwarden

- [Bitwarden](#bitwarden)
  - [Install](#install)

## Install

<https://bitwarden.com/help/self-host-with-helm/>

```bash
helm repo add bitwarden https://charts.bitwarden.com/
helm repo update

kubectl create namespace bitwarden
helm show values bitwarden/self-host > active/kubernetes_bitwarden/values.yaml

# Installation ID: https://bitwarden.com/host/
# Optional argument for Have I Been Pwned: --from-literal=globalSettings__hibpApiKey="REPLACE" \
kubectl create secret generic custom-secret -n bitwarden \
  --from-file=globalSettings__installation__id=./secrets/bitwarden/installation_id \
  --from-file=globalSettings__installation__key=./secrets/bitwarden/installation_key \
  --from-file=globalSettings__mail__smtp__username=./secrets/bitwarden/smtp_username \
  --from-file=globalSettings__mail__smtp__password=./secrets/bitwarden/smtp_password \
  --from-file=globalSettings__yubico__clientId=./secrets/bitwarden/yubico_id \
  --from-file=globalSettings__yubico__key=./secrets/bitwarden/yubico_secret \
  --from-file=SA_PASSWORD=./secrets/bitwarden/sa_password

helm upgrade bitwarden bitwarden/self-host --install --namespace bitwarden --values active/kubernetes_bitwarden/values.yaml
```
427
active/kubernetes_bitwarden/values.yaml
Normal file
@@ -0,0 +1,427 @@
# Optional - Override the chart name if desired
fullnameOverride: ""
nameOverride: ""

general:
  # Domain name for the service
  domain: bitwarden.reeseapps.com
  ingress:
    # Set to false if using a custom ingress
    enabled: true
    # Current supported values for ingress type include: nginx
    className: "nginx"
    ## - Annotations to add to the Ingress resource.
    annotations:
      # nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
      # nginx.ingress.kubernetes.io/use-regex: "true"
      # nginx.ingress.kubernetes.io/rewrite-target: /$1
    ## - Labels to add to the Ingress resource
    labels: {}
    # Certificate options
    tls:
      # TLS certificate secret name
      name: bitwarden
      # Cluster cert issuer (ex. Let's Encrypt) name if one exists
      clusterIssuer: letsencrypt
    # Ingress path configuration - The samples provided are for Nginx. Examples with other Ingress providers are in the chart Readme on GitHub
    paths:
      web:
        path: /(.*)
        pathType: ImplementationSpecific
      attachments:
        path: /attachments/(.*)
        pathType: ImplementationSpecific
      api:
        path: /api/(.*)
        pathType: ImplementationSpecific
      icons:
        path: /icons/(.*)
        pathType: ImplementationSpecific
      notifications:
        path: /notifications/(.*)
        pathType: ImplementationSpecific
      events:
        path: /events/(.*)
        pathType: ImplementationSpecific
      scim:
        path: /scim/(.*)
        pathType: ImplementationSpecific
      sso:
        path: /(sso/.*)
        pathType: ImplementationSpecific
      identity:
        path: /(identity/.*)
        pathType: ImplementationSpecific
      admin:
        path: /(admin/?.*)
        pathType: ImplementationSpecific

  # Override application container version tags. These will be versioned with the chart and need not be changed in most circumstances.
  coreVersionOverride: ""
  webVersionOverride: ""
  # Disable account registration on web vault
  disableUserRegistration: "false"
  # Comma-separated list of email addresses for Admin users
  admins: "false"
  email:
    # Email address used for invitations, typically no-reply@smtp-host
    replyToEmail: "no-reply@reeseapps.com"
    # Your SMTP server hostname (recommended) or IP address
    smtpHost: "email-smtp.us-east-1.amazonaws.com"
    # The SMTP port used by the SMTP server
    smtpPort: "587"
    # Whether your SMTP server uses an encryption protocol, "true" for SSL, "false" for TLS
    smtpSsl: "true"
  # Custom labels to add throughout the installation
  labels: {}
  # Specifies the access mode for persistent volume claims. This should not be changed in most cases, and the allowable
  # values are only ReadWriteMany and ReadWriteOnce. Please read https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes
  # to better understand these options before changing this value.
  volumeAccessMode: "ReadWriteMany"
  # Set to true to allow communication between your server and our cloud system. Doing so enables billing and license sync. Please see: https://bitwarden.com/help/self-host-an-organization/#step-4-setup-billing-and-license-sync
  enableCloudCommunication: false
  # Cloud region for sync. Please see: https://bitwarden.com/help/families-for-enterprise-self-hosted/#step-1-enable-cloud-communication
  cloudRegion: US

# Specify the name of the shared storage class
# This storage class requires ReadWriteMany. You will need to provide your own storage class. Storage classes with automatic volume provisioners are recommended.
sharedStorageClassName: "ssd"

# Secrets are required. Review the chart README on GitHub for details on creating these secrets
secrets:
  secretName: custom-secret
  # Secrets can either be created via 'kubectl create secrets' or by using a secret provider class. If using the latter, specify the name here
  secretProviderClass:

# Settings for individual service containers
component:
  # The Admin component
  admin:
    # Additional deployment labels
    labels: {}
    # Image name, tag, and pull policy
    image:
      name: bitwarden/admin
    # The pod resource requirements. You can adjust these up and down for your environment
    resources:
      requests:
        memory: "64Mi"
        cpu: "50m"
      limits:
        memory: "128Mi"
        cpu: "100m"
    # Specifies the strategy used to replace old Pods by new ones. The value can be "Recreate" or "RollingUpdate". "RollingUpdate" is the default value.
    deploymentStrategy: RollingUpdate
    # This will set the Kubernetes container security context
    securityContext:
    # Run the pod under a service account you create. This is especially useful for OpenShift deployments
    podServiceAccount:
  api:
    # Additional deployment labels
    labels: {}
    # Image name, tag, and pull policy
    image:
      name: bitwarden/api
    # The pod resource requirements. You can adjust these up and down for your environment
    resources:
      requests:
        memory: "128Mi"
        cpu: "100m"
      limits:
        memory: "256Mi"
        cpu: "200m"
    # Specifies the strategy used to replace old Pods by new ones. The value can be "Recreate" or "RollingUpdate". "RollingUpdate" is the default value.
    deploymentStrategy: RollingUpdate
    # This will set the Kubernetes container security context
    securityContext:
    # Run the pod under a service account you create. This is especially useful for OpenShift deployments
    podServiceAccount:
  attachments:
    # Additional deployment labels
    labels: {}
    # Image name, tag, and pull policy
    image:
      name: bitwarden/attachments
    # The pod resource requirements. You can adjust these up and down for your environment
    resources:
      requests:
        memory: "64Mi"
        cpu: "50m"
      limits:
        memory: "128Mi"
        cpu: "100m"
    # Specifies the strategy used to replace old Pods by new ones. The value can be "Recreate" or "RollingUpdate". "RollingUpdate" is the default value.
    deploymentStrategy: RollingUpdate
    # This will set the Kubernetes container security context
    securityContext:
    # Run the pod under a service account you create. This is especially useful for OpenShift deployments
    podServiceAccount:
  events:
    # Additional deployment labels
    labels: {}
    # Image name, tag, and pull policy
    image:
      name: bitwarden/events
    # The pod resource requirements. You can adjust these up and down for your environment
    resources:
      requests:
        memory: "64Mi"
        cpu: "50m"
      limits:
        memory: "128Mi"
        cpu: "100m"
    # Specifies the strategy used to replace old Pods by new ones. The value can be "Recreate" or "RollingUpdate". "RollingUpdate" is the default value.
    deploymentStrategy: RollingUpdate
    # This will set the Kubernetes container security context
    securityContext:
    # Run the pod under a service account you create. This is especially useful for OpenShift deployments
    podServiceAccount:
  icons:
    # Additional deployment labels
    labels: {}
    # Image name, tag, and pull policy
    image:
      name: bitwarden/icons
    # The pod resource requirements. You can adjust these up and down for your environment
    resources:
      requests:
        memory: "64Mi"
        cpu: "50m"
      limits:
        memory: "128Mi"
        cpu: "100m"
    # Specifies the strategy used to replace old Pods by new ones. The value can be "Recreate" or "RollingUpdate". "RollingUpdate" is the default value.
    deploymentStrategy: RollingUpdate
    # This will set the Kubernetes container security context
    securityContext:
    # Run the pod under a service account you create. This is especially useful for OpenShift deployments
    podServiceAccount:
  identity:
    # Additional deployment labels
    labels: {}
    # Image name, tag, and pull policy
    image:
      name: bitwarden/identity
    # The pod resource requirements. You can adjust these up and down for your environment
    resources:
      requests:
        memory: "128Mi"
        cpu: "100m"
      limits:
        memory: "256Mi"
        cpu: "200m"
    # Specifies the strategy used to replace old Pods by new ones. The value can be "Recreate" or "RollingUpdate". "RollingUpdate" is the default value.
    deploymentStrategy: RollingUpdate
    # This will set the Kubernetes container security context
    securityContext:
    # Run the pod under a service account you create. This is especially useful for OpenShift deployments
    podServiceAccount:
  notifications:
    # Additional deployment labels
    labels: {}
    # Image name, tag, and pull policy
    image:
      name: bitwarden/notifications
    # The pod resource requirements. You can adjust these up and down for your environment
    resources:
      requests:
        memory: "64Mi"
        cpu: "50m"
      limits:
        memory: "128Mi"
        cpu: "100m"
    # Specifies the strategy used to replace old Pods by new ones. The value can be "Recreate" or "RollingUpdate". "RollingUpdate" is the default value.
    deploymentStrategy: RollingUpdate
    # This will set the Kubernetes container security context
    securityContext:
    # Run the pod under a service account you create. This is especially useful for OpenShift deployments
    podServiceAccount:
  scim:
    # SCIM is disabled by default. To use this service, enable it below and set an appropriate Ingress path
    enabled: false
    # Additional deployment labels
    labels: {}
    # Image name, tag, and pull policy
    image:
      name: bitwarden/scim
    # The pod resource requirements. You can adjust these up and down for your environment
    resources:
      requests:
        memory: "64Mi"
        cpu: "50m"
      limits:
        memory: "128Mi"
        cpu: "100m"
    # Specifies the strategy used to replace old Pods by new ones. The value can be "Recreate" or "RollingUpdate". "RollingUpdate" is the default value.
    deploymentStrategy: RollingUpdate
    # This will set the Kubernetes container security context
    securityContext:
    # Run the pod under a service account you create. This is especially useful for OpenShift deployments
    podServiceAccount:
  sso:
    # Additional deployment labels
    labels: {}
    # Image name, tag, and pull policy
    image:
      name: bitwarden/sso
    # The pod resource requirements. You can adjust these up and down for your environment
    resources:
      requests:
        memory: "64Mi"
        cpu: "50m"
      limits:
        memory: "128Mi"
        cpu: "100m"
    # Specifies the strategy used to replace old Pods by new ones. The value can be "Recreate" or "RollingUpdate". "RollingUpdate" is the default value.
    deploymentStrategy: RollingUpdate
    # This will set the Kubernetes container security context
    securityContext:
    # Run the pod under a service account you create. This is especially useful for OpenShift deployments
    podServiceAccount:
  web:
    # Additional deployment labels
    labels: {}
    # Image name, tag, and pull policy
    image:
      name: bitwarden/web
    # The pod resource requirements. You can adjust these up and down for your environment
    resources:
      requests:
        memory: "64Mi"
        cpu: "50m"
      limits:
        memory: "128Mi"
        cpu: "100m"
    # Specifies the strategy used to replace old Pods by new ones. The value can be "Recreate" or "RollingUpdate". "RollingUpdate" is the default value.
    deploymentStrategy: RollingUpdate
    # This will set the Kubernetes container security context
    securityContext:
    # Run the pod under a service account you create. This is especially useful for OpenShift deployments
    podServiceAccount:
# Images used for jobs and init containers
supportComponents:
  dbMigrator:
    image:
      name: bitwarden/mssqlmigratorutility
  certGenerator:
    image:
      name: docker.io/nginx
      tag: 1.25.3
  kubectl:
    image:
      name: bitnami/kubectl
      tag: 1.21
# Data volume sizes for shared PVCs
volume:
  dataprotection:
    # Use an existing PVC by specifying the name.
    # existingClaim: claimName
    # Override the accessMode specified in general.volumeAccessMode
    # accessMode: ReadWriteOnce
    # Override the storageClass specified in sharedStorageClassName
    # storageClass: "shared-storage"
    size: "1Gi"
    labels: {}
  attachments:
    # Use an existing PVC by specifying the name.
    # existingClaim: claimName
    # Override the accessMode specified in general.volumeAccessMode
    # accessMode: ReadWriteOnce
    # Override the storageClass specified in sharedStorageClassName
    # storageClass: "shared-storage"
    size: 1Gi
    labels: {}
  licenses:
    # Use an existing PVC by specifying the name.
    # existingClaim: claimName
    # Override the accessMode specified in general.volumeAccessMode
    # accessMode: ReadWriteOnce
    # Override the storageClass specified in sharedStorageClassName
    # storageClass: "shared-storage"
    size: 1Gi
    labels: {}
  logs:
    # Logs are disabled by default
    enabled: false
    # Use an existing PVC by specifying the name.
    # existingClaim: claimName
    # Override the accessMode specified in general.volumeAccessMode
    # accessMode: ReadWriteOnce
    # Override the storageClass specified in sharedStorageClassName
    # storageClass: "shared-storage"
    size: 1Gi
    labels: {}

#
# Configure service account for pre- and post-install hooks
#
serviceAccount:
  name: service-account
  # Certain instances will require the creation of a pre-deployed service account. For instance, AWS IAM enabled service accounts need to be created outside
  # of the chart to allow for setting of permissions on other AWS services like Secrets Manager
  deployRolesOnly: false

#
# Configure database
#
database:
  # deploy the database pod? if false, a connection string to a SQL Server will need to be provided through one of the configuration providers.
  enabled: true
  # Labels to add to the MSSQL deployment
  labels: {}
  # Image name, tag, and pull policy
  image:
    name: mcr.microsoft.com/mssql/server
    # Tag of the image to use. (Defaults to general.coreVersion)
    tag: 2022-CU13-ubuntu-22.04
  # The container is limited to the resources below. Adjust for your environment.
  resources:
    requests:
      memory: "2G"
      cpu: "100m"
    limits:
      memory: "2G"
      cpu: "500m"

  # The MSSQL volumes for the PVCs
  volume:
    backups:
      # Use an existing PVC by specifying the name.
      # existingClaim: claimName
      # Override the accessMode specified in general.volumeAccessMode
      # accessMode: ReadWriteOnce
      # Override the storageClass specified in sharedStorageClassName
      # storageClass: "shared-storage"
      size: 1Gi
      labels: {}
    data:
      # Use an existing PVC by specifying the name.
      # existingClaim: claimName
      # Override the accessMode specified in general.volumeAccessMode
      # accessMode: ReadWriteOnce
      # Override the storageClass specified in sharedStorageClassName
      # storageClass: "shared-storage"
      size: 10Gi
      labels: {}
    log:
      # Use an existing PVC by specifying the name.
      # existingClaim: claimName
      # Override the accessMode specified in general.volumeAccessMode
      # accessMode: ReadWriteOnce
      # Override the storageClass specified in sharedStorageClassName
      # storageClass: "shared-storage"
      size: 10Gi
      labels: {}
  # Specifies the strategy used to replace old Pods by new ones. The value can be "OnDelete" or "RollingUpdate". "OnDelete" is the default value.
  # When set to OnDelete, the SQL pod will not be recreated automatically on update. It must be manually deleted to update.
  updateStrategy: OnDelete
  # This will set the Kubernetes container security context
  securityContext:
  # Run the pod under a service account you create. This is especially useful for OpenShift deployments
  podServiceAccount:

# You can specify raw Kubernetes manifests that will be applied before or after the base Helm install.
# Please see the chart README in GitHub for more information and examples.
rawManifests:
  preInstall: []
  postInstall: []
387
active/kubernetes_coredns/values.yaml
Normal file
@@ -0,0 +1,387 @@
# Default values for coredns.
|
||||
# This is a YAML-formatted file.
|
||||
# Declare variables to be passed into your templates.
|
||||
|
||||
image:
|
||||
repository: coredns/coredns
|
||||
# Overrides the image tag whose default is the chart appVersion.
|
||||
tag: ""
|
||||
pullPolicy: IfNotPresent
|
||||
## Optionally specify an array of imagePullSecrets.
|
||||
## Secrets must be manually created in the namespace.
|
||||
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
|
||||
##
|
||||
pullSecrets: []
|
||||
# pullSecrets:
|
||||
# - name: myRegistryKeySecretName
|
||||
|
||||
replicaCount: 1
|
||||
|
||||
resources:
|
||||
limits:
|
||||
cpu: 100m
|
||||
memory: 128Mi
|
||||
requests:
|
||||
cpu: 100m
|
||||
memory: 128Mi
|
||||
|
||||
rollingUpdate:
|
||||
maxUnavailable: 1
|
||||
maxSurge: 25%
|
||||
|
||||
terminationGracePeriodSeconds: 30
|
||||
|
||||
podAnnotations: {}
|
||||
# cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
|
||||
|
||||
serviceType: "ClusterIP"
|
||||
|
||||
prometheus:
|
||||
service:
|
||||
enabled: false
|
||||
annotations:
|
||||
prometheus.io/scrape: "true"
|
||||
prometheus.io/port: "9153"
|
||||
selector: {}
|
||||
monitor:
|
||||
enabled: false
|
||||
additionalLabels: {}
|
||||
namespace: ""
|
||||
interval: ""
|
||||
selector: {}
|
||||
|
||||
service:
|
||||
clusterIP: fd02:c91e:56f5::10
|
||||
clusterIPs:
|
||||
- fd02:c91e:56f5::10
|
||||
- 10.43.0.10
|
||||
# loadBalancerIP: ""
|
||||
# externalIPs: []
|
||||
# externalTrafficPolicy: ""
|
||||
ipFamilyPolicy: "RequireDualStack"
|
||||
# The name of the Service
|
||||
# If not set, a name is generated using the fullname template
|
||||
name: ""
|
||||
annotations: {}
|
||||
# Pod selector
|
||||
selector: {}
|
||||
|
||||
serviceAccount:
|
||||
create: false
|
||||
# The name of the ServiceAccount to use
|
||||
# If not set and create is true, a name is generated using the fullname template
|
||||
name: ""
|
||||
annotations: {}
|
||||
|
||||
rbac:
|
||||
# If true, create & use RBAC resources
|
||||
create: true
|
||||
# If true, create and use PodSecurityPolicy
|
||||
pspEnable: false
|
||||
# The name of the ServiceAccount to use.
|
||||
# If not set and create is true, a name is generated using the fullname template
|
||||
# name:
|
||||
|
||||
# isClusterService specifies whether chart should be deployed as cluster-service or normal k8s app.
|
||||
isClusterService: true
|
||||
|
||||
# Optional priority class to be used for the coredns pods. Used for autoscaler if autoscaler.priorityClassName not set.
|
||||
priorityClassName: ""
|
||||
|
||||
# Configure the pod level securityContext.
|
||||
podSecurityContext: {}
|
||||
|
||||
# Configure SecurityContext for Pod.
|
||||
# Ensure that required linux capability to bind port number below 1024 is assigned (`CAP_NET_BIND_SERVICE`).
|
||||
securityContext:
|
||||
capabilities:
|
||||
add:
|
||||
- NET_BIND_SERVICE
|
||||
|
||||
# Default zone is what Kubernetes recommends:
|
||||
# https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#coredns-configmap-options
|
||||
servers:
|
||||
- zones:
|
||||
- zone: .
|
||||
port: 53
|
||||
# If serviceType is nodePort you can specify nodePort here
|
||||
# nodePort: 30053
|
||||
# hostPort: 53
|
||||
plugins:
|
||||
- name: errors
|
||||
# Serves a /health endpoint on :8080, required for livenessProbe
|
||||
- name: health
|
||||
configBlock: |-
|
||||
lameduck 5s
|
||||
# Serves a /ready endpoint on :8181, required for readinessProbe
|
||||
- name: ready
|
||||
# Required to query kubernetes API for data
|
||||
- name: kubernetes
|
||||
parameters: cluster.local in-addr.arpa ip6.arpa
|
||||
configBlock: |-
|
||||
pods insecure
|
||||
fallthrough in-addr.arpa ip6.arpa
|
||||
ttl 30
|
||||
# Serves a /metrics endpoint on :9153, required for serviceMonitor
|
||||
- name: prometheus
|
||||
parameters: 0.0.0.0:9153
|
||||
- name: forward
|
||||
parameters: . 2606:4700:4700::1111
|
||||
- name: cache
|
||||
parameters: 30
|
||||
- name: loop
|
||||
- name: reload
|
||||
- name: loadbalance
|
||||
|
||||
# Complete example with all the options:
# - zones:                 # the `zones` block can be left out entirely, defaults to "."
#     - zone: hello.world. # optional, defaults to "."
#       scheme: tls://     # optional, defaults to "" (which equals "dns://" in CoreDNS)
#     - zone: foo.bar.
#       scheme: dns://
#       use_tcp: true      # set this parameter to optionally expose the port on tcp as well as udp for the DNS protocol
#                          # Note that this will not work if you are also exposing tls or grpc on the same server
#   port: 12345            # optional, defaults to "" (which equals 53 in CoreDNS)
#   plugins:               # the plugins to use for this server block
#     - name: kubernetes   # name of plugin, if used multiple times ensure that the plugin supports it!
#       parameters: foo bar # list of parameters after the plugin
#       configBlock: |-    # if the plugin supports extra block style config, supply it here
#         hello world
#         foo bar

# Extra configuration that is applied outside of the default zone block.
# Example to include additional config files, which may come from extraVolumes:
# extraConfig:
#   import:
#     parameters: /opt/coredns/*.conf
extraConfig: {}

# To use the livenessProbe, the health plugin needs to be enabled in CoreDNS' server config
livenessProbe:
  enabled: true
  initialDelaySeconds: 60
  periodSeconds: 10
  timeoutSeconds: 5
  failureThreshold: 5
  successThreshold: 1
# To use the readinessProbe, the ready plugin needs to be enabled in CoreDNS' server config
readinessProbe:
  enabled: true
  initialDelaySeconds: 30
  periodSeconds: 10
  timeoutSeconds: 5
  failureThreshold: 5
  successThreshold: 1

# expects input structure as per specification https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#affinity-v1-core
# for example:
#   affinity:
#     nodeAffinity:
#       requiredDuringSchedulingIgnoredDuringExecution:
#         nodeSelectorTerms:
#           - matchExpressions:
#               - key: foo.bar.com/role
#                 operator: In
#                 values:
#                   - master
affinity: {}

# expects input structure as per specification https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#topologyspreadconstraint-v1-core
# and supports Helm templating.
# For example:
#   topologySpreadConstraints:
#     - labelSelector:
#         matchLabels:
#           app.kubernetes.io/name: '{{ template "coredns.name" . }}'
#           app.kubernetes.io/instance: '{{ .Release.Name }}'
#       topologyKey: topology.kubernetes.io/zone
#       maxSkew: 1
#       whenUnsatisfiable: ScheduleAnyway
#     - labelSelector:
#         matchLabels:
#           app.kubernetes.io/name: '{{ template "coredns.name" . }}'
#           app.kubernetes.io/instance: '{{ .Release.Name }}'
#       topologyKey: kubernetes.io/hostname
#       maxSkew: 1
#       whenUnsatisfiable: ScheduleAnyway
topologySpreadConstraints: []

# Node labels for pod assignment
# Ref: https://kubernetes.io/docs/user-guide/node-selection/
nodeSelector: {}

# expects input structure as per specification https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#toleration-v1-core
# for example:
#   tolerations:
#     - key: foo.bar.com/role
#       operator: Equal
#       value: master
#       effect: NoSchedule
tolerations: []

# https://kubernetes.io/docs/tasks/run-application/configure-pdb/#specifying-a-poddisruptionbudget
podDisruptionBudget: {}

# configure custom zone files as per https://coredns.io/2017/05/08/custom-dns-entries-for-kubernetes/
zoneFiles: []
# - filename: example.db
#   domain: example.com
#   contents: |
#     example.com.   IN SOA sns.dns.icann.com. noc.dns.icann.com. 2015082541 7200 3600 1209600 3600
#     example.com.   IN NS  b.iana-servers.net.
#     example.com.   IN NS  a.iana-servers.net.
#     example.com.   IN A   192.168.99.102
#     *.example.com. IN A   192.168.99.102

# optional array of sidecar containers
extraContainers: []
# - name: some-container-name
#   image: some-image:latest
#   imagePullPolicy: Always
# optional array of extra volumes to create
extraVolumes: []
# - name: some-volume-name
#   emptyDir: {}
# optional array of mount points for extraVolumes
extraVolumeMounts: []
# - name: some-volume-name
#   mountPath: /etc/wherever

# optional array of secrets to mount inside coredns container
# possible usecase: need for secure connection with etcd backend
extraSecrets: []
# - name: etcd-client-certs
#   mountPath: /etc/coredns/tls/etcd
#   defaultMode: 420
# - name: some-fancy-secret
#   mountPath: /etc/wherever
#   defaultMode: 440

# To support legacy deployments using CoreDNS with the "k8s-app: kube-dns" label selectors.
# See https://github.com/coredns/helm/blob/master/charts/coredns/README.md#adopting-existing-coredns-resources
# k8sAppLabelOverride: "kube-dns"

# Custom labels to apply to Deployment, Pod, Configmap, Service, ServiceMonitor. Including autoscaler if enabled.
customLabels: {}

# Custom annotations to apply to Deployment, Pod, Configmap, Service, ServiceMonitor. Including autoscaler if enabled.
customAnnotations: {}

## Alternative configuration for HPA deployment if wanted
## Create HorizontalPodAutoscaler object.
##
# hpa:
#   enabled: false
#   minReplicas: 1
#   maxReplicas: 10
#   metrics:
#     - type: Resource
#       resource:
#         name: memory
#         target:
#           type: Utilization
#           averageUtilization: 60
#     - type: Resource
#       resource:
#         name: cpu
#         target:
#           type: Utilization
#           averageUtilization: 60

hpa:
  enabled: false
  minReplicas: 1
  maxReplicas: 2
  metrics: []

## Configure a cluster-proportional-autoscaler for coredns
# See https://github.com/kubernetes-incubator/cluster-proportional-autoscaler
autoscaler:
  # Enable the cluster-proportional-autoscaler
  enabled: false

  # Number of cores in the cluster per coredns replica
  coresPerReplica: 256
  # Number of nodes in the cluster per coredns replica
  nodesPerReplica: 16
  # Min size of replicaCount
  min: 0
  # Max size of replicaCount (default of 0 is no max)
  max: 0
  # Whether to include unschedulable nodes in the nodes/cores calculations - this requires version 1.8.0+ of the autoscaler
  includeUnschedulableNodes: false
  # If true does not allow single points of failure to form
  preventSinglePointFailure: true

  # Annotations for the coredns proportional autoscaler pods
  podAnnotations: {}

  ## Optionally specify some extra flags to pass to cluster-proportional-autoscaler.
  ## Useful for e.g. the nodelabels flag.
  # customFlags:
  #   - --nodelabels=topology.kubernetes.io/zone=us-east-1a

  image:
    repository: registry.k8s.io/cpa/cluster-proportional-autoscaler
    tag: "1.8.5"
    pullPolicy: IfNotPresent
    ## Optionally specify an array of imagePullSecrets.
    ## Secrets must be manually created in the namespace.
    ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
    ##
    pullSecrets: []
    # pullSecrets:
    #   - name: myRegistryKeySecretName

  # Optional priority class to be used for the autoscaler pods. priorityClassName used if not set.
  priorityClassName: ""

  # expects input structure as per specification https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#affinity-v1-core
  affinity: {}

  # Node labels for pod assignment
  # Ref: https://kubernetes.io/docs/user-guide/node-selection/
  nodeSelector: {}

  # expects input structure as per specification https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#toleration-v1-core
  tolerations: []

  # resources for autoscaler pod
  resources:
    requests:
      cpu: "20m"
      memory: "10Mi"
    limits:
      cpu: "20m"
      memory: "10Mi"

  # Options for autoscaler configmap
  configmap:
    ## Annotations for the coredns-autoscaler configmap
    # i.e. strategy.spinnaker.io/versioned: "false" to ensure configmap isn't renamed
    annotations: {}

  # Enables the livenessProbe for cluster-proportional-autoscaler - this requires version 1.8.0+ of the autoscaler
  livenessProbe:
    enabled: true
    initialDelaySeconds: 10
    periodSeconds: 5
    timeoutSeconds: 5
    failureThreshold: 3
    successThreshold: 1

  # optional array of sidecar containers
  extraContainers: []
  # - name: some-container-name
  #   image: some-image:latest
  #   imagePullPolicy: Always

deployment:
  skipConfig: false
  enabled: true
  name: ""
  ## Annotations for the coredns deployment
  annotations: {}
  ## Pod selector
  selector: {}
80
active/kubernetes_external-dns/deploy.yaml
Normal file
@@ -0,0 +1,80 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: external-dns
  labels:
    app.kubernetes.io/name: external-dns
rules:
  - apiGroups: [""]
    resources: ["services","endpoints","pods","nodes"]
    verbs: ["get","watch","list"]
  - apiGroups: ["extensions","networking.k8s.io"]
    resources: ["ingresses"]
    verbs: ["get","watch","list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: external-dns-viewer
  labels:
    app.kubernetes.io/name: external-dns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns
subjects:
  - kind: ServiceAccount
    name: external-dns
    namespace: kube-system # change to desired namespace: externaldns, kube-addons
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
  namespace: kube-system
  labels:
    app.kubernetes.io/name: external-dns
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app.kubernetes.io/name: external-dns
  template:
    metadata:
      labels:
        app.kubernetes.io/name: external-dns
    spec:
      serviceAccountName: external-dns
      containers:
        - name: external-dns
          image: registry.k8s.io/external-dns/external-dns:v0.14.2
          args:
            - --source=service
            - --source=ingress
            - --domain-filter=reeseapps.com
            - --provider=aws
            - --aws-zone-type=public
            - --registry=txt
            # - --txt-owner-id=external-dns
          env:
            - name: AWS_DEFAULT_REGION
              value: us-east-1 # change to region where EKS is installed
            - name: AWS_SHARED_CREDENTIALS_FILE
              value: /.aws/externaldns-credentials
          volumeMounts:
            - name: aws-credentials
              mountPath: /.aws
              readOnly: true
          resources:
            requests:
              memory: "64Mi"
              cpu: "250m"
            limits:
              memory: "128Mi"
              cpu: "500m"
      volumes:
        - name: aws-credentials
          secret:
            secretName: external-dns
8
active/kubernetes_external-dns/sa.yaml
Normal file
@@ -0,0 +1,8 @@
# comment out sa if it was previously created
apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-dns
  namespace: kube-system
  labels:
    app.kubernetes.io/name: external-dns
15
active/kubernetes_external-dns/values.yaml
Normal file
@@ -0,0 +1,15 @@
provider:
  name: aws
env:
  - name: AWS_SHARED_CREDENTIALS_FILE
    value: /etc/aws/credentials/externaldns-credentials
  - name: AWS_DEFAULT_REGION
    value: us-east-1 # change to region where EKS is installed
extraVolumes:
  - name: aws-credentials
    secret:
      secretName: external-dns # The secret is expected to store the credentials in a key named `externaldns-credentials`, matching AWS_SHARED_CREDENTIALS_FILE above
extraVolumeMounts:
  - name: aws-credentials
    mountPath: /etc/aws/credentials
    readOnly: true
75
active/kubernetes_gitea/gitea-staging-values.yaml
Normal file
@@ -0,0 +1,75 @@
image:
  tag: 1.21.4

ingress:
  enabled: true
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.org/client-max-body-size: "0"
  hosts:
    - host: gitea-staging.reeseapps.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - hosts:
        - gitea-staging.reeseapps.com
      secretName: gitea-staging-tls-cert

persistence:
  enabled: true
  create: true
  storageClass: zfs-iscsi-enc0
  claimName: data-gitea-staging-0
  annotations:
    "helm.sh/resource-policy": keep

global:
  storageClass: zfs-iscsi-enc1

postgresql:
  enabled: true
  image:
    tag: 15
  primary:
    persistence:
      enabled: true
      storageClass: zfs-iscsi-enc1
      annotations:
        "helm.sh/resource-policy": keep

postgresql-ha:
  enabled: false

gitea:
  admin:
    existingSecret: gitea-admin-secret
  config:
    service:
      DISABLE_REGISTRATION: true

service:
  ssh:
    port: 22
    type: ClusterIP

redis-cluster:
  enabled: true
  image:
    tag: 7.2

deployment:
  tolerations:
    - key: "node.kubernetes.io/unreachable"
      operator: "Exists"
      effect: "NoExecute"
      tolerationSeconds: 1
    - key: "node.kubernetes.io/not-ready"
      operator: "Exists"
      effect: "NoExecute"
      tolerationSeconds: 1

strategy:
  type: Recreate
86
active/kubernetes_gitea/gitea-values.yaml
Normal file
@@ -0,0 +1,86 @@
image:
  tag: 1.22

ingress:
  enabled: true
  className: nginx
  apiVersion: networking.k8s.io/v1
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.org/client-max-body-size: "0"
    nginx.ingress.kubernetes.io/stream-snippet: |
      server {
        listen 22;
        proxy_pass [::]:22;
      }
  hosts:
    - host: gitea.reeseapps.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - hosts:
        - gitea.reeseapps.com
      secretName: gitea-tls-cert

persistence:
  enabled: true
  create: true
  claimName: data-gitea-0
  annotations:
    "helm.sh/resource-policy": keep

postgresql:
  enabled: true
  image:
    tag: 15
  primary:
    persistence:
      enabled: true
      annotations:
        "helm.sh/resource-policy": keep

postgresql-ha:
  enabled: false

gitea:
  admin:
    existingSecret: gitea-admin-secret
  config:
    service:
      DISABLE_REGISTRATION: true
    server:
      SSH_PORT: 22
      SSH_DOMAIN: git.reeseapps.com

service:
  ssh:
    port: 22
    type: LoadBalancer
    externalTrafficPolicy: Local
    ipFamilyPolicy: SingleStack
    ipFamilies: ["IPv6"]
    annotations:
      metallb.universe.tf/address-pool: "external"
      external-dns.alpha.kubernetes.io/hostname: git.reeseapps.com

redis-cluster:
  enabled: false
  image:
    tag: 7.2

deployment:
  tolerations:
    - key: "node.kubernetes.io/unreachable"
      operator: "Exists"
      effect: "NoExecute"
      tolerationSeconds: 1
    - key: "node.kubernetes.io/not-ready"
      operator: "Exists"
      effect: "NoExecute"
      tolerationSeconds: 1

strategy:
  type: Recreate
128
active/kubernetes_gitea/gitea.md
Normal file
@@ -0,0 +1,128 @@
# Gitea

- [Gitea](#gitea)
  - [Staging](#staging)
  - [Install](#install)
  - [Backup and Restore](#backup-and-restore)
  - [Gitea Runners](#gitea-runners)
    - [Install](#install-1)
    - [Cache Cleanup](#cache-cleanup)

Gitea provides a helm chart [here](https://gitea.com/gitea/helm-chart/). We're not
going to modify much, but we are going to solidify some of the default values in case
they decide to change things. This is the first chart (besides ingress-nginx) where
we need to pay attention to the MetalLB annotation. This has been set in the values.yaml
file.

## Staging

There is a `gitea-staging-values.yaml` file with staging values. This should be installed in
the `gitea-staging` namespace. Follow the instructions below, but replace the `gitea`
namespace with `gitea-staging`. Staging is useful for testing major release upgrades,
especially since Gitea tends to change how `values.yaml` is structured.

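A staging install with those substitutions applied might look like this (the release name `gitea-staging` is an assumption, not prescribed by the doc):

```bash
# Same chart repo as the production install; only the release name,
# values file, and namespace change.
helm upgrade --install \
    gitea-staging \
    gitea-charts/gitea \
    --values active/kubernetes_gitea/gitea-staging-values.yaml \
    --namespace gitea-staging \
    --create-namespace
```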
## Install

First we need to create the gitea admin secret:

```bash
kubectl create namespace gitea
kubectl create secret generic gitea-admin-secret \
    -n gitea \
    --from-literal=username='gitea-admin' \
    --from-literal=password="$(pwgen -c -s 64 | head -n 1)" \
    --from-literal=email=''
```

```bash
helm repo add gitea-charts https://dl.gitea.io/charts/
helm repo update
helm upgrade --install \
    gitea \
    gitea-charts/gitea \
    --values active/kubernetes_gitea/gitea-values.yaml \
    --namespace gitea \
    --create-namespace
```

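Since the password above is generated by `pwgen` and never printed, a sketch for retrieving it later (secret name and key as created above):

```bash
# Decode the admin password stored in the gitea-admin-secret
kubectl get secret gitea-admin-secret -n gitea \
    -o jsonpath="{.data.password}" | base64 --decode ; echo
```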
## Backup and Restore

If you need to backup your database you can run:

```bash
# Backup
kubectl exec -it -n gitea gitea-postgresql-0 -- \
    pg_dump \
    --no-owner \
    --dbname=postgresql://gitea:gitea@localhost:5432 > gitea_backup.db

# Take gitea down to zero pods
kubectl scale statefulset gitea --replicas 0

# Drop the existing database
kubectl exec -it -n gitea gitea-postgresql-0 -- psql -U gitea

\c postgres;
drop database gitea;
CREATE DATABASE gitea WITH OWNER gitea TEMPLATE template0 ENCODING UTF8 LC_COLLATE 'en_US.UTF-8' LC_CTYPE 'en_US.UTF-8';
exit

# restore from backup
kubectl exec -it -n gitea gitea-postgresql-0 -- \
    psql \
    postgresql://gitea:gitea@localhost:5432 gitea < gitea_backup.db

# Restore gitea to 1 pod
kubectl scale statefulset gitea --replicas 1
```

## Gitea Runners

<https://docs.gitea.com/next/usage/actions/act-runner/#install-with-the-docker-image>

### Install

```bash
touch config.yaml

# Add `-e CONFIG_FILE=/config.yaml` to mount a config file

docker run \
    -v $PWD/config.yaml:/config.yaml \
    -v $PWD/data:/data \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -e GITEA_INSTANCE_URL=https://gitea.reeseapps.com \
    -e GITEA_RUNNER_REGISTRATION_TOKEN=m8ZJGE5yEys6oC0trni1o2CkKrmufTnI7dxXZfIi \
    -e GITEA_RUNNER_NAME=kube_runner \
    -e GITEA_RUNNER_LABELS="ubuntu-latest:docker://catthehacker/ubuntu:act-latest" \
    --restart always \
    --name kube_runner \
    -d gitea/act_runner:latest
```

### Cache Cleanup

Each org or project with a package registry will have its own cleanup rules. For example,
services -> settings -> Packages -> Add Cleanup Rule will allow you to create a cleanup
rule for packages stored under the "services" org. These cleanup rules should run automatically.

On the other hand, the docker builder cache will balloon out of control over time. The gitea
docker runner is handled outside of Gitea's context, so you'll need to clean it up yourself.

```bash
# Check used system resources
docker system df
```

You should run something like this on a schedule:

```bash
# Prune the builder cache
docker builder prune -a
```

To run it every day at midnight: `crontab -e`

```bash
0 0 * * * yes | docker builder prune -a
```

29
active/kubernetes_grafana/grafana.md
Normal file
@@ -0,0 +1,29 @@
# Grafana

## Helm

```bash
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
helm upgrade --install my-grafana grafana/grafana \
    --namespace monitoring \
    --create-namespace \
    --values active/kubernetes_grafana/values.yaml

kubectl get secret --namespace monitoring my-grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
```

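Before an ingress is wired up, a quick way to reach the UI is a port-forward; this sketch assumes the chart's default service port (80) and the `my-grafana` release name used above:

```bash
# Forward local port 3000 to the grafana service, then log in as
# "admin" with the password decoded above.
kubectl port-forward --namespace monitoring svc/my-grafana 3000:80
```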
metrics-server.kube-system.svc.cluster.local:9090

## Kube Prometheus Stack

```bash
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

helm upgrade --install \
    kube-prometheus-stack \
    prometheus-community/kube-prometheus-stack \
    --namespace kube-system \
    --values active/kubernetes_grafana/helm-prom-stack-values.yaml
```

4748
active/kubernetes_grafana/helm-prom-stack-values.yaml
Normal file
File diff suppressed because it is too large
1379
active/kubernetes_grafana/values.yaml
Normal file
File diff suppressed because it is too large
14
active/kubernetes_ingress-nginx/values.yaml
Normal file
@@ -0,0 +1,14 @@
controller:
  service:
    externalTrafficPolicy: Local
    annotations:
      metallb.universe.tf/address-pool: "external"
      metallb.universe.tf/allow-shared-ip: nginx
      external-dns.alpha.kubernetes.io/hostname: ingress-nginx.reeseapps.com
    ipFamilyPolicy: RequireDualStack
    ipFamilies:
      - IPv6
      - IPv4
  config:
    log-format-upstream: '| Remote Addr: $remote_addr:$server_port | Host: $host | Referer: $http_referer | $request | $time_local | $status |'
  allowSnippetAnnotations: true
23
active/kubernetes_iperf3/.helmignore
Executable file
@@ -0,0 +1,23 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/
23
active/kubernetes_iperf3/Chart.yaml
Executable file
@@ -0,0 +1,23 @@
apiVersion: v2
name: Iperf3
description: A Simple Iperf3 Chart

# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application

# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0

# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
appVersion: 1.16.0
13
active/kubernetes_iperf3/iperf3.md
Normal file
@@ -0,0 +1,13 @@
# Iperf3

This creates a basic iperf3 server.

## Install

```bash
helm upgrade --install \
    iperf3 \
    ./active/kubernetes_iperf3 \
    --namespace iperf3 \
    --create-namespace
```

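Once the LoadBalancer service is up, the external-dns annotation in `templates/service.yaml` publishes `{{ .Release.Name }}.reeseapps.com`, so with the `iperf3` release name above a throughput test from another host would look like:

```bash
# Run a client test against the published hostname (default iperf3 port 5201)
iperf3 -c iperf3.reeseapps.com -p 5201
```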
27
active/kubernetes_iperf3/templates/deployment.yaml
Normal file
@@ -0,0 +1,27 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: iperf
  template:
    metadata:
      labels:
        app.kubernetes.io/name: iperf
    spec:
      containers:
        - name: iperf
          image: networkstatic/iperf3
          args: ["-s"]
          ports:
            - containerPort: 5201
              name: iperf
          resources:
            requests:
              memory: "256Mi"
              cpu: "1m"
            limits:
              memory: "4Gi"
              cpu: "4"
17
active/kubernetes_iperf3/templates/service.yaml
Normal file
@@ -0,0 +1,17 @@
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}
  annotations:
    metallb.universe.tf/address-pool: "internal"
    external-dns.alpha.kubernetes.io/hostname: {{ .Release.Name }}.reeseapps.com
spec:
  type: LoadBalancer
  ipFamilies: ["IPv6"]
  selector:
    app.kubernetes.io/name: iperf
  ports:
    - name: iperf
      protocol: TCP
      port: 5201
      targetPort: iperf
0
active/kubernetes_iperf3/values.yaml
Executable file
23
active/kubernetes_jellyfin/.helmignore
Executable file
@@ -0,0 +1,23 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/
23
active/kubernetes_jellyfin/Chart.yaml
Executable file
@@ -0,0 +1,23 @@
apiVersion: v2
name: Jellyfin
description: A Simple Jellyfin Chart

# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application

# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0

# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
appVersion: 1.16.0
11
active/kubernetes_jellyfin/jellyfin.md
Normal file
@@ -0,0 +1,11 @@
# Jellyfin

## Install

```bash
helm upgrade --install \
    jellyfin \
    ./active/kubernetes_jellyfin \
    --namespace jellyfin \
    --create-namespace
```

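A quick sanity check before DNS and TLS settle, assuming the service defined in `templates/service.yaml` (ClusterIP, port 80 targeting the 8096 container port):

```bash
# Forward local port 8096 to the jellyfin service, then browse
# to http://localhost:8096
kubectl -n jellyfin port-forward svc/jellyfin 8096:80
```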
74
active/kubernetes_jellyfin/templates/deployment.yaml
Normal file
@@ -0,0 +1,74 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: jellyfin
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app.kubernetes.io/name: jellyfin
    spec:
      securityContext:
        runAsUser: 1000
        runAsGroup: 1000
        fsGroup: 1000
      tolerations:
        - key: "node.kubernetes.io/unreachable"
          operator: "Exists"
          effect: "NoExecute"
          tolerationSeconds: 1
        - key: "node.kubernetes.io/not-ready"
          operator: "Exists"
          effect: "NoExecute"
          tolerationSeconds: 1
      containers:
        - name: jellyfin
          image: {{ .Values.jellyfin.image }}
          ports:
            - containerPort: 8096
              name: http
          volumeMounts:
            - mountPath: /config
              name: config
            - mountPath: /cache
              name: cache
            - mountPath: /movies
              name: movies
            - mountPath: /shows
              name: shows
            - mountPath: /videos
              name: videos
          resources:
            requests:
              memory: "1Gi"
              cpu: "1m"
            limits:
              memory: "8Gi"
              cpu: "24"
      volumes:
        - name: config
          persistentVolumeClaim:
            claimName: {{ .Release.Name }}-config
        - name: cache
          persistentVolumeClaim:
            claimName: {{ .Release.Name }}-cache
        - name: movies
          nfs:
            server: driveripper-lab.reeselink.com
            path: /mnt/enc0/smb/media/Movies
            readOnly: true
        - name: shows
          nfs:
            server: driveripper-lab.reeselink.com
            path: /mnt/enc0/smb/media/Shows
            readOnly: true
        - name: videos
          nfs:
            server: driveripper-lab.reeselink.com
            path: /mnt/enc0/smb/media/Videos
            readOnly: true
25
active/kubernetes_jellyfin/templates/ingress.yaml
Normal file
@@ -0,0 +1,25 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ .Release.Name }}
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.org/client-max-body-size: "0"
spec:
  ingressClassName: nginx
  rules:
    - host: {{ .Values.jellyfin.domain }}
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: jellyfin
                port:
                  name: http
  tls:
    - hosts:
        - {{ .Values.jellyfin.domain }}
      secretName: jellyfin-tls-cert
27
active/kubernetes_jellyfin/templates/pvc.yaml
Normal file
@@ -0,0 +1,27 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ .Release.Name }}-config
  annotations:
    "helm.sh/resource-policy": keep
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi

---

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ .Release.Name }}-cache
  annotations:
    "helm.sh/resource-policy": keep
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 16Gi
17
active/kubernetes_jellyfin/templates/service.yaml
Normal file
@@ -0,0 +1,17 @@
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}
spec:
  ipFamilyPolicy: PreferDualStack
  ipFamilies:
  - IPv6
  - IPv4
  type: ClusterIP
  selector:
    app.kubernetes.io/name: jellyfin
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: http
3
active/kubernetes_jellyfin/values.yaml
Executable file
@@ -0,0 +1,3 @@
jellyfin:
  image: jellyfin/jellyfin:10
  domain: jellyfin.reeseapps.com
@@ -0,0 +1,10 @@
- op: replace # action
  path: /data/config.json # resource we want to change
  value: |-
    {
      "storageClassConfigs": {
        "local-path": {
          "sharedFileSystemPath": "/opt/local-path-provisioner"
        }
      }
    }
14
active/kubernetes_local-path-provisioner/kustomization.yaml
Normal file
@@ -0,0 +1,14 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
# Creates local path storage and a storage class
# Patched by ConfigMap-patch.yaml and namespace-patch.yaml
resources:
- local-path-storage.yaml
patches:
- target:
    group: ""
    version: v1
    kind: ConfigMap
  path: ConfigMap-patch.yaml
- target: {}
  path: namespace-patch.yaml
@@ -0,0 +1,17 @@
# Local Path Provisioner Install

1. `mkdir /var/lib/rancher/k3s/storage`
2. Edit fstab to mount your drive to `/var/lib/rancher/k3s/storage`
3. `systemctl daemon-reload`
4. `mount -a`

<https://github.com/rancher/local-path-provisioner/tree/master/deploy/chart/local-path-provisioner>

```bash
# Download the updated template from github
kubectl kustomize "github.com/rancher/local-path-provisioner/deploy?ref=v0.0.31" > active/kubernetes_local-path-provisioner/local-path-storage.yaml

# Apply customizations (ssd/hdd storage, read write many support)
kubectl kustomize active/kubernetes_local-path-provisioner | kubectl apply -f -
```
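For step 2, the fstab entry might look like the following (a sketch, not from this repo; the UUID placeholder and the `ext4` filesystem are assumptions about your disk):

```
# /etc/fstab: hypothetical entry backing the k3s local-path storage dir
UUID=<your-disk-uuid>  /var/lib/rancher/k3s/storage  ext4  defaults,nofail  0  2
```

`nofail` keeps the node booting even if the data drive is missing; drop it if you would rather the boot halt than provision volumes onto the root disk.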
189
active/kubernetes_local-path-provisioner/local-path-storage.yaml
Normal file
@@ -0,0 +1,189 @@
apiVersion: v1
kind: Namespace
metadata:
  name: local-path-storage
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
provisioner: rancher.io/local-path
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: local-path-provisioner-service-account
  namespace: local-path-storage
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: local-path-provisioner-role
  namespace: local-path-storage
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list
  - watch
  - create
  - patch
  - update
  - delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: local-path-provisioner-role
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  - persistentvolumeclaims
  - configmaps
  - pods
  - pods/log
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - persistentvolumes
  verbs:
  - get
  - list
  - watch
  - create
  - patch
  - update
  - delete
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - storage.k8s.io
  resources:
  - storageclasses
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: local-path-provisioner-bind
  namespace: local-path-storage
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: local-path-provisioner-role
subjects:
- kind: ServiceAccount
  name: local-path-provisioner-service-account
  namespace: local-path-storage
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: local-path-provisioner-bind
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: local-path-provisioner-role
subjects:
- kind: ServiceAccount
  name: local-path-provisioner-service-account
  namespace: local-path-storage
---
apiVersion: v1
data:
  config.json: |-
    {
      "nodePathMap":[
        {
          "node":"DEFAULT_PATH_FOR_NON_LISTED_NODES",
          "paths":["/opt/local-path-provisioner"]
        }
      ]
    }
  helperPod.yaml: |-
    apiVersion: v1
    kind: Pod
    metadata:
      name: helper-pod
    spec:
      priorityClassName: system-node-critical
      tolerations:
      - key: node.kubernetes.io/disk-pressure
        operator: Exists
        effect: NoSchedule
      containers:
      - name: helper-pod
        image: busybox
        imagePullPolicy: IfNotPresent
  setup: |-
    #!/bin/sh
    set -eu
    mkdir -m 0777 -p "$VOL_DIR"
  teardown: |-
    #!/bin/sh
    set -eu
    rm -rf "$VOL_DIR"
kind: ConfigMap
metadata:
  name: local-path-config
  namespace: local-path-storage
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: local-path-provisioner
  namespace: local-path-storage
spec:
  replicas: 1
  selector:
    matchLabels:
      app: local-path-provisioner
  template:
    metadata:
      labels:
        app: local-path-provisioner
    spec:
      containers:
      - command:
        - local-path-provisioner
        - --debug
        - start
        - --config
        - /etc/config/config.json
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: CONFIG_MOUNT_PATH
          value: /etc/config/
        image: rancher/local-path-provisioner:v0.0.31
        imagePullPolicy: IfNotPresent
        name: local-path-provisioner
        volumeMounts:
        - mountPath: /etc/config/
          name: config-volume
      serviceAccountName: local-path-provisioner-service-account
      volumes:
      - configMap:
          name: local-path-config
        name: config-volume
@@ -0,0 +1,3 @@
- op: replace # action
  path: /metadata/namespace # resource we want to change
  value: kube-system
19
active/kubernetes_metallb/addresspool.yaml
Normal file
@@ -0,0 +1,19 @@
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: unifi-pool
  namespace: kube-system
spec:
  addresses:
  - 2603:6013:3140:105:10:5:0:10-2603:6013:3140:105:10:5:0:210
  - 10.5.0.10-10.5.0.210
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2advertisement
  namespace: kube-system
spec:
  ipAddressPools:
  - unifi-pool
23
active/kubernetes_minecraft/.helmignore
Executable file
@@ -0,0 +1,23 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/
23
active/kubernetes_minecraft/Chart.yaml
Executable file
@@ -0,0 +1,23 @@
apiVersion: v2
name: Minecraft
description: A Simple Minecraft Chart

# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application

# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0

# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
appVersion: 1.16.0
active/kubernetes_minecraft/minecraft.md
Normal file
46
active/kubernetes_minecraft/minecraft.md
Normal file
@@ -0,0 +1,46 @@
|
||||
# Minecraft
|
||||
|
||||
Minecraft is available through the custom helm chart (including a server downloader). The example
|
||||
below installs nimcraft. For each installation you'll want to create your own values.yaml
|
||||
with a new port. The server-downloader is called "minecraft_get_server" and is available on
|
||||
[Github](https://github.com/ducoterra/minecraft_get_server).
|
||||
|
||||
## Testing
|
||||
|
||||
```bash
|
||||
helm upgrade --install \
|
||||
testcraft \
|
||||
./active/kubernetes_minecraft \
|
||||
--namespace minecraft \
|
||||
--create-namespace
|
||||
```
|
||||
|
||||
## Nimcraft
|
||||
|
||||
```bash
|
||||
helm upgrade --install \
|
||||
nimcraft \
|
||||
./active/kubernetes_minecraft \
|
||||
--namespace minecraft \
|
||||
--create-namespace
|
||||
```
|
||||
|
||||
## Courtnie
|
||||
|
||||
```bash
|
||||
helm upgrade --install \
|
||||
courtniecraft \
|
||||
./active/kubernetes_minecraft \
|
||||
--namespace minecraft \
|
||||
--create-namespace
|
||||
```
|
||||
|
||||
## Camcraft
|
||||
|
||||
```bash
|
||||
helm upgrade --install \
|
||||
camcraft1 \
|
||||
./active/kubernetes_minecraft \
|
||||
--namespace minecraft \
|
||||
--create-namespace
|
||||
```
|
||||
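The per-installation values file mentioned above might look like this (a sketch; the file name `values-nimcraft.yaml` and the chosen port are assumptions, while the keys come from the chart's own values.yaml):

```yaml
# values-nimcraft.yaml (hypothetical file name)
port: 25566              # each install needs its own external port
server_version: "1.21.3"
max_ram: 8
```

It would be passed to any of the commands above with `-f values-nimcraft.yaml`; values not overridden fall back to the chart defaults.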
8
active/kubernetes_minecraft/templates/configmap.yaml
Normal file
@@ -0,0 +1,8 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}
data:
  eula.txt: |
    eula=true
  server.properties: {{ toYaml .Values.server_props | indent 2 }}
72
active/kubernetes_minecraft/templates/deployment.yaml
Normal file
@@ -0,0 +1,72 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
spec:
  selector:
    matchLabels:
      app: {{ .Release.Name }}
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}
    spec:
      tolerations:
      - key: "node.kubernetes.io/unreachable"
        operator: "Exists"
        effect: "NoExecute"
        tolerationSeconds: 1
      - key: "node.kubernetes.io/not-ready"
        operator: "Exists"
        effect: "NoExecute"
        tolerationSeconds: 1
      initContainers:
      - name: get-version
        image: {{ .Values.get_server.image }}
        imagePullPolicy: Always
        env:
        - name: SERVER_VERSION
          value: {{ .Values.server_version | quote }}
        volumeMounts:
        - mountPath: /downloads
          name: data
      containers:
      - name: {{ .Release.Name }}
        image: {{ .Values.image }}
        imagePullPolicy: Always
        ports:
        - containerPort: 25565
        volumeMounts:
        - mountPath: /mc_data
          name: data
        - name: properties
          mountPath: /mc_data/server.properties
          subPath: server.properties
        - name: properties
          mountPath: /mc_data/eula.txt
          subPath: eula.txt
        tty: true
        stdin: true
        env:
        - name: MAX_RAM
          value: {{ .Values.max_ram | quote }}
        - name: MIN_RAM
          value: "1"
        resources:
          requests:
            memory: {{ div .Values.max_ram 2 }}Gi
            cpu: 1m
          limits:
            memory: {{ add 1 .Values.max_ram }}Gi
            cpu: {{ .Values.max_cpu | quote }}
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: {{ .Release.Name }}
      - name: properties
        configMap:
          name: {{ .Release.Name }}
      securityContext:
        fsGroup: 2000
13
active/kubernetes_minecraft/templates/pvc.yaml
Normal file
@@ -0,0 +1,13 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ .Release.Name }}
  annotations:
    "helm.sh/resource-policy": keep
spec:
  storageClassName: ssd
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 32Gi
17
active/kubernetes_minecraft/templates/service.yaml
Normal file
@@ -0,0 +1,17 @@
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}
  annotations:
    metallb.universe.tf/address-pool: "external"
    external-dns.alpha.kubernetes.io/hostname: {{ .Release.Name }}.reeseapps.com
spec:
  ipFamilies: ["IPv6"]
  externalTrafficPolicy: Cluster
  selector:
    app: {{ .Release.Name }}
  ports:
  - port: {{ .Values.port }}
    targetPort: 25565
    name: {{ .Release.Name }}
  type: LoadBalancer
65
active/kubernetes_minecraft/values.yaml
Executable file
@@ -0,0 +1,65 @@
image: ducoterra/minecraft:1.0.4
get_server:
  image: ducoterra/get-minecraft:2.0.2
server_version: "1.21.3"
port: 25565
max_cpu: 4
max_ram: 8
server_props: |
  enable-jmx-monitoring=false
  rcon.port=25575
  level-seed=
  gamemode=survival
  enable-command-block=false
  enable-query=false
  generator-settings={}
  enforce-secure-profile=true
  level-name=world
  motd=A Minecraft Server
  query.port=25565
  pvp=true
  generate-structures=true
  max-chained-neighbor-updates=1000000
  difficulty=easy
  network-compression-threshold=256
  max-tick-time=600000
  require-resource-pack=false
  use-native-transport=true
  max-players=20
  online-mode=true
  enable-status=true
  allow-flight=false
  initial-disabled-packs=
  broadcast-rcon-to-ops=true
  view-distance=10
  server-ip=
  resource-pack-prompt=
  allow-nether=true
  server-port=25565
  enable-rcon=false
  sync-chunk-writes=true
  op-permission-level=4
  prevent-proxy-connections=false
  hide-online-players=false
  resource-pack=
  entity-broadcast-range-percentage=100
  simulation-distance=10
  rcon.password=
  player-idle-timeout=0
  force-gamemode=false
  rate-limit=0
  hardcore=false
  white-list=true
  broadcast-console-to-ops=true
  spawn-npcs=true
  spawn-animals=true
  log-ips=true
  function-permission-level=2
  initial-enabled-packs=vanilla
  level-type=minecraft\:normal
  text-filtering-config=
  spawn-monsters=true
  enforce-whitelist=true
  spawn-protection=16
  resource-pack-sha1=
  max-world-size=29999984
5
active/kubernetes_pihole_blocklist/Dockerfile
Normal file
@@ -0,0 +1,5 @@
FROM nginx
COPY blocklists.txt /usr/share/nginx/html
RUN for url in $(cat /usr/share/nginx/html/blocklists.txt);\
    do echo >> /usr/share/nginx/html/hosts && curl $url >> /usr/share/nginx/html/hosts;\
    done
48
active/kubernetes_pihole_blocklist/blocklists.txt
Normal file
@@ -0,0 +1,48 @@
https://raw.githubusercontent.com/StevenBlack/hosts/master/hosts
https://raw.githubusercontent.com/PolishFiltersTeam/KADhosts/master/KADhosts.txt
https://raw.githubusercontent.com/FadeMind/hosts.extras/master/add.Spam/hosts
https://v.firebog.net/hosts/static/w3kbl.txt
https://raw.githubusercontent.com/matomo-org/referrer-spam-blacklist/master/spammers.txt
https://someonewhocares.org/hosts/zero/hosts
https://raw.githubusercontent.com/VeleSila/yhosts/master/hosts
https://winhelp2002.mvps.org/hosts.txt
https://v.firebog.net/hosts/neohostsbasic.txt
https://raw.githubusercontent.com/RooneyMcNibNug/pihole-stuff/master/SNAFU.txt
https://paulgb.github.io/BarbBlock/blacklists/hosts-file.txt
https://adaway.org/hosts.txt
https://v.firebog.net/hosts/AdguardDNS.txt
https://v.firebog.net/hosts/Admiral.txt
https://raw.githubusercontent.com/anudeepND/blacklist/master/adservers.txt
https://s3.amazonaws.com/lists.disconnect.me/simple_ad.txt
https://v.firebog.net/hosts/Easylist.txt
https://pgl.yoyo.org/adservers/serverlist.php?hostformat=hosts&showintro=0&mimetype=plaintext
https://raw.githubusercontent.com/FadeMind/hosts.extras/master/UncheckyAds/hosts
https://raw.githubusercontent.com/bigdargon/hostsVN/master/hosts
https://raw.githubusercontent.com/jdlingyu/ad-wars/master/hosts
https://v.firebog.net/hosts/Easyprivacy.txt
https://v.firebog.net/hosts/Prigent-Ads.txt
https://raw.githubusercontent.com/FadeMind/hosts.extras/master/add.2o7Net/hosts
https://raw.githubusercontent.com/crazy-max/WindowsSpyBlocker/master/data/hosts/spy.txt
https://hostfiles.frogeye.fr/firstparty-trackers-hosts.txt
https://www.github.developerdan.com/hosts/lists/ads-and-tracking-extended.txt
https://raw.githubusercontent.com/Perflyst/PiHoleBlocklist/master/android-tracking.txt
https://raw.githubusercontent.com/Perflyst/PiHoleBlocklist/master/SmartTV.txt
https://raw.githubusercontent.com/Perflyst/PiHoleBlocklist/master/AmazonFireTV.txt
https://gitlab.com/quidsup/notrack-blocklists/raw/master/notrack-blocklist.txt
https://raw.githubusercontent.com/DandelionSprout/adfilt/master/Alternate%20versions%20Anti-Malware%20List/AntiMalwareHosts.txt
https://osint.digitalside.it/Threat-Intel/lists/latestdomains.txt
https://s3.amazonaws.com/lists.disconnect.me/simple_malvertising.txt
https://v.firebog.net/hosts/Prigent-Crypto.txt
https://raw.githubusercontent.com/FadeMind/hosts.extras/master/add.Risk/hosts
https://bitbucket.org/ethanr/dns-blacklists/raw/8575c9f96e5b4a1308f2f12394abd86d0927a4a0/bad_lists/Mandiant_APT1_Report_Appendix_D.txt
https://phishing.army/download/phishing_army_blocklist_extended.txt
https://gitlab.com/quidsup/notrack-blocklists/raw/master/notrack-malware.txt
https://v.firebog.net/hosts/RPiList-Malware.txt
https://v.firebog.net/hosts/RPiList-Phishing.txt
https://raw.githubusercontent.com/Spam404/lists/master/main-blacklist.txt
https://raw.githubusercontent.com/AssoEchap/stalkerware-indicators/master/generated/hosts
https://urlhaus.abuse.ch/downloads/hostfile/
https://malware-filter.gitlab.io/malware-filter/phishing-filter-hosts.txt
https://v.firebog.net/hosts/Prigent-Malware.txt
https://zerodot1.gitlab.io/CoinBlockerLists/hosts_browser
https://blocklistproject.github.io/Lists/everything.txt
8
active/kubernetes_pihole_blocklist/docker-compose.yaml
Normal file
@@ -0,0 +1,8 @@
version: '3'

services:
  repo:
    image: ducoterra/blocklist:0.0.2
    build: .
    ports:
    - 8080:80
23
active/kubernetes_pihole_blocklist/helm/.helmignore
Normal file
@@ -0,0 +1,23 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/
24
active/kubernetes_pihole_blocklist/helm/Chart.yaml
Normal file
@@ -0,0 +1,24 @@
apiVersion: v2
name: blocklist
description: A Helm chart for Kubernetes

# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application

# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0

# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "1.16.0"
@@ -0,0 +1,73 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ .Release.Name }}
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ .Release.Name }}
    spec:
      containers:
      - name: nginx
        image: {{ .Values.image }}
        imagePullPolicy: Always
        ports:
        - containerPort: 80
          name: http
        resources:
          requests:
            memory: "1Gi"
            cpu: "1m"
          limits:
            memory: "1Gi"
            cpu: "1"

---

apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}
spec:
  type: ClusterIP
  selector:
    app.kubernetes.io/name: {{ .Release.Name }}
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: http

---

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ .Release.Name }}
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.org/client-max-body-size: "0"
spec:
  ingressClassName: nginx
  rules:
  - host: {{ .Values.domain }}
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: {{ .Release.Name }}
            port:
              name: http
  tls:
  - hosts:
    - {{ .Values.domain }}
    secretName: {{ .Release.Name }}-tls-cert
2
active/kubernetes_pihole_blocklist/helm/values.yaml
Normal file
@@ -0,0 +1,2 @@
image: ducoterra/blocklist:0.0.2
domain: blocklist.reeseapps.com
29
active/kubernetes_pihole_blocklist/pihole_blocklist.md
Normal file
@@ -0,0 +1,29 @@
# Pihole

See `podman` for the pihole installation. This is just the blocklist.

## Blocklists

Add this to your pihole adlists:

<https://blocklist.reeseapps.com/hosts>

## Updating blocklist.reeseapps.com

Add lists to blocklists.txt and build the project.

Blocklists are baked into an nginx image, built and pushed with podman, and served from the cluster:

```bash
podman-compose build
podman-compose push

helm upgrade --install \
    --namespace pihole \
    --create-namespace \
    blocklist ./pihole_blocklist/helm
```

## Notes

<https://v.firebog.net/hosts/lists.php>
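The Dockerfile concatenates every list in blocklists.txt into one hosts file, so the same domain can appear many times. As a quick sanity check on the combined output, you could count the unique domains it blocks (a sketch; the `printf` lines here stand in for real list content):

```shell
# Build a tiny stand-in for the concatenated hosts file
printf '0.0.0.0 ads.example.com\n0.0.0.0 tracker.example.net\n0.0.0.0 ads.example.com\n' > hosts

# hosts format is "<ip> <domain>": take the domain column,
# de-duplicate, and count what's left
awk '{print $2}' hosts | sort -u | wc -l
```

Run against the real file inside the built image, this gives a rough size for the adlist Pi-hole will ingest.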
23
active/kubernetes_snapdrop/.helmignore
Executable file
@@ -0,0 +1,23 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/
23
active/kubernetes_snapdrop/Chart.yaml
Executable file
@@ -0,0 +1,23 @@
apiVersion: v2
name: Nextcloud
description: A Simple Nextcloud Chart

# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application

# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0

# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
appVersion: 1.16.0
13
active/kubernetes_snapdrop/snapdrop.md
Normal file
@@ -0,0 +1,13 @@
# Snapdrop

Snapdrop is a file sharing app that provides AirDrop-like functionality over the web.

## Install

```bash
helm upgrade --install \
    snapdrop \
    ./active/kubernetes_snapdrop \
    --namespace snapdrop \
    --create-namespace
```
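Since the chart reads its image and hostname from values.yaml, a different deployment could override them with its own values file (a sketch; `values-override.yaml` and `pairdrop.example.com` are placeholders, not part of this repo):

```yaml
# values-override.yaml (hypothetical file name)
snapdrop:
  image: linuxserver/pairdrop:latest
  domain: pairdrop.example.com  # placeholder hostname
```

Passing it to the install command above with `-f values-override.yaml` would point the Ingress and TLS certificate at the new domain.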
10
active/kubernetes_snapdrop/templates/configmap.yaml
Normal file
@@ -0,0 +1,10 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-snapdrop
  annotations:
    "helm.sh/resource-policy": keep
data:
  PUID: "1000"
  PGID: "1000"
  TZ: Etc/UTC
38
active/kubernetes_snapdrop/templates/deployment.yaml
Normal file
@@ -0,0 +1,38 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: snapdrop
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app.kubernetes.io/name: snapdrop
    spec:
      containers:
      - name: snapdrop
        image: {{ .Values.snapdrop.image }}
        ports:
        - containerPort: 3000
          name: http
        envFrom:
        - configMapRef:
            name: {{ .Release.Name }}-snapdrop
        volumeMounts:
        - mountPath: /config
          name: config
        resources:
          requests:
            memory: "1Gi"
            cpu: "1m"
          limits:
            memory: "4Gi"
            cpu: "4"
      volumes:
      - name: config
        persistentVolumeClaim:
          claimName: {{ .Release.Name }}-config
25
active/kubernetes_snapdrop/templates/ingress.yaml
Normal file
@@ -0,0 +1,25 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ .Release.Name }}
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.org/client-max-body-size: "0"
spec:
  ingressClassName: nginx
  rules:
  - host: {{ .Values.snapdrop.domain }}
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: snapdrop
            port:
              name: http
  tls:
  - hosts:
    - {{ .Values.snapdrop.domain }}
    secretName: snapdrop-tls-cert
12
active/kubernetes_snapdrop/templates/pvc.yaml
Normal file
@@ -0,0 +1,12 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ .Release.Name }}-config
  annotations:
    "helm.sh/resource-policy": keep
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
13 active/kubernetes_snapdrop/templates/service.yaml Normal file
@@ -0,0 +1,13 @@
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}
spec:
  type: ClusterIP
  selector:
    app.kubernetes.io/name: snapdrop
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: http
3 active/kubernetes_snapdrop/values.yaml Executable file
@@ -0,0 +1,3 @@
snapdrop:
  image: linuxserver/pairdrop:latest
  domain: snapdrop.reeseapps.com
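A release can override these defaults with its own values file; a hedged sketch (the tag and hostname below are examples, not values from this repo):

```yaml
# my-values.yaml (hypothetical) - overrides for the snapdrop chart
snapdrop:
  # pin a specific tag instead of latest (example tag)
  image: linuxserver/pairdrop:1.10.0
  # hostname used by the ingress and TLS templates
  domain: drop.example.com
```

Something like `helm upgrade --install snapdrop . -f my-values.yaml` would then render the templates with these values.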
23 active/kubernetes_userspace/.helmignore Executable file
@@ -0,0 +1,23 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/
23 active/kubernetes_userspace/Chart.yaml Executable file
@@ -0,0 +1,23 @@
apiVersion: v2
name: namespace
description: A Helm chart for Kubernetes

# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application

# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.0.1

# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
appVersion: 1.16.0
40 active/kubernetes_userspace/certsigner.yaml Normal file
@@ -0,0 +1,40 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: certsigner
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: certsigner
  template:
    metadata:
      labels:
        app: certsigner
    spec:
      containers:
        - name: certsigner
          image: python:latest
          command: ["cat"]
          tty: true
          resources:
            requests:
              memory: 1Mi
              cpu: 1m
            limits:
              memory: 100Mi
              cpu: 100m
          volumeMounts:
            - mountPath: /keys
              name: keys
            - mountPath: /certs
              name: certs
      volumes:
        - name: keys
          secret:
            secretName: certsigner
        - name: certs
          emptyDir:
            sizeLimit: 500Mi
      restartPolicy: Always
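The signing flow this pod performs (CA signs a user CSR with the k3s client CA) can be exercised entirely locally with openssl. A minimal sketch, using a throwaway CA and a hypothetical user `alice` in place of the real `/keys/client-ca.*` material:

```bash
set -e
WORKDIR=$(mktemp -d)
cd "$WORKDIR"

# Throwaway CA standing in for k3s's client-ca.crt/client-ca.key
openssl genrsa -out client-ca.key 2048
openssl req -x509 -new -key client-ca.key -days 1 -subj "/CN=test-ca" -out client-ca.crt

# User key and CSR; CN is the username, O is the group
openssl genrsa -out alice.key 2048
openssl req -new -key alice.key -out alice.csr -subj "/CN=alice/O=user"

# Sign the CSR with the CA, as the certsigner pod does
openssl x509 -req -in alice.csr -CA client-ca.crt -CAkey client-ca.key \
    -CAcreateserial -out alice.crt -days 1

# Confirm the signed cert chains back to the CA
openssl verify -CAfile client-ca.crt alice.crt
```

The pod only adds a place to run these commands where the CA key stays inside the cluster secret.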
38 active/kubernetes_userspace/scripts/removeuser.sh Executable file
@@ -0,0 +1,38 @@
#!/bin/bash

# Use
# ./removeuser.sh <ssh_address> <server_fqdn (for kubectl)> <user>

export SERVER=$1
export FQDN=$2
export USER=$3

export CERT_DIR=$HOME/.kube/$FQDN/users/$USER
export CA_CERT_DIR=$HOME/.kube/$FQDN

export SERVER_USER_DIR="~/.kube/users/$USER"
export SERVER_NAME=$(echo "$FQDN" | sed 's/\./-/g')
export SERVER_USER="$USER-$SERVER_NAME"

export KUBECONFIG="$HOME/.kube/$USER-config"

echo "Checking if project namespace exists"
exists=$(ssh $SERVER "kubectl get namespace --output=jsonpath=\"{.items[?(@.metadata.name=='$USER')].metadata.name}\"")
if [ -z "$exists" ]; then
    echo "Namespace not found, nothing to delete"
    exit 1
else
    echo "Namespace exists, deleting"
fi

echo "Deleting user namespace"
ssh $SERVER "kubectl delete -f $SERVER_USER_DIR/namespace.yaml"

echo "Deleting remote cert dir"
ssh $SERVER "rm -rf $SERVER_USER_DIR"

echo "Deleting local cert dir"
rm -rf $CERT_DIR

echo "Removing from kubeconfig"
rm $KUBECONFIG
10 active/kubernetes_userspace/scripts/setup.sh Executable file
@@ -0,0 +1,10 @@
#!/bin/bash

# Use
# ./setup.sh <server_fqdn>

export SERVER=$1

ssh -t $SERVER kubectl -n kube-system create secret generic certsigner --from-file /var/lib/rancher/k3s/server/tls/client-ca.crt --from-file /var/lib/rancher/k3s/server/tls/client-ca.key
scp ./userspace/certsigner.yaml $SERVER:~/certsigner.yaml
ssh $SERVER kubectl apply -f certsigner.yaml
140 active/kubernetes_userspace/scripts/upsertuser.sh Executable file
@@ -0,0 +1,140 @@
#!/bin/bash

# Use
# ./upsert.sh <ssh_address> <server_fqdn (for kubectl)> <user>
# Note, do not specify https:// or :port for the fqdn, just give the domain
# Port is expected to be 6443. You can change this later in the generated conf
# ./upsert.sh node1 containers.reeseapps.com testuser
# ./upsert.sh 192.168.1.10 mydomain.ddns.net admin

export SERVER=$1
export FQDN=$2
export KUBE_USER=$3

export CERT_DIR=$HOME/.kube/$FQDN/users/$KUBE_USER
export CA_CERT_DIR=$HOME/.kube/$FQDN

export SERVER_USER_DIR="~/.kube/users/$KUBE_USER"
export SERVER_NAME=$(echo "$FQDN" | sed 's/\./-/g')
export SERVER_USER="$KUBE_USER-$SERVER_NAME"

export KUBECONFIG="$HOME/.kube/$KUBE_USER-config"

if [ -z "$KUBE_USER" ]; then
    echo "No arguments supplied! Format is ./upsert.sh <ssh_address> <server_fqdn> <user>"
    exit 1
fi

if [ -z "$SERVER" ]; then
    echo "No server supplied for user $KUBE_USER"
    exit 1
fi

if [ "$KUBE_USER" = "admin" ]; then
    echo "Creating admin user for server $SERVER"
fi

echo "Creating cert dir"
mkdir -p $CERT_DIR

if [ $? -ne 0 ]; then
    echo "Couldn't create cert dir at $CERT_DIR"
    exit 1
fi

echo "Generating openssl cert"
podman run -it -v $CERT_DIR:/$KUBE_USER python:latest openssl genrsa -out /$KUBE_USER/$KUBE_USER.key 2048

if [ "$KUBE_USER" = "admin" ]; then
    podman run -it -v $CERT_DIR:/$KUBE_USER python:latest openssl req -new -key /$KUBE_USER/$KUBE_USER.key -out /$KUBE_USER/$KUBE_USER.csr -subj "/CN=$KUBE_USER/O=system:masters"
else
    podman run -it -v $CERT_DIR:/$KUBE_USER python:latest openssl req -new -key /$KUBE_USER/$KUBE_USER.key -out /$KUBE_USER/$KUBE_USER.csr -subj "/CN=$KUBE_USER/O=user"
fi
# /CN=admin/O=manager

if [ $? -ne 0 ]; then
    echo "Couldn't create cert with Podman. Are you sure it's running?"
    exit 1
fi

echo "Creating namespace dir on server"
ssh $SERVER "mkdir -p $SERVER_USER_DIR"
echo "Copying client csr to server cert dir"
scp $CERT_DIR/$KUBE_USER.csr $SERVER:$SERVER_USER_DIR/$KUBE_USER.csr

if [ $? -ne 0 ]; then
    echo "Failed to copy client csr to server cert dir"
    exit 1
fi

echo "Getting cert signing pod"
export CERT_POD=$(ssh $SERVER "kubectl get pod -n kube-system --selector=app=certsigner --output=jsonpath={.items..metadata.name}")

if [ -z "$CERT_POD" ]; then
    echo "Installing certsigner"
    helm template certsigner ./certsigner | ssh $SERVER "sudo -t -E kubectl apply -f -"
fi

while [ -z "$CERT_POD" ]; do
    echo "Getting cert signing pod"
    export CERT_POD=$(ssh $SERVER "kubectl get pod -n kube-system --selector=app=certsigner --output=jsonpath={.items..metadata.name}")
    sleep 2
done

if [ $? -ne 0 ]; then
    echo "Failed to install certsigner."
    exit 1
fi

echo "Signing cert with pod $CERT_POD"
ssh $SERVER "kubectl -n kube-system cp $SERVER_USER_DIR/$KUBE_USER.csr $CERT_POD:/certs/$KUBE_USER.csr"
ssh $SERVER "kubectl -n kube-system exec $CERT_POD -- openssl x509 -in /certs/$KUBE_USER.csr -req -CA /keys/client-ca.crt -CAkey /keys/client-ca.key -set_serial $(python -c 'import random; print(random.randint(1000000000, 9999999999))') -out /certs/$KUBE_USER.crt -days 5000"
ssh $SERVER "kubectl -n kube-system cp $CERT_POD:/certs/$KUBE_USER.crt ~/.kube/users/$KUBE_USER/$KUBE_USER.crt"
echo "retrieving signed cert"
scp $SERVER:$SERVER_USER_DIR/$KUBE_USER.crt $CERT_DIR/$KUBE_USER.crt

echo "retrieving server ca"
wget --no-check-certificate https://$FQDN:6443/cacerts -O $CA_CERT_DIR/server-ca.pem

echo "creating $FQDN-$KUBE_USER context"
kubectl config set-context $FQDN-$KUBE_USER

echo "setting $FQDN-$KUBE_USER as current context"
kubectl config set current-context $FQDN-$KUBE_USER

echo "adding server to config with new context $FQDN-$KUBE_USER"
kubectl config set-cluster $FQDN --server=https://$FQDN:6443 --certificate-authority=$CA_CERT_DIR/server-ca.pem
kubectl config set contexts.$(kubectl config current-context).cluster $FQDN

echo "adding user to config file"
kubectl config set-credentials $SERVER_USER --client-certificate=$CERT_DIR/$KUBE_USER.crt --client-key=$CERT_DIR/$KUBE_USER.key

echo "setting user context"
kubectl config set contexts.$(kubectl config current-context).user $SERVER_USER

if [ "$KUBE_USER" = "admin" ]; then
    echo "Admin user created, skipping namespace"
    echo "export KUBECONFIG=$KUBECONFIG"
    exit 0
fi

echo "Templating namespace with helm and copying to server"
helm template $KUBE_USER --set user=$KUBE_USER ./userspace | ssh $SERVER "cat - > $SERVER_USER_DIR/namespace.yaml"

if [ $? -ne 0 ]; then
    echo "Failed to template namespace. Is helm installed?"
    exit 1
fi

echo "Creating namespace from template"
ssh $SERVER "kubectl apply -f $SERVER_USER_DIR/namespace.yaml"

echo "Setting namespace context"
kubectl config set contexts.$(kubectl config current-context).namespace $KUBE_USER

if [ $? -ne 0 ]; then
    echo "Failed to create namespace"
    exit 1
fi

echo "export KUBECONFIG=$KUBECONFIG"
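The kubeconfig user and context names the script builds are derived from the FQDN by swapping dots for dashes. A small sketch of just that derivation, using the example arguments from the script's own usage comment:

```bash
FQDN=containers.reeseapps.com
KUBE_USER=testuser

# Dots become dashes so the name is a single safe identifier
SERVER_NAME=$(echo "$FQDN" | sed 's/\./-/g')
SERVER_USER="$KUBE_USER-$SERVER_NAME"

echo "$SERVER_USER"   # testuser-containers-reeseapps-com
```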
14 active/kubernetes_userspace/templates/limitrange.yaml Executable file
@@ -0,0 +1,14 @@
apiVersion: v1
kind: LimitRange
metadata:
  name: default
  namespace: {{ .Release.Name }}
spec:
  limits:
    - type: Container
      default:
        memory: 128Mi
        cpu: 100m
      defaultRequest:
        memory: 1Mi
        cpu: 1m
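With a LimitRange like this in the namespace, containers that omit a `resources` block are admitted with the defaults filled in (`defaultRequest` becomes the request, `default` becomes the limit). A sketch with a hypothetical pod name and namespace:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example            # hypothetical
  namespace: testuser      # a namespace provisioned by this chart
spec:
  containers:
    - name: app
      image: nginx
      # no resources block: the LimitRange injects
      # requests 1Mi/1m and limits 128Mi/100m
```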
57 active/kubernetes_userspace/templates/namespace-manager-role.yaml Executable file
@@ -0,0 +1,57 @@
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: namespace-manager
  namespace: {{ .Release.Name }}
rules:
  - apiGroups:
      - ""
      - extensions
      - apps
      - batch
      - autoscaling
      - networking.k8s.io
      - rbac.authorization.k8s.io
      - metrics.k8s.io
      - policy
      - cert-manager.io
    resources:
      - deployments
      - replicasets
      - pods
      - pods/exec
      - pods/log
      - pods/attach
      - daemonsets
      - statefulsets
      - replicationcontrollers
      - horizontalpodautoscalers
      - services
      - ingresses
      - persistentvolumeclaims
      - jobs
      - cronjobs
      - secrets
      - configmaps
      - serviceaccounts
      - rolebindings
      - ingressroutes
      - middlewares
      - endpoints
      - deployments/scale
      - poddisruptionbudgets
      - certificates
      - roles
    verbs:
      - "*"
  - apiGroups:
      - ""
      - metrics.k8s.io
      - rbac.authorization.k8s.io
      - policy
    resources:
      - resourcequotas
      - roles
    verbs:
      - list
      - get
13 active/kubernetes_userspace/templates/namespace-manager-rolebinding.yaml Executable file
@@ -0,0 +1,13 @@
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: namespace-manager
  namespace: {{ .Release.Name }}
subjects:
  - kind: User
    name: {{ .Values.user }}
    apiGroup: ""
roleRef:
  kind: Role
  name: namespace-manager
  apiGroup: ""
@@ -0,0 +1,46 @@
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: namespace-readonly
  namespace: {{ .Release.Name }}
rules:
  - apiGroups:
      - ""
      - extensions
      - apps
      - batch
      - autoscaling
      - networking.k8s.io
      - traefik.containo.us
      - rbac.authorization.k8s.io
      - metrics.k8s.io
      - storage.k8s.io
    resources:
      - deployments
      - replicasets
      - pods
      - pods/exec
      - pods/log
      - pods/attach
      - daemonsets
      - statefulsets
      - replicationcontrollers
      - horizontalpodautoscalers
      - services
      - ingresses
      - persistentvolumeclaims
      - jobs
      - cronjobs
      - secrets
      - configmaps
      - serviceaccounts
      - rolebindings
      - ingressroutes
      - middlewares
      - resourcequotas
      - roles
      - endpoints
      - clusterroles
    verbs:
      - list
      - watch
@@ -0,0 +1,12 @@
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: user-readonly
subjects:
  - kind: User
    name: {{ .Values.user }}
    apiGroup: ""
roleRef:
  kind: ClusterRole
  name: user-readonly
  apiGroup: ""
4 active/kubernetes_userspace/templates/namespace.yaml Executable file
@@ -0,0 +1,4 @@
apiVersion: v1
kind: Namespace
metadata:
  name: {{ .Release.Name }}
12 active/kubernetes_userspace/templates/resourcequota.yaml Executable file
@@ -0,0 +1,12 @@
apiVersion: v1
kind: ResourceQuota
metadata:
  name: default
  namespace: {{ .Release.Name }}
spec:
  hard:
    requests.cpu: "8"
    requests.memory: "8Gi"
    limits.cpu: "16"
    limits.memory: "16Gi"
    requests.storage: "500Gi"
210 active/kubernetes_userspace/userspace.md Executable file
@@ -0,0 +1,210 @@
# Project Userspace

## One provisioner to rule them all

### Quickstart

```bash
./userspace/scripts/setup.sh <server_fqdn>
./userspace/scripts/upsertuser.sh <ssh_address> <server_fqdn> <username>
./userspace/scripts/removeuser.sh <ssh_address> <server_fqdn> <username>
```

### Update a user

```bash
export USER=user
helm template $USER ./namespace | kubectl --context admin apply -f -
```

### Objectives

1. Provision a namespace with clusterroles, rolebindings, and a dedicated nfs-provisioner with one helm chart
2. Create an easy way for users to sign their certificates
3. Create a cleanup script that does not delete user data
4. Profit

### Userspace

#### Namespace

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: {{ .Release.Name }}
```

#### Roles

```yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: namespace-manager
  namespace: {{ .Release.Name }}
rules:
  - apiGroups:
      - ""
      - extensions
      - apps
      - batch
      - autoscaling
      - networking.k8s.io
      - traefik.containo.us
      - rbac.authorization.k8s.io
      - metrics.k8s.io
    resources:
      - deployments
      - replicasets
      - pods
      - pods/exec
      - pods/log
      - pods/attach
      - daemonsets
      - statefulsets
      - replicationcontrollers
      - horizontalpodautoscalers
      - services
      - ingresses
      - persistentvolumeclaims
      - jobs
      - cronjobs
      - secrets
      - configmaps
      - serviceaccounts
      - rolebindings
      - ingressroutes
      - middlewares
      - endpoints
    verbs:
      - "*"
  - apiGroups:
      - ""
      - metrics.k8s.io
      - rbac.authorization.k8s.io
    resources:
      - resourcequotas
      - roles
    verbs:
      - list
```

#### Rolebinding

```yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: {{ .Release.Name }}
  name: namespace-manager
subjects:
  - kind: User
    name: {{ .Release.Name }}
    apiGroup: ""
roleRef:
  kind: Role
  name: namespace-manager
  apiGroup: ""
```

### Create a kubernetes certsigner pod

This keeps the client-ca crt and key secret, and allows the cert to be signed and stored on the pod.

#### Create the certsigner secret

```bash
kubectl -n kube-system create secret generic certsigner --from-file /var/lib/rancher/k3s/server/tls/client-ca.crt --from-file /var/lib/rancher/k3s/server/tls/client-ca.key
```

#### Set up the certsigner pod

```bash
kubectl --context admin apply -f certsigner
```

#### Generate a cert

```bash
export USER=<user>
docker run -it -v $(pwd)/users/$USER:/$USER python:latest openssl genrsa -out /$USER/$USER.key 2048
docker run -it -v $(pwd)/users/$USER:/$USER python:latest openssl req -new -key /$USER/$USER.key -out /$USER/$USER.csr -subj "/CN=$USER/O=user"
```

#### Create a new Userspace

```bash
helm template $USER ./namespace | kubectl --context admin apply -f -
```

#### Sign the cert

```bash
export USER=<user>
kubectl --context admin cp $(pwd)/users/$USER/$USER.csr certsigner:/certs/$USER.csr
kubectl --context admin exec -it certsigner -- openssl x509 -in /certs/$USER.csr -req -CA /keys/client-ca.crt -CAkey /keys/client-ca.key -CAcreateserial -out /certs/$USER.crt -days 5000
kubectl --context admin cp certsigner:/certs/$USER.crt $(pwd)/users/$USER/$USER.crt
```

#### Add to the config

```bash
kubectl config set-credentials $USER --client-certificate=$USER.crt --client-key=$USER.key
kubectl config set-context $USER --cluster=mainframe --namespace=$USER --user=$USER
```

#### Delete

```bash
kubectl config delete-context $USER
helm template $USER ./namespace | kubectl --context admin delete -f -
```

### Signing a user cert - detailed notes

NOTE: ca.crt and ca.key are in /var/lib/rancher/k3s/server/tls/client-ca.*

```bash
# First we create the credentials
# /CN=<username> - the user
# /O=<group> - the group

# Navigate to the user directory
export USER=<username>
cd $USER

# Generate a private key
openssl genrsa -out $USER.key 2048
# Check the key
# openssl pkey -in ca.key -noout -text
# Generate and send me the CSR
# The "user" group is my default group
openssl req -new -key $USER.key -out $USER.csr -subj "/CN=$USER/O=user"

# Check the CSR
# openssl req -in $USER.csr -noout -text
# If satisfactory, sign the CSR
# Copy from /var/lib/rancher/k3s/server/tls/client-ca.crt and client-ca.key
openssl x509 -req -in $USER.csr -CA ../client-ca.crt -CAkey ../client-ca.key -CAcreateserial -out $USER.crt -days 5000
# Review the certificate
# openssl x509 -in $USER.crt -text -noout

# Send back the crt
# cp $USER.crt $USER.key ../server-ca.crt ~/.kube/
kubectl config set-credentials $USER --client-certificate=$USER.crt --client-key=$USER.key
kubectl config set-context $USER --cluster=mainframe --namespace=$USER --user=$USER

# Now we create the namespace, rolebindings, and resource quotas
# kubectl apply -f k8s/

# Add the cluster (kubeconfig entry, shown here as a comment)
# CA file can be found at https://3.14.3.100:6443/cacerts
# - cluster:
#     certificate-authority: server-ca.crt
#     server: https://3.14.3.100:6443
#   name: mainframe

# Test if everything worked
kubectl --context=$USER-context get pods
```
1 active/kubernetes_userspace/values.yaml Executable file
@@ -0,0 +1 @@
user: admin
18 active/kubernetes_wordpress/wordpress.md Normal file
@@ -0,0 +1,18 @@
# WordPress

The Bitnami WordPress chart allows enough customization to work on a custom K3s server. With some
tweaks it's quite good. Use the values in `active/kubernetes_wordpress/wordpress.yaml` as a
starting point.

## Install

```bash
helm upgrade --install \
    wordpress \
    -f active/kubernetes_wordpress/wordpress.yaml \
    --set wordpressUsername=admin \
    --set wordpressPassword=password \
    --set mariadb.auth.rootPassword=secretpassword \
    oci://registry-1.docker.io/bitnamicharts/wordpress
```
51 active/kubernetes_wordpress/wordpress.yaml Normal file
@@ -0,0 +1,51 @@
service:
  # Don't use an external IP address
  type: ClusterIP

ingress:
  enabled: true
  pathType: Prefix
  # Change this
  hostname: wordpress.reeseapps.com
  annotations:
    # Get a cert from letsencrypt
    cert-manager.io/cluster-issuer: letsencrypt
    # Use the nginx ingress class
    kubernetes.io/ingress.class: nginx
    # Allow infinitely large uploads (change this)
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.org/client-max-body-size: "0"
  tls: true

resources:
  # namespaces have strict "request" requirements
  requests:
    cpu: 100m
    memory: 128Mi
  # limits are unbounded - allow some breathing room
  limits:
    cpu: 2
    memory: 1Gi

updateStrategy:
  # Since the default storage is a single-node mount we can't
  # use the typical rolling update strategy because the new
  # pod might try to start on a node without the storage
  # mounted. We can get around this by tearing down the old
  # pod before spinning up the new one. This will result in
  # downtime; we could also change the default storage to
  # fix this.
  type: Recreate

# These tolerations ensure that if one of my nodes goes down
# for some reason your pods will jump to the next available
# node within 1 second of it being unreachable.
tolerations:
  - key: "node.kubernetes.io/unreachable"
    operator: "Exists"
    effect: "NoExecute"
    tolerationSeconds: 1
  - key: "node.kubernetes.io/not-ready"
    operator: "Exists"
    effect: "NoExecute"
    tolerationSeconds: 1
1064 active/os_arch/arch-base.md Normal file
File diff suppressed because it is too large (Load Diff)
922 active/os_arch/arch-workstation.md Normal file
@@ -0,0 +1,922 @@
# Workstation

- [Workstation](#workstation)
  - [Pacman Packages](#pacman-packages)
    - [Upgrade/Downgrade](#upgradedowngrade)
    - [Freeze package](#freeze-package)
  - [Fingerprint Reader Support](#fingerprint-reader-support)
    - [Setup](#setup)
    - [Turn Off Fingerprint When Laptop Lid Closed](#turn-off-fingerprint-when-laptop-lid-closed)
  - [SSH](#ssh)
  - [Templates](#templates)
  - [Ungoogled Chromium](#ungoogled-chromium)
    - [Ungoogled Chromium AUR](#ungoogled-chromium-aur)
    - [Ungoogled Chromium Manual Build](#ungoogled-chromium-manual-build)
  - [Firefox](#firefox)
  - [Gnome Extensions](#gnome-extensions)
  - [Avahi (Bonjour)](#avahi-bonjour)
  - [CUPS Printing](#cups-printing)
  - [Toolbox](#toolbox)
  - [Podman](#podman)
  - [Docker](#docker)
  - [QEMU/KVM](#qemukvm)
    - [Arch Guests](#arch-guests)
  - [Kubernetes](#kubernetes)
  - [VSCode](#vscode)
  - [Wireguard](#wireguard)
  - [Remote Desktop](#remote-desktop)
  - [Transmission](#transmission)
  - [VLC](#vlc)
  - [Bitwarden](#bitwarden)
  - [Nextcloud](#nextcloud)
  - [Insomnia](#insomnia)
  - [QMK](#qmk)
    - [Initialization](#initialization)
    - [Development](#development)
  - [Cura](#cura)
  - [Creality Print](#creality-print)
  - [Bambu Studio](#bambu-studio)
    - [Firewall Rules for LAN Printer](#firewall-rules-for-lan-printer)
    - [Adding LAN printer via config](#adding-lan-printer-via-config)
    - [Custom Filament Profiles](#custom-filament-profiles)
  - [Orca Slicer](#orca-slicer)
  - [AWS CLI](#aws-cli)
  - [NSlookup](#nslookup)
  - [rpi-imager](#rpi-imager)
  - [qFlipper](#qflipper)
  - [Nextcloud Talk](#nextcloud-talk)
  - [FFMpeg](#ffmpeg)
  - [Youtube-dlp](#youtube-dlp)
  - [Iperf3](#iperf3)
  - [Glances](#glances)
  - [VirtualBox](#virtualbox)
  - [Email](#email)
  - [Traffic Usage](#traffic-usage)
  - [Wine](#wine)
  - [KDE Connect (GSConnect)](#kde-connect-gsconnect)
  - [Python](#python)
    - [Pyenv](#pyenv)
    - [Poetry](#poetry)
  - [Note Taking](#note-taking)
  - [Calculator](#calculator)
  - [Disk Usage](#disk-usqage)
## Pacman Packages

### Upgrade/Downgrade

The [Arch Linux Archive](https://archive.archlinux.org/packages/) keeps snapshots of all packages
from history. Search for your package on the site, copy the link for the `pkg.tar.zst` file, and run
the following:

```bash
# Replace the link with the one you copied
pacman -U https://archive.archlinux.org/packages/g/gdm/gdm-46.2-1-x86_64.pkg.tar.zst
```

### Freeze package

You can freeze a package by adding it to the list of ignores in `/etc/pacman.conf`:

```conf
...
IgnorePkg = nano vim linux
...
```
## Fingerprint Reader Support
|
||||
|
||||
### Setup
|
||||
|
||||
1. `pacman -S fprintd`
|
||||
2. `systemctl enable --now fprintd`
|
||||
3. `fprintd-enroll ducoterra`
|
||||
4. Install <https://aur.archlinux.org/pam-fprint-grosshack.git> to use fingerprint with gnome
|
||||
|
||||
In order to use fingerprint auth with gnome for privileged system stuff with gdm, edit
|
||||
`/etc/pam.d/system-auth` to include `auth sufficient pam_fprintd_grosshack.so`.
|
||||
|
||||
```conf
|
||||
#%PAM-1.0
|
||||
|
||||
auth required pam_shells.so # User must have shell in /etc/shells
|
||||
auth requisite pam_nologin.so # Prevents users from loging in if /etc/nologin exists
|
||||
auth required pam_faillock.so preauth # Timeout after certain number of fails
|
||||
# Optionally use requisite above if you do not want to prompt for the password
|
||||
# on locked accounts.
|
||||
auth sufficient pam_fprintd_grosshack.so
|
||||
-auth [success=2 default=ignore] pam_systemd_home.so
|
||||
auth [success=1 default=bad] pam_unix.so try_first_pass nullok
|
||||
auth [default=die] pam_faillock.so authfail
|
||||
auth optional pam_permit.so
|
||||
auth required pam_env.so
|
||||
auth required pam_faillock.so authsucc
|
||||
# If you drop the above call to pam_faillock.so the lock will be done also
|
||||
# on non-consecutive authentication failures.
|
||||
|
||||
-account [success=1 default=ignore] pam_systemd_home.so
|
||||
account required pam_unix.so
|
||||
account optional pam_permit.so
|
||||
account required pam_time.so
|
||||
|
||||
-password [success=1 default=ignore] pam_systemd_home.so
|
||||
password required pam_unix.so try_first_pass nullok shadow
|
||||
password optional pam_permit.so
|
||||
|
||||
-session optional pam_systemd_home.so
|
||||
session required pam_limits.so
|
||||
session required pam_unix.so
|
||||
session optional pam_permit.so
|
||||
```
|
||||
|
||||
### Turn Off Fingerprint When Laptop Lid Closed

**NOTE: This may break fingerprint unlock. Testing in progress.**

To disable fingerprint authentication when the laptop lid is closed, and re-enable it when the
lid is reopened, we use acpid to bind the button/lid.* event to a custom script that comments
out the fprintd auth line in /etc/pam.d/sudo.

Usually we'd just `systemctl mask fprintd`, but this breaks gdm (as of 08/06/23). See
<https://gitlab.gnome.org/GNOME/gnome-shell/-/merge_requests/2267> and
<https://gitlab.gnome.org/GNOME/gnome-shell/-/issues/6585>.

1. `pacman -S acpid` and then `systemctl enable --now acpid`
2. Create the file /etc/acpi/laptop-lid.sh with the following contents:

```bash
#!/bin/bash

if grep -Fq closed /proc/acpi/button/lid/LID0/state # &&
# This is used to detect if a display is connected.
# For USB-C DisplayPort use:
# grep -Fxq connected /sys/class/drm/card1-DP-2/status
# For HDMI use:
# grep -Fxq connected /sys/class/drm/card0-HDMI-A-1/status
then
    # comment out fprintd
    sed -i -E 's/^([^#].*pam_fprintd.so)/#\1/g' /etc/pam.d/sudo
else
    # uncomment fprintd
    sed -i -E 's/#(.*pam_fprintd.so)/\1/g' /etc/pam.d/sudo
fi
```

3. Make the file executable with

   `chmod +x /etc/acpi/laptop-lid.sh`

4. Create the file /etc/acpi/events/laptop-lid with the following contents:

```bash
event=button/lid.*
action=/etc/acpi/laptop-lid.sh
```

5. Restart the acpid service with:

   `systemctl restart acpid`

Now the fingerprint will be used only when the lid is open.

To ensure the correct state after suspend, we need a service file which runs our script on
wake.

1. Create a file named /etc/systemd/system/laptop-lid.service with the following contents:

```conf
[Unit]
Description=Laptop Lid
After=suspend.target

[Service]
ExecStart=/etc/acpi/laptop-lid.sh

[Install]
WantedBy=multi-user.target
WantedBy=suspend.target
```

2. Reload the systemd config files with

   `sudo systemctl daemon-reload`

3. Start and enable the service with

   `sudo systemctl enable --now laptop-lid.service`

Now the status should be correct even after connecting/disconnecting while the computer is off.

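The two `sed` expressions in laptop-lid.sh are inverses of each other; a quick sketch you can run against a scratch file (the sample PAM line here is hypothetical) to see the toggle:

```shell
# Demonstrate the comment/uncomment toggle on a scratch copy
# instead of the real /etc/pam.d/sudo.
pam=$(mktemp)
printf 'auth sufficient pam_fprintd.so\n' > "$pam"

# Lid closed: comment the line out
sed -i -E 's/^([^#].*pam_fprintd.so)/#\1/g' "$pam"
grep -c '^#auth sufficient pam_fprintd.so' "$pam"  # prints 1

# Lid open: uncomment it again
sed -i -E 's/#(.*pam_fprintd.so)/\1/g' "$pam"
grep -c '^auth sufficient pam_fprintd.so' "$pam"   # prints 1

rm -f "$pam"
```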
## SSH

Generate a key with password protection:

```bash
# Omit "-N 'password'" to have it prompt you
ssh-keygen -f ~/.ssh/test-key -N 'PASSWORD'
```

Change the password for an ssh key:

```bash
# Use "-N ''" to remove the password
ssh-keygen -p -N 'PASSWORD' -f ~/.ssh/test-key
```

This is an example config entry in `~/.ssh/config`:

```conf
Host my-host
    Hostname my-host.reeselink.com
    User root
    ProxyCommand none
    ForwardAgent no
    ForwardX11 no
    Port 22
    TCPKeepAlive yes
    IdentityFile ~/.ssh/id_my-host_rsa
```

After adding a config entry you can ssh to that host with `ssh my-host`.

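Changing the passphrase rewrites the private key file but leaves the key pair itself, and therefore its fingerprint, unchanged. A throwaway sketch you can run to confirm (paths and passphrases here are placeholders):

```shell
# Generate a key, change its passphrase, and confirm the
# public-key fingerprint is unchanged.
tmp=$(mktemp -d)
ssh-keygen -q -t ed25519 -f "$tmp/key" -N 'old-pass'
fp_before=$(ssh-keygen -lf "$tmp/key.pub")
ssh-keygen -q -p -P 'old-pass' -N 'new-pass' -f "$tmp/key" > /dev/null
fp_after=$(ssh-keygen -lf "$tmp/key.pub")
[ "$fp_before" = "$fp_after" ] && echo "fingerprint unchanged"
rm -rf "$tmp"
```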
## Templates

You can add files in `~/Templates` to give yourself quick-create options in the gnome
file browser context menu.

```bash
mkdir ~/Templates
touch ~/Templates/text.txt
```

## Ungoogled Chromium

<https://github.com/ungoogled-software/ungoogled-chromium-archlinux>

### Ungoogled Chromium AUR

<https://aur.archlinux.org/packages/ungoogled-chromium-bin>

Make sure to `pacman -S gnome-browser-connector` and grab the [Gnome Shell Integration](https://chromewebstore.google.com/detail/gnome-shell-integration/gphhapmejobijbbhgpjhcjognlahblep)
extension.

Install the [chromium-web-store](https://github.com/NeverDecaf/chromium-web-store) extension to use
Chrome Web Store extensions.

### Ungoogled Chromium Manual Build

<https://github.com/ungoogled-software/ungoogled-chromium-archlinux>

```bash
# Install required dependencies. Make sure your user has access to sudo
sudo pacman -S base-devel

# Clone this repository
git clone https://github.com/ungoogled-software/ungoogled-chromium-archlinux

# Navigate into the repository
cd ungoogled-chromium-archlinux

# Check out the latest tag
git checkout $(git describe --abbrev=0 --tags)

# Start the build, this will download all necessary dependencies automatically
makepkg -s

# Install
makepkg --install
```

## Firefox

You'll want firefox and gnome-browser-connector (for gnome extension management).

```bash
pacman -S firefox gnome-browser-connector
```

When prompted for a font provider, choose noto-fonts.

### Gnome Extensions

1. AlphabeticalAppGrid@stuarthayhurst
2. Vitals@CoreCoding.com
3. dash-to-dock@micxgx.gmail.com
4. tactile@lundal.io
5. GSConnect

## Avahi (Bonjour)

1. `pacman -S avahi nss-mdns`
2. `systemctl enable --now avahi-daemon`
3. `vim /etc/nsswitch.conf`

```conf
hosts: mymachines mdns [NOTFOUND=return] resolve [!UNAVAIL=return] files myhostname dns
```

4. `vim /etc/mdns.allow`

```conf
.local.
.local
```

## CUPS Printing

Note: you need [avahi](#avahi-bonjour) for auto-discovery.

1. `pacman -S cups cups-pdf system-config-printer gutenprint foomatic-db-gutenprint-ppds`
2. `cups-genppdupdate`
3. `usermod -aG lp ducoterra`
4. `systemctl enable --now cups`

To add a new printer:

<https://github.com/OpenPrinting/cups/?tab=readme-ov-file#setting-up-printers>

`lpadmin -p printer-name -E -v "ipp://1.2.3.4/ipp/print" -m everywhere`

## Toolbox

<https://wiki.archlinux.org/title/Toolbox>

Toolbox provides containerized workstation environments on top of podman.

```bash
# select "crun" when prompted
pacman -S toolbox

toolbox create

toolbox enter

sudo pacman -S zsh grml-zsh-config zsh-syntax-highlighting zsh-autosuggestions pkgfile
```

## Podman

Install with the following:

`pacman -S podman buildah cni-plugins slirp4netns podman-dnsname aardvark-dns`

Then you can run rootless containers like so:

```bash
podman pull docker.io/library/python:3.11
podman run -it python:3.11 bash

podman network create test
podman pod create --network test --publish 8000:8000 test1
podman run -it --pod test1 python:3.11 bash
```

You can also deploy pods with kubernetes yamls.

```bash
podman network create test
podman kube play --network test podman-deploy.yaml --replace
```

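The `podman-deploy.yaml` passed to `kube play` is an ordinary pod manifest; a minimal hypothetical example (pod name, container name, and command are placeholders) might look like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test1
spec:
  containers:
    - name: web
      image: docker.io/library/python:3.11
      command: ["python", "-m", "http.server", "8000"]
      ports:
        - containerPort: 8000
          hostPort: 8000
```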
## Docker

```bash
pacman -Sy docker docker-compose
usermod -aG docker ducoterra
```

Log out and back in to use docker as a non-root user.

You can use btrfs as your storage driver by following these instructions:

<https://docs.docker.com/storage/storagedriver/btrfs-driver/>

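Per those instructions, the driver is selected in `/etc/docker/daemon.json` (create the file if it doesn't exist):

```json
{
  "storage-driver": "btrfs"
}
```

Then restart the daemon with `sudo systemctl restart docker` and check `docker info` to confirm the storage driver in use.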
## QEMU/KVM

Install virtualization capabilities:

```bash
# DNSMasq is required - do not start it with systemd, qemu will handle that.
pacman -S qemu-full dnsmasq virt-manager
systemctl enable --now libvirtd
virsh net-autostart default
```

Then edit `/etc/libvirt/network.conf` and add:

```conf
firewall_backend="iptables"
```

Make sure to restart libvirtd with `systemctl restart libvirtd`.

If you get a blank screen when launching a VM, check that you've used the correct BIOS -
either secboot or non-secboot. This is the most common problem.

### Arch Guests

To get drivers for spice you'll need the guest spice agents:

```bash
sudo pacman -S qemu-guest-agent spice-vdagent
```

## Kubernetes

```bash
pacman -S kubectl helm
```

## VSCode

For the open source version of code install `code`:

```bash
sudo pacman -S code
```

You'll probably also want to enable default vscode marketplace extensions (like pylance):

See the Arch wiki here: <https://wiki.archlinux.org/title/Visual_Studio_Code#Extensions_support>

- Code Marketplace: <https://aur.archlinux.org/packages/code-marketplace>
- Pylance Support: <https://aur.archlinux.org/packages/code-features>

This version of code does not render with wayland by default. If using
fractional scaling this causes blurriness. To fix this you'll need to modify the
.desktop file and add the wayland argument:

```bash
cp /usr/share/applications/code-oss.desktop ~/.local/share/applications/
vim ~/.local/share/applications/code-oss.desktop
```

Add `--ozone-platform=wayland` to the `Exec` section:

```conf
[Desktop Entry]
...
Exec=code-oss --ozone-platform=wayland %F
...
[Desktop Action new-empty-window]
...
Exec=code-oss --ozone-platform=wayland --new-window %F
...
```

For the proprietary version of vscode use the AUR:

<https://aur.archlinux.org/packages/visual-studio-code-bin>

```bash
cd ~/aur
git clone https://aur.archlinux.org/visual-studio-code-bin.git
cd visual-studio-code-bin
makepkg -si
```

## Wireguard

Wireguard requires `linux-headers`. If that isn't installed or is misconfigured, your
VPN likely won't activate.

```bash
pacman -S wireguard-tools
```

## Remote Desktop

```bash
pacman -S remmina freerdp
```

## Transmission

```bash
pacman -S gtk4 transmission-gtk
```

## VLC

```bash
pacman -S vlc
```

## Bitwarden

```bash
pacman -S bitwarden
```

Enable fractional scaling support:

```bash
cp /usr/share/applications/bitwarden.desktop ~/.local/share/applications/
vim ~/.local/share/applications/bitwarden.desktop
```

bitwarden.desktop

```conf
[Desktop Entry]
...
Exec=bitwarden-desktop --ozone-platform=wayland
...
```

## Nextcloud

<https://wiki.archlinux.org/title/Nextcloud#Desktop>

```bash
pacman -S nextcloud-client
```

For app icon support, install <https://extensions.gnome.org/extension/615/appindicator-support/>

## Insomnia

<https://github.com/Kong/insomnia/releases/tag/core@2023.5.7>

```bash
mv ~/Downloads/Insomnia*.AppImage ~/Applications/Insomnia.AppImage
chmod +x ~/Applications/*.AppImage
```

```conf
[Desktop Entry]
Name=Insomnia
Exec=/home/ducoterra/Applications/Insomnia.AppImage
Icon=/home/ducoterra/.icons/insomnia.png
Type=Application
```

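The desktop entry above can also be written from the shell in one step; a sketch assuming the same paths (the filename `insomnia.desktop` is a hypothetical choice):

```shell
# Write the AppImage launcher entry without opening an editor.
entry="$HOME/.local/share/applications/insomnia.desktop"
mkdir -p "$(dirname "$entry")"
cat > "$entry" <<'EOF'
[Desktop Entry]
Name=Insomnia
Exec=/home/ducoterra/Applications/Insomnia.AppImage
Icon=/home/ducoterra/.icons/insomnia.png
Type=Application
EOF
grep -c '^Exec=' "$entry"  # prints 1
```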
## QMK

### Initialization

I have a mirror, and a fork of the mirror, on my personal Gitea. For this strategy you'll
need to check out the fork and add the mirror as a remote. This ensures I'll always have an
up-to-date mirror of qmk while also giving me a repo to make changes for my personal keyboards.

```bash
git clone git@gitea.reeseapps.com:ducoterra/qmk_firmware.git
cd qmk_firmware
git remote add mirror git@gitea.reeseapps.com:mirrors/qmk_firmware.git
git fetch mirror
git rebase mirror/master
pacman -S qmk
qmk setup
sudo cp /home/ducoterra/qmk_firmware/util/udev/50-qmk.rules /etc/udev/rules.d/
qmk config user.keyboard=keychron/q11/ansi_encoder
qmk config user.keymap=ducoterra
```

### Development

Every time you start a project you'll want to sync with the mirror.

```bash
git fetch mirror
git rebase mirror/master
```

Commit to master while you're in the fork.

## Cura

<https://ultimaker.com/software/ultimaker-cura/#links>

```bash
mv ~/Downloads/*Cura*.AppImage ~/Applications/Cura.AppImage
chmod +x ~/Applications/*.AppImage
```

```conf
[Desktop Entry]
Name=Cura
Exec=/home/ducoterra/Applications/Cura.AppImage
Icon=/home/ducoterra/.icons/cura.png
Type=Application
```

## Creality Print

<https://www.creality.com/pages/download-software?spm=..page_11657537.creality_print_1.1>

```bash
mv ~/Downloads/Creality_Print*.AppImage ~/Applications/Creality_Print.AppImage
chmod +x ~/Applications/*.AppImage
```

```conf
[Desktop Entry]
Name=Creality Print
Exec=/home/ducoterra/Applications/Creality_Print.AppImage
Icon=/home/ducoterra/.icons/creality_print.png
Type=Application
```

## Bambu Studio

Install with flatpak.

```bash
flatpak install com.bambulab.BambuStudio
```

### Firewall Rules for LAN Printer

For local LAN discovery, allow 2021/udp:

```bash
sudo ufw allow 2021/udp
sudo ufw reload
```

### Adding LAN printer via config

The config is located at `~/.var/app/com.bambulab.BambuStudio/config/BambuStudio/BambuStudio.conf`

At the very top of the config you can pin a printer permanently with:

```json
"access_code": {
    "printer serial number": "access code here"
},
```

### Custom Filament Profiles

Custom profiles are located at
`.var/app/com.bambulab.BambuStudio/config/BambuStudio/user/default/filament/base`

Sync this with something like Nextcloud.

## Orca Slicer

<https://github.com/SoftFever/OrcaSlicer>

This is an open source fork of Bambu Studio with more features.

```bash
# You might need to install webkit2gtk
pacman -S webkit2gtk
```

```bash
mv ~/Downloads/OrcaSlicer*.AppImage ~/Applications/OrcaSlicer.AppImage
chmod +x ~/Applications/*.AppImage
```

```conf
[Desktop Entry]
Name=Orca Slicer
Exec=/home/ducoterra/Applications/OrcaSlicer.AppImage
Icon=/home/ducoterra/.icons/orca_slicer.png
Type=Application
```

## AWS CLI

<https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html>

```bash
# Install less if you don't have it already
pacman -S less

cd ~/Downloads
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
```

Add the following to your .zshrc:

```bash
complete -C '/usr/local/bin/aws_completer' aws
```

## NSlookup

```bash
# Do this in a toolbox
toolbox enter

# Install
pacman -S bind
```

## rpi-imager

<https://github.com/raspberrypi/rpi-imager>

```bash
toolbox create -d ubuntu -r 24.04
toolbox enter ubuntu-toolbox-24.04
sudo apt install rpi-imager
```

## qFlipper

<https://flipperzero.one/update>

```bash
mv ~/Downloads/*qFlipper*.AppImage ~/Applications/qFlipper.AppImage
chmod +x ~/Applications/*.AppImage
```

```conf
[Desktop Entry]
Name=qFlipper
Exec=/home/ducoterra/Applications/qFlipper.AppImage
Icon=/home/ducoterra/.icons/qFlipper.png
Type=Application
```

## Nextcloud Talk

<https://github.com/nextcloud-releases/talk-desktop/releases>

```bash
unzip ~/Downloads/Nextcloud.Talk-linux*.zip -d ~/Downloads
rm -rf ~/Applications/NextcloudTalk
mv ~/Downloads/'Nextcloud Talk-linux-x64' ~/Applications/NextcloudTalk
```

`vim ~/.local/share/applications/nextcloud-talk.desktop`

```conf
[Desktop Entry]
Name=Nextcloud Talk
Exec="/home/ducoterra/Applications/NextcloudTalk/Nextcloud Talk" --ozone-platform=wayland %U
Icon=/home/ducoterra/.icons/NextcloudTalk.png
Type=Application
```

```bash
update-desktop-database
```

## FFmpeg

```bash
# Select pipewire-jack when prompted
pacman -S ffmpeg
```

## Youtube-dlp

<https://github.com/yt-dlp/yt-dlp>

1. Download `yt-dlp_linux`
2. `clamdscan yt-dlp_linux`
3. `cp yt-dlp_linux /usr/local/bin/yt-dlp`
4. Install ffmpeg: `pacman -S ffmpeg`

Download the best quality video:

```bash
yt-dlp -f "bv+ba/b" https://...
```

Download a playlist:

```bash
# Quote the URL so the shell doesn't interpret the "&"
yt-dlp -f "bv+ba/b" --write-thumbnail "https://www.youtube.com/watch?v=l-unefmAo9k&list=PLuYLhuXt4HrQqnfSceITmv6T_drx1hN84"
```

## Iperf3

```bash
pacman -S iperf3
```

## Glances

```bash
pacman -S glances
```

## VirtualBox

<https://wiki.archlinux.org/title/VirtualBox>

For the linux kernel, choose virtualbox-host-modules-arch.

```bash
pacman -S virtualbox

# A reboot is required to load the kernel modules
reboot
```

## Email

- Download the Proton Mail Bridge PKGBUILD: <https://proton.me/mail/bridge>

```bash
makepkg -si
```

- Open Proton Mail Bridge and log in
- Install the geary email client

```bash
pacman -S geary
```

- Open geary
- Add the account following Proton Mail Bridge's instructions

## Traffic Usage

Nethogs shows per-app network utilization.

```bash
pacman -S nethogs

# You'll need to run this with sudo if you aren't root
nethogs
```

## Wine

```bash
pacman -S wine
```

You can adjust the dpi scaling for wine with `winecfg`.

## KDE Connect (GSConnect)

Install the GSConnect extension for Gnome.

Open the firewall for connecting devices: <https://userbase.kde.org/KDEConnect#Troubleshooting>

```bash
sudo ufw allow 1714:1764/udp
sudo ufw allow 1714:1764/tcp
sudo ufw reload
```

## Python

### Pyenv

<https://github.com/pyenv/pyenv?tab=readme-ov-file#installation>

```bash
curl https://pyenv.run | bash
```

Add to `~/.zshrc`:

```bash
export PYENV_ROOT="$HOME/.pyenv"
[[ -d $PYENV_ROOT/bin ]] && export PATH="$PYENV_ROOT/bin:$PATH"
eval "$(pyenv init -)"
```

Install and use a Python version:

```bash
pyenv install 3.13
pyenv global 3.13
```

### Poetry

<https://python-poetry.org/docs/>

```bash
python -m pip install --user pipx
python -m pipx ensurepath

pipx install poetry
pipx ensurepath # source ~/.zshrc or ~/.bashrc
```

Create a new project in the current directory:

```bash
poetry new .
```

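`poetry new` scaffolds a package layout plus a `pyproject.toml`. Roughly what it generates (the exact tables vary by Poetry version, and the name/author values here are placeholders):

```toml
[tool.poetry]
name = "my-project"
version = "0.1.0"
description = ""
authors = ["Your Name <you@example.com>"]

[tool.poetry.dependencies]
python = "^3.13"

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
```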
## Note Taking

```bash
flatpak install org.kde.marknote
```

## Calculator

```bash
flatpak install org.gnome.Calculator
```

## Disk Usage

```bash
flatpak install org.gnome.baobab
```

9
active/os_arch/arch_support_files/95-systemd-boot.hook
Normal file
@@ -0,0 +1,9 @@
[Trigger]
Type = Package
Operation = Upgrade
Target = systemd

[Action]
Description = Gracefully upgrading systemd-boot...
When = PostTransaction
Exec = /usr/bin/systemctl restart systemd-boot-update.service

4
active/os_arch/arch_support_files/arch.conf
Normal file
@@ -0,0 +1,4 @@
title Arch Linux
linux /vmlinuz-linux
initrd /initramfs-linux.img
options quiet splash rd.luks.name=UUID=root root=/dev/mapper/root rootflags=subvol=root nvme.noacpi=1 acpi_osi="!Windows 2020" mem_sleep_default="deep" rw

4
active/os_arch/arch_support_files/loader.conf
Normal file
@@ -0,0 +1,4 @@
default arch.conf
timeout 4
console-mode max
editor no

1032
active/os_arch/ath12k/ath12k-fw-repo
Executable file
File diff suppressed because it is too large
BIN
active/os_arch/ath12k/board.bin
Normal file
Binary file not shown.
BIN
active/os_arch/ath12k/regdb.bin
Normal file
Binary file not shown.
61
active/os_arch/deprecated.md
Normal file
@@ -0,0 +1,61 @@
# Deprecated

## Firejail

Don't use firejail; it's a suid binary which only runs in userspace. Apparmor does
almost exactly the same thing but runs in the kernel at boot and protects you more
completely. I'm leaving this here in case you're interested, but realistically you
should just learn apparmor.

Firejail launches supported applications in a sandboxed environment where it limits access
to system files and resources.

For example:

- Firefox will not be able to access more than a small subset of your home directory.
- VSCode will not be able to access ~/.config/autostart.

1. `sudo pacman -S firejail`
2. `sudo apparmor_parser -r /etc/apparmor.d/firejail-default`
3. `sudo firecfg`
4. `firecfg --fix`
5. `sudo rm /usr/local/bin/dnsmasq` (this fixes an issue with virsh network start)
6. Add a pacman hook to apply firejail on install

/etc/pacman.d/hooks/firejail.hook

```conf
[Trigger]
Type = Path
Operation = Install
Operation = Upgrade
Operation = Remove
Target = usr/bin/*
Target = usr/share/applications/*.desktop

[Action]
Description = Configure symlinks in /usr/local/bin based on firecfg.config...
When = PostTransaction
Depends = firejail
Exec = /bin/sh -c 'firecfg >/dev/null 2>&1'
```

You can run firejail with noprofile to fix access issues (like the firefox gnome connector):

```bash
firejail --noprofile firefox
```

You'll probably want to enable the following:

`sudo cat /etc/firejail/firejail.config | grep -e '^[^#].*'`

/etc/firejail/firejail.config

```conf
browser-disable-u2f no
chroot yes
firejail-prompt yes
force-nonewprivs yes
tracelog yes
```

511
active/os_arch/gaming.md
Normal file
@@ -0,0 +1,511 @@
# Gaming

- [Gaming](#gaming)
- [Discord](#discord)
- [Steam](#steam)
- [autostart](#autostart)
- [mangohud](#mangohud)
- [Scaling](#scaling)
- [Streaming](#streaming)
- [FSR](#fsr)
- [Sunshine and Moonlight](#sunshine-and-moonlight)
- [Install Sunshine](#install-sunshine)
- [Ports](#ports)
- [Install Moonlight](#install-moonlight)
- [Flatpak](#flatpak)
- [Commands](#commands)
- [Configuration](#configuration)
- [ProtonUp-QT](#protonup-qt)
- [VSCode](#vscode)
- [XWayland](#xwayland)
- [Wine](#wine)
- [Spotify](#spotify)
- [VLC](#vlc)
- [Remote Desktop](#remote-desktop)
- [OBS](#obs)
- [Xbox Controller](#xbox-controller)
- [Mangohud](#mangohud-1)
- [Minecraft with Mangohud](#minecraft-with-mangohud)
- [ffmpeg](#ffmpeg)
- [AUR](#aur)
- [Intel 11th gen](#intel-11th-gen)
- [AMD 7900xtx](#amd-7900xtx)
- [Taking Game Clips](#taking-game-cips)
- [MPV](#mpv)
- [Minecraft](#minecraft)
- [Launcher](#launcher)
- [MultiMC](#multimc)

## Discord

```bash
pacman -S discord
```

## Steam

<https://wiki.archlinux.org/title/Official_repositories#multilib>

Edit /etc/pacman.conf:

```conf
[multilib]
Include = /etc/pacman.d/mirrorlist
```

```bash
pacman -S steam
```

When prompted use vulkan-radeon on AMD and vulkan-intel on Intel.

### autostart

```bash
ln -s ~/.local/share/applications/steam-native.desktop ~/.config/autostart/
```

### mangohud

Start steam with mangohud:

```bash
pacman -S mangohud lib32-mangohud
cp /usr/share/applications/steam.desktop ~/.local/share/applications/steam.desktop
```

Edit ~/.local/share/applications/steam.desktop:

```conf
Exec=/usr/bin/mangohud /usr/bin/steam-native %U
```

### Scaling

On HiDPI screens you might need to manually scale the Steam interface. This can be done by editing
the .desktop file:

```bash
cp /usr/share/applications/steam.desktop ~/.local/share/applications/steam.desktop
```

Edit `~/.local/share/applications/steam.desktop`:

```conf
Exec=/usr/bin/mangohud /usr/bin/steam-runtime -forcedesktopscaling 2 %U
```

### Streaming

See [Sunshine and Moonlight](#sunshine-and-moonlight) first! It's much better than Steam streaming.

Works great from Arch hosts to Arch guests with a little configuration.

1. Wifi

Your wifi should be isolated to the fastest band you have available, at the widest channel width, on
the least populated channel. BSS Transition and Fast Roaming are the only settings I enable, since
they do make a difference for video calls and streaming while moving from room to room but don't
cause interference or connectivity issues like the other settings.



2. Host

Your host settings don't matter too much, since you'll be limited by Steam's compatibility with
Arch's mesa drivers, the current kernel version, and whether Mercury is retrograde. Steam does a
pretty good job automatically selecting the correct libraries and capture mechanism. Here are
the settings I use:



3. Client

Here's where things start to matter. As of kernel `6.7.6-arch1-2`, `mesa 1:24.0.2-1`,
`libva 2.20.0-1`, `mesa-vdpau 1:24.0.2-1`, and `libvdpau 1.5-2`, hardware decoding works on Arch
with AMD integrated graphics on both my Framework (Ryzen 7 7840U w/ Radeon 780M Graphics) and my
Steam Deck.

In the Steam advanced client settings select "Enhanced 4k" to start. Do not change
the resolution limit: decoding at the native resolution of your screen will always perform
better than using a non-native resolution. In my testing even lower resolutions result in
20-30ms of additional delay over native. The framerate limit should stay at automatic, which
tries to match the streaming framerate to your display's refresh rate; you can set it to 60 if
things are lagging too much. The bandwidth limit can be adjusted up and down to fit your wifi's
limitations if you are experiencing frame drops and stuttering. If you experience issues like
crashing on launch, a blank screen, or strange artifacts, disable hardware decoding. HEVC and low
latency networking have never caused me issues. Here are my settings:



And to show what a properly configured client can do, here's Crab Champions streamed to my
laptop at 2k, 99fps. Note the streaming latency is ~10ms and the dark blue (encode time), light
blue (network transmit time), and red (decode time) lines are extremely close together.



My Steam Deck performs about the same but with a lower (~7ms) streaming latency. This is
expected because the Steam Deck streams at 1280x720, which means faster encode and transmit.

If the red line is far above the blue lines, your decoding (software or hardware) is
struggling to keep up: either it's not decoding at native resolution (likely transforming the
decoded frame to match the display resolution) or your cpu/gpu is doing something else.

If the light blue line is far above the dark blue line, your wifi is slow. Increase channel
width, increase transmit power, ensure devices are connected to 5 or 6GHz, and ensure your
device has the latest drivers.

If the dark blue line is far above the x axis of the graph, your host is struggling to encode
fast enough. Likely the host's cpu/gpu is doing something else, or it's an old computer.

### FSR

<https://linux-gaming.kwindu.eu/index.php?title=FSR_-_FidelityFX_Super_Resolution>

> This sharpens the image. 4 is an example value. 0 is maximum sharpness, higher values mean less sharpening. 5 is the maximum value. The default is 2.

```bash
WINE_FULLSCREEN_FSR=1 WINE_FULLSCREEN_FSR_STRENGTH=2
```

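For a Steam game these environment variables are typically applied per title via its launch options (Properties → Launch Options); a sketch, assuming a Proton/Wine game:

```conf
WINE_FULLSCREEN_FSR=1 WINE_FULLSCREEN_FSR_STRENGTH=2 %command%
```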
## Sunshine and Moonlight

<https://docs.lizardbyte.dev/projects/sunshine/en/latest/>

Sunshine is a desktop streaming service that leverages hardware encoding to provide near-zero latency
network streaming to any device that can run Moonlight.

### Install Sunshine

The flatpak seems to work well. The Arch package keeps breaking due to
deps - see boost-libs and then libicuuc.so.76.

<https://docs.lizardbyte.dev/projects/sunshine/latest/md_docs_2getting__started.html#install-system-level>

```bash
flatpak install --system flathub dev.lizardbyte.app.Sunshine
sudo -i
flatpak run --command=additional-install.sh dev.lizardbyte.app.Sunshine
systemctl enable --now sunshine
```

#### Ports

- HTTPS: 47984 TCP (offset by -5 from the main port)
- HTTP: 47989 TCP (the main port)
- Web: 47990 TCP (offset by +1 from the main port)
- RTSP: 48010 TCP/UDP (offset by +21 from the main port)
- Video: 47998 UDP (offset by +9 from the main port)
- Control: 47999 UDP (offset by +10 from the main port)
- Audio: 48000 UDP (offset by +11 from the main port)
- Mic (unused): 48002 UDP (offset by +13 from the main port)

TCP: 47984, 47989, 48010
UDP: 47998-48000, 48002, 48010

```bash
ufw allow 47984/tcp
ufw allow 47989/tcp
ufw allow 48010/tcp
# ufw port ranges use a colon, not a hyphen
ufw allow 47998:48000/udp
ufw allow 48002/udp
ufw allow 48010/udp
```

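Since the offsets above are all relative to the main (HTTP) port, the whole set can be derived in the shell - useful if you ever move Sunshine off the default 47989:

```shell
# Derive Sunshine's port set from the main (HTTP) port.
base=47989
echo "https=$((base - 5)) http=$base web=$((base + 1)) rtsp=$((base + 21))"
echo "video=$((base + 9)) control=$((base + 10)) audio=$((base + 11)) mic=$((base + 13))"
# prints:
# https=47984 http=47989 web=47990 rtsp=48010
# video=47998 control=47999 audio=48000 mic=48002
```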
### Install Moonlight

#### Flatpak

```bash
flatpak install moonlight
```

#### Commands

- Ctrl + Alt + Shift + Q (Moonlight Stream): quit the streaming session (leaving the game running on the host PC)
- Ctrl + Alt + Shift + S (Moonlight Stream): open the performance stats overlay (not supported on Steam Link or Raspberry Pi)
- Ctrl + Alt + Shift + M (Moonlight Stream): toggle mouse mode (pointer capture or direct control)
- Ctrl + Alt + Shift + V (Moonlight Stream): type clipboard text on the host

### Configuration

Sunshine doesn't need much configuration. For streaming to devices like the Steam Deck and my computer, I would recommend adding a new application that uses the `display_scale.py` script to set the host's resolution before connecting.

1. Copy `display_scale.py` to your host
2. Create a new application
3. Add command: `/home/ducoterra/display_scale.py 1920x1080 1`
4. Optionally add undo command: `/home/ducoterra/display_scale.py 3840x2160 1`
5. Save and connect!

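`display_scale.py` is a local script whose contents aren't reproduced here. As a rough sketch of what such a resolution-switching wrapper does, a dry-run shell equivalent might look like the following (the output name `HDMI-A-1` and the use of xrandr are assumptions, not the script's actual implementation):

```bash
# Hypothetical dry-run stand-in for display_scale.py: print the xrandr
# invocation that would apply the requested mode and scale on an X11 host.
# "HDMI-A-1" is a placeholder output name.
mode=${1:-1920x1080}
scale=${2:-1}
echo "xrandr --output HDMI-A-1 --mode $mode --scale ${scale}x${scale}"
```

The command/undo pair in Sunshine would then call it with the streaming mode and the native desktop mode respectively.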
## ProtonUp-QT

<https://davidotek.github.io/protonup-qt/>

```bash
mv ~/Downloads/ProtonUp-Qt*.AppImage ~/Applications/ProtonUp-Qt.AppImage
chmod +x ~/Applications/*.AppImage
```

```conf
[Desktop Entry]
Name=ProtonUp-Qt
Exec=/home/ducoterra/Applications/ProtonUp-Qt.AppImage
Icon=/home/ducoterra/.icons/ProtonUp-Qt.png
Type=Application
```

## VSCode

For the open-source build of VSCode, install `code`:

```bash
pacman -S code
```

## XWayland

Provides compatibility with X server applications (like Wine).

```bash
pacman -S xorg-xwayland
```

## Wine

```bash
pacman -S wine
```

## Spotify

```bash
pacman -S spotify-launcher
```

## VLC

```bash
pacman -S vlc
```

## Remote Desktop

```bash
pacman -S remmina freerdp
```

## OBS

<https://aur.archlinux.org/packages/obs-vkcapture-git>

```bash
pacman -S obs-studio qt6-wayland
cd ~/aur
git clone https://aur.archlinux.org/obs-vkcapture-git.git
cd obs-vkcapture-git
makepkg -si
```

Add "Game Capture" to your scene.

Start your games with `env OBS_VKCAPTURE=1 %command%`.

## Xbox Controller

1. Install bluetooth packages

```bash
pacman -S bluez bluez-plugins bluez-utils
```

1. Edit the bluetooth conf and set the controller to bredr

/etc/bluetooth/main.conf

```conf
ControllerMode = bredr
```

1. Now restart the bluetooth service

```bash
systemctl restart bluetooth
```

1. Connect your controller
1. Comment out the line in the bluetooth conf you just edited
1. Restart the bluetooth service

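The edit-then-revert dance can be scripted with `sed`. Shown here against a scratch copy so the substitution can be verified before pointing it at the real `/etc/bluetooth/main.conf`:

```bash
# Flip ControllerMode to bredr with sed, demonstrated on a scratch copy
# of the config rather than the real /etc/bluetooth/main.conf.
conf=$(mktemp)
printf '#ControllerMode = dual\n' > "$conf"
# Match the line whether or not it is commented out, then rewrite it.
sed -i 's/^#\?ControllerMode *=.*/ControllerMode = bredr/' "$conf"
cat "$conf"
```

After pairing, the inverse substitution (`s/^ControllerMode = bredr/#ControllerMode = bredr/`) comments the line back out before restarting bluetooth again.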
## Mangohud

<https://github.com/flightlessmango/MangoHud#arch-based-distributions>

```bash
sudo pacman -S mangohud lib32-mangohud
```

```bash
mkdir -p ~/.config/MangoHud
cp /usr/share/doc/mangohud/MangoHud.conf.example ~/.config/MangoHud/MangoHud.conf
```

Edit `~/.config/MangoHud/MangoHud.conf` and tweak as you see fit.

Then add `mangohud env MANGOHUD_CONFIGFILE=/home/ducoterra/.config/MangoHud/MangoHud.conf %command%` to your Steam launch options.

### Minecraft with Mangohud

MultiMC offers a "Wrapper" option in Settings -> Custom commands. Add `/usr/bin/mangohud --dlsym`.

## ffmpeg

<https://wiki.archlinux.org/title/FFmpeg>

### AUR

<https://aur.archlinux.org/ffmpeg-full.git>

### Intel 11th gen

```bash
pacman -S ffmpeg libmfx intel-media-sdk

# the first -c:v selects the hardware decoder, the second the encoder
ffmpeg \
-hwaccel qsv \
-c:v hevc_qsv \
-hwaccel_output_format qsv \
-i input.mkv \
-c:v hevc_qsv \
-global_quality 25 \
output.mp4
```

### AMD 7900xtx

<https://wiki.archlinux.org/title/FFmpeg#VA-API>

```bash
sudo pacman -S ffmpeg mesa libva-mesa-driver

reboot

ffmpeg \
-hwaccel vaapi \
-vaapi_device /dev/dri/renderD128 \
-hwaccel_output_format vaapi \
-i input.mp4 \
-c:v hevc_vaapi \
-rc_mode 1 \
-qp 25 \
output.mp4

ffmpeg \
-hwaccel vaapi \
-vaapi_device /dev/dri/renderD128 \
-hwaccel_output_format vaapi \
-i input.mp4 \
-c:v h264_vaapi \
-b:v 0 \
-maxrate 100M \
output.mp4
```

### Taking Game Clips

1. Install [MPV](#mpv)
1. Open the video clip in mpv
1. Press `del` to always show the seek bar
1. Click the timestamp in the bottom left to show milliseconds
1. Use `,` and `.` to seek frame by frame to find the start frame
1. Use the following ffmpeg command to trim clips

```bash
# format start_time and end_time like `00:00:34.000` (hh:mm:ss.mmm)
export input_file=
export output_file=
export start_time=
export end_time=

# -r 30 == frame rate of 30
# -vf scale scales the output
ffmpeg \
-ss $start_time \
-to $end_time \
-i $input_file \
-c:v libx264 \
-b:v 0 \
-vf scale=1920:1080 \
-r 30 \
$output_file
```

1. Then concat clips with the following

```bash
# Create clips.txt
cat <<EOF > clips.txt
file '/path/to/file1'
file '/path/to/file2'
file '/path/to/file3'
EOF
```

```bash
ffmpeg -f concat -i clips.txt -c copy output.mp4
```

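The `hh:mm:ss.mmm` timestamps used by `-ss`/`-to` can be sanity-checked with a small helper that converts them to seconds (handy for computing a clip's length as end minus start):

```bash
# Convert an hh:mm:ss.mmm timestamp to seconds.
to_seconds() {
  IFS=: read -r h m s <<< "$1"
  awk -v h="$h" -v m="$m" -v s="$s" 'BEGIN { printf "%.3f\n", h*3600 + m*60 + s }'
}

to_seconds 00:01:30.500   # 90.500
```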
## MPV

<https://wiki.archlinux.org/title/Mpv>

MPV is an alternative to VLC with a couple of key benefits:

1. Reverse frame-by-frame seeking
2. Millisecond timestamps

These are great for video clipping with ffmpeg.

```bash
pacman -S mpv
```

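Rather than clicking the timestamp every session, mpv can be told to show fractional seconds on the OSD by default via its `osd-fractions` option:

```conf
# ~/.config/mpv/mpv.conf
# Show OSD times with fractions of a second (milliseconds) by default.
osd-fractions=yes
```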
## Minecraft

### Launcher

<https://wiki.archlinux.org/title/minecraft>

<https://aur.archlinux.org/packages/minecraft-launcher>

```bash
cd ~/aur
git clone https://aur.archlinux.org/minecraft-launcher.git
cd minecraft-launcher
makepkg -si
```

### MultiMC

MultiMC lets you maintain and run multiple Minecraft installations, with handy shortcuts for installing mod loaders and many more features. It's super easy to install on Arch.

<https://github.com/MultiMC/multimc-pkgbuild>

1. Install `jre-openjdk-headless`
1. Build and install from the PKGBUILD:

```bash
cd ~/aur
git clone https://github.com/MultiMC/multimc-pkgbuild.git
cd multimc-pkgbuild
makepkg -si
```