driveripper disk monitor agent deployed
All checks were successful
Podman DDNS Image / build-and-push-ddns (push) Successful in 1m16s
@@ -13,6 +13,7 @@
 - [Monitoring Disk Health](#monitoring-disk-health)
 - [Defragmenting and Compressing](#defragmenting-and-compressing)
 - [Converting ext4 to btrfs](#converting-ext4-to-btrfs)
+- [Error Kata](#error-kata)
 
 Oracle [has decent docs here](https://docs.oracle.com/en/operating-systems/oracle-linux/8/btrfs/btrfs-ResizingaBtrfsFileSystem.html)
 
@@ -209,4 +210,27 @@ btrfs filesystem defragment -c zstd:20 /btrfs/pool0
 # Unmount and then run btrfs-convert
 umount /path/to/mount
 btrfs-convert /dev/sdX1
 ```
+
+## Error Kata
+
+After unplanned shutdowns, power loss, physically moving the server, or whenever
+you feel like it, it's generally a good idea to scrub your btrfs pools for
+errors and check your device stats.
+
+```bash
+export POOL=pool1
+# Find any device stat with a number > 0
+# This means there's an error
+btrfs device stats /btrfs/${POOL} | grep -vE ' 0$'
+
+# If you find an error, grab the smartctl details to check for disk errors
+# Scrub the btrfs pool to check for other hidden errors
+lsblk -fs
+smartctl -a /dev/<dev>
+btrfs scrub start -Bd /btrfs/${POOL}
+btrfs scrub status /btrfs/${POOL}
+
+# Check the filesystem status after a scrub
+btrfs filesystem show /btrfs/${POOL}
+```
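Given the commit title (a deployed disk monitor agent), the nonzero-counter check above is the piece worth automating. A minimal sketch of that filter as a reusable function, assuming the `[/dev/sdX].counter_name value` line format that `btrfs device stats` emits; the sample counter values are made up for illustration:

```shell
#!/usr/bin/env bash
# Sketch: flag nonzero btrfs device-stat counters.
# check_stats reads `btrfs device stats` output on stdin and prints only
# the lines whose trailing counter is nonzero; with no hits, grep -v
# prints nothing and exits 1, so a clean pool produces a silent run.
check_stats() {
    grep -vE ' 0$'
}

# Illustrative sample in the stats line format (values are invented)
sample='[/dev/sda].write_io_errs   0
[/dev/sda].read_io_errs    0
[/dev/sda].corruption_errs 3'

if errors=$(printf '%s\n' "$sample" | check_stats); then
    echo "errors found:"
    echo "$errors"
fi
```

In the real agent the sample would be replaced by `btrfs device stats /btrfs/${POOL} | check_stats`, with the nonzero lines fed to whatever alerting the monitor uses.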
@@ -129,6 +129,10 @@ sudo semodule -l
 
 # Remove an active policy
 sudo semodule -r clamav-notifysend
+
+# Set a file type to allow systemd execute
+semanage fcontext -a -t bin_t /root/.local/share
+chcon -Rv -u system_u -t bin_t /root/.local/share
 ```
 
 ### Showing Dontaudit Rules
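One caveat worth keeping next to the `chcon` line above: `chcon` relabels immediately but its change is lost on the next filesystem relabel, while the `semanage fcontext` rule is the persistent record. A hedged sketch of the persistent route, where the `(/.*)?` suffix covering the directory's contents is my addition and not part of the commit:

```shell
# Record the persistent rule, then let restorecon apply it from the
# policy store; chcon alone is overwritten by any filesystem relabel.
sudo semanage fcontext -a -t bin_t '/root/.local/share(/.*)?'
sudo restorecon -Rv /root/.local/share

# Verify the resulting label on the directory itself
ls -Zd /root/.local/share
```

These commands need root on an SELinux-enabled host, so treat them as an operational fragment rather than something to run blindly.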
@@ -215,7 +215,7 @@ def load_ses_creds() -> AWS_SES_DOTENV:
     raw_values = dotenv_values(ses_dotenv_location)
     if raw_values:
         aws_ses_config = cast(AWS_SES_DOTENV, raw_values)
-        print(f"AWS SES Credentials loaded: {aws_ses_config}")
+        # print(f"AWS SES Credentials loaded: {aws_ses_config}")
         return aws_ses_config
     print("No email credentials supplied. Exiting.")
     exit(1)
@@ -302,9 +302,9 @@ def run_conversation(user_message: str, max_tool_calls=10):
     print(f"Attempting to call {tool_name} with arguments {arguments}...")
 
     # Give the user a chance to stop a problem before it starts
-    keep_going = input("Continue? (Y/n) ")
-    if keep_going.lower() == "n":
-        exit(1)
+    # keep_going = input("Continue? (Y/n) ")
+    # if keep_going.lower() == "n":
+    #     exit(1)
 
     result = execute_tool(tool_name, arguments)
 