<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://docs.delftsolutions.nl/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Tannaz</id>
	<title>Delft Solutions - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://docs.delftsolutions.nl/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Tannaz"/>
	<link rel="alternate" type="text/html" href="https://docs.delftsolutions.nl/wiki/Special:Contributions/Tannaz"/>
	<updated>2026-04-03T22:10:02Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.39.3</generator>
	<entry>
		<id>https://docs.delftsolutions.nl/index.php?title=Install_a_new_Disk_in_Server&amp;diff=459</id>
		<title>Install a new Disk in Server</title>
		<link rel="alternate" type="text/html" href="https://docs.delftsolutions.nl/index.php?title=Install_a_new_Disk_in_Server&amp;diff=459"/>
		<updated>2024-09-16T09:04:28Z</updated>

		<summary type="html">&lt;p&gt;Tannaz: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== How to Install a New Disk in a Server ==&lt;br /&gt;
To install a new disk in a server used for Ceph and make it part of the Ceph cluster with Proxmox, follow these steps:&lt;br /&gt;
&lt;br /&gt;
=== Pick a slot number where you want to put the disk ===&lt;br /&gt;
* Find an empty slot, or use the slot of a broken disk you are replacing.&lt;br /&gt;
* Check the drive bay number (printed on the front of the server) and make sure it matches the slot number in iDRAC&lt;br /&gt;
=== Physically Install the disk ===&lt;br /&gt;
* Assemble the caddy and insert the disk.&lt;br /&gt;
* For hot-swappable drives (both NVMe and non-NVMe), there is no need to power down the server; you can insert the disk while it is running.&lt;br /&gt;
* For non-hot-swappable drives, check the server manual to determine if powering off the server is necessary.&lt;br /&gt;
* Follow the server’s manual and insert the disk into the correct drive bay, matching the slot number in iDRAC.&lt;br /&gt;
=== Create OSD for the new disk ===&lt;br /&gt;
* Depending on the server configuration, the disk may automatically become available in the Disk Overview in Proxmox, labelled Unused&lt;br /&gt;
* Create the OSD with encryption enabled and the correct device class (HDD, SSD, NVMe)&lt;br /&gt;
* If the OSD was not created automatically, create an encrypted OSD manually with the ```pveceph osd create /dev/&amp;lt;dev&amp;gt; --crush-device-class &amp;lt;dev-class&amp;gt; --encrypted 1``` command, or create the OSD in the Proxmox web GUI&lt;br /&gt;
* Verify that the disk is integrated into the Ceph cluster with ```ceph osd tree```&lt;br /&gt;
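The manual creation step above can be sketched end to end; the device path and device class below are illustrative, so substitute your own:&lt;br /&gt;

```shell
# Create an encrypted OSD on the new disk (example device path and class)
pveceph osd create /dev/nvme2n1 --crush-device-class nvme --encrypted 1

# Confirm the new OSD appears in the CRUSH tree under this host
ceph osd tree
```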
=== Verify the OSD is in the up/in state in Ceph ===&lt;br /&gt;
* Check the health of the Ceph cluster by running ```ceph -s```&lt;br /&gt;
* Verify the new OSD is up and running using ```ceph osd status``` and check for any errors with ```ceph health detail```&lt;br /&gt;
=== Make sure Ceph is healthy and is using the new OSD ===&lt;br /&gt;
* Ceph should be healthy and show a green state, without any remapped PGs or degraded data redundancy&lt;br /&gt;
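The health checks above are read-only and can be run together from any cluster node with the ceph CLI available (a sketch):&lt;br /&gt;

```shell
ceph -s              # overall cluster state; expect HEALTH_OK
ceph osd status      # the new OSD should be listed as up and in
ceph health detail   # details on any warnings, e.g. remapped PGs
```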
&lt;br /&gt;
Resource: https://pve.proxmox.com/pve-docs/pveceph.1.html&lt;/div&gt;</summary>
		<author><name>Tannaz</name></author>
	</entry>
	<entry>
		<id>https://docs.delftsolutions.nl/index.php?title=Install_a_new_Disk_in_Server&amp;diff=458</id>
		<title>Install a new Disk in Server</title>
		<link rel="alternate" type="text/html" href="https://docs.delftsolutions.nl/index.php?title=Install_a_new_Disk_in_Server&amp;diff=458"/>
		<updated>2024-09-16T09:02:49Z</updated>

		<summary type="html">&lt;p&gt;Tannaz: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== How to Install a New Disk in a Server ==&lt;br /&gt;
To install a new disk in a server used for Ceph and make it part of the Ceph cluster with Proxmox, follow these steps:&lt;br /&gt;
&lt;br /&gt;
=== Pick a slot number where you want to put the disk ===&lt;br /&gt;
* Find an empty slot, or use the slot of a broken disk you are replacing.&lt;br /&gt;
* Check the drive bay number (printed on the front of the server) and make sure it matches the slot number in iDRAC&lt;br /&gt;
=== Physically Install the disk ===&lt;br /&gt;
* Assemble the caddy and insert the disk.&lt;br /&gt;
* For hot-swappable drives (both NVMe and non-NVMe), there is no need to power down the server; you can insert the disk while it is running.&lt;br /&gt;
* For non-hot-swappable drives, check the server manual to determine if powering off the server is necessary.&lt;br /&gt;
* Follow the server’s manual and insert the disk into the correct drive bay, matching the slot number in iDRAC.&lt;br /&gt;
=== Create OSD for the new disk ===&lt;br /&gt;
* Depending on the server configuration, the disk may automatically become available in the Disk Overview in Proxmox, labelled Unused&lt;br /&gt;
* Create the OSD with encryption enabled and the correct device class (HDD, SSD, NVMe)&lt;br /&gt;
* If the OSD was not created automatically, create an encrypted OSD manually with the ```pveceph osd create /dev/&amp;lt;dev&amp;gt; --crush-device-class &amp;lt;dev-class&amp;gt; --encrypted 1``` command, or create the OSD in the Proxmox web GUI&lt;br /&gt;
* Verify that the disk is integrated into the Ceph cluster with ```ceph osd tree```&lt;br /&gt;
=== Verify the OSD is in the up/in state in Ceph ===&lt;br /&gt;
* Check the health of the Ceph cluster by running ```ceph -s```&lt;br /&gt;
* Verify the new OSD is up and running using ```ceph osd status``` and check for any errors with ```ceph health detail```&lt;br /&gt;
=== Make sure Ceph is healthy and is using the new OSD ===&lt;br /&gt;
* Ceph should be healthy and show a green state, without any remapped PGs or degraded data redundancy&lt;/div&gt;</summary>
		<author><name>Tannaz</name></author>
	</entry>
	<entry>
		<id>https://docs.delftsolutions.nl/index.php?title=Install_a_new_Disk_in_Server&amp;diff=457</id>
		<title>Install a new Disk in Server</title>
		<link rel="alternate" type="text/html" href="https://docs.delftsolutions.nl/index.php?title=Install_a_new_Disk_in_Server&amp;diff=457"/>
		<updated>2024-09-13T15:52:22Z</updated>

		<summary type="html">&lt;p&gt;Tannaz: Created page with &amp;quot;== How to Install a New Disk in a Server == To insert the disk into the server used for Ceph and configure it for use, including making it a part of the Ceph cluster with Proxmox, follow these steps:  Pick a slot number where you want to put the disk You can find the empty slot numbers or replace a broken disk. Check the drive bay number (that is written in front side of the server) to be matched to the slot number in iDRAC Physically Install the disk:  Assemble the cadd...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== How to Install a New Disk in a Server ==&lt;br /&gt;
To install a new disk in a server used for Ceph and make it part of the Ceph cluster with Proxmox, follow these steps:&lt;br /&gt;
&lt;br /&gt;
Pick a slot number where you want to put the disk&lt;br /&gt;
Find an empty slot, or use the slot of a broken disk you are replacing. Check the drive bay number (printed on the front of the server) and make sure it matches the slot number in iDRAC&lt;br /&gt;
Physically Install the disk: &lt;br /&gt;
Assemble the caddy and insert the disk.&lt;br /&gt;
For hot-swappable drives (both NVMe and non-NVMe), there is no need to power down the server; you can insert the disk while it is running.&lt;br /&gt;
For non-hot-swappable drives, check the server manual to determine if powering off the server is necessary.&lt;br /&gt;
Follow the server’s manual and insert the disk into the correct drive bay, matching the slot number in iDRAC.&lt;br /&gt;
Create OSD for the new disk&lt;br /&gt;
Depending on the server configuration, the disk may automatically become available in the Disk Overview in Proxmox, labelled Unused&lt;br /&gt;
Create the OSD with encryption enabled and the correct device class (HDD, SSD, NVMe)&lt;br /&gt;
If the OSD was not created automatically, you can create an encrypted OSD manually using the pveceph osd create /dev/&amp;lt;dev&amp;gt; --crush-device-class &amp;lt;dev-class&amp;gt; --encrypted 1 command, or create the OSD in the Proxmox web GUI&lt;br /&gt;
Verify that the disk is integrated into the Ceph cluster using ceph osd tree&lt;br /&gt;
Verify the OSD is in the up/in state in Ceph&lt;br /&gt;
Check the health of the Ceph cluster by running ceph -s&lt;br /&gt;
Verify the new OSD is up and running using ceph osd status and check for any errors with ceph health detail&lt;br /&gt;
Make sure Ceph is healthy and is using the new OSD&lt;br /&gt;
Ceph should be healthy and show a green state, without any remapped PGs or degraded data redundancy&lt;/div&gt;</summary>
		<author><name>Tannaz</name></author>
	</entry>
	<entry>
		<id>https://docs.delftsolutions.nl/index.php?title=Internal&amp;diff=456</id>
		<title>Internal</title>
		<link rel="alternate" type="text/html" href="https://docs.delftsolutions.nl/index.php?title=Internal&amp;diff=456"/>
		<updated>2024-09-13T15:50:08Z</updated>

		<summary type="html">&lt;p&gt;Tannaz: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Finance ==&lt;br /&gt;
&lt;br /&gt;
=== Exact ===&lt;br /&gt;
&lt;br /&gt;
* [[booking bonus|Booking bonus]]&lt;br /&gt;
* [[booking wages|Booking wages]]&lt;br /&gt;
* [[booking quarterly hosting invoice|Booking quarterly hosting invoice]]&lt;br /&gt;
* [[new receipt|Enter a new receipt]]&lt;br /&gt;
* [[reconciliation|Reconciliation of transaction]]&lt;br /&gt;
* [[invoicing|Send an invoice]]&lt;br /&gt;
* [[payment reminders|Send payment reminder]]&lt;br /&gt;
* [[invoice approval|Process for approving invoices (/filed receipts)]]&lt;br /&gt;
&lt;br /&gt;
=== Bunq ===&lt;br /&gt;
&lt;br /&gt;
* [[top up account|Top up expense account]]&lt;br /&gt;
&lt;br /&gt;
== Work Process ==&lt;br /&gt;
&lt;br /&gt;
* [[Definition of done|Definition of Done]]&lt;br /&gt;
* [[Incident Handling|Incident Handling]]&lt;br /&gt;
* [[SRE Maintenance|SRE Maintenance]]&lt;br /&gt;
&lt;br /&gt;
== Internal Process ==&lt;br /&gt;
* [[timetracking|Timetracking process]]&lt;br /&gt;
* [[Starting work for a new client]]&lt;br /&gt;
* [[12 percent|12% time]]&lt;br /&gt;
* [[Annual leave|Annual leave]]&lt;br /&gt;
* [[Bonus allocation|Bonus allocation]]&lt;br /&gt;
* [[Calamity leave|Calamity leave]]&lt;br /&gt;
* [[Overtime|Overtime]]&lt;br /&gt;
* [[Retrospectives|Retrospectives]]&lt;br /&gt;
* [[Sick leave|Sick leave]]&lt;br /&gt;
* [[Training and self-study|Training and Self-Study]]&lt;br /&gt;
* [[Daily|Daily]]&lt;br /&gt;
&lt;br /&gt;
== Projects ==&lt;br /&gt;
&lt;br /&gt;
* Era Inventory [[project_era_inventory_api|API Description]]&lt;br /&gt;
&lt;br /&gt;
== SRE ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;To be further populated with guides from the drive&#039;&#039;&lt;br /&gt;
* [[create gitlab runner host|Create a GitLab runner host]]&lt;br /&gt;
* [[vm setup|Create a (Debian) VM]]&lt;br /&gt;
* [[border update|Process for updating a border]]&lt;br /&gt;
* [[border reboot|Reboot border without downtime]]&lt;br /&gt;
* [[WS Proxmox node reboot|Reboot WS Proxmox node without downtime]]&lt;br /&gt;
* [[Resize VM Disk]]&lt;br /&gt;
* [[SRE tools]]&lt;br /&gt;
* [[Enroll Mac in Kerberos]]&lt;br /&gt;
* [[Creating a VM on Hetzner]]&lt;br /&gt;
* [[Rebooting VM]]&lt;br /&gt;
* [[Rebooting Offsite]]&lt;br /&gt;
* [[ssh-fingerprints|Verifying SSH fingerprints]]&lt;br /&gt;
* [[Removing VM]]&lt;br /&gt;
* [[Install a new Disk in Server]]&lt;br /&gt;
&lt;br /&gt;
== Other ==&lt;br /&gt;
&lt;br /&gt;
* [[stack|Greenfield stack]]&lt;br /&gt;
* [[standard tools|Standard Tools]]&lt;br /&gt;
* [[list of unfurl debuggers|List of unfurl debuggers]]&lt;br /&gt;
* [[Recommended suppliers]]&lt;/div&gt;</summary>
		<author><name>Tannaz</name></author>
	</entry>
	<entry>
		<id>https://docs.delftsolutions.nl/index.php?title=Internal&amp;diff=454</id>
		<title>Internal</title>
		<link rel="alternate" type="text/html" href="https://docs.delftsolutions.nl/index.php?title=Internal&amp;diff=454"/>
		<updated>2024-09-03T12:10:39Z</updated>

		<summary type="html">&lt;p&gt;Tannaz: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Finance ==&lt;br /&gt;
&lt;br /&gt;
=== Exact ===&lt;br /&gt;
&lt;br /&gt;
* [[booking bonus|Booking bonus]]&lt;br /&gt;
* [[booking wages|Booking wages]]&lt;br /&gt;
* [[booking quarterly hosting invoice|Booking quarterly hosting invoice]]&lt;br /&gt;
* [[new receipt|Enter a new receipt]]&lt;br /&gt;
* [[reconciliation|Reconciliation of transaction]]&lt;br /&gt;
* [[invoicing|Send an invoice]]&lt;br /&gt;
* [[payment reminders|Send payment reminder]]&lt;br /&gt;
* [[invoice approval|Process for approving invoices (/filed receipts)]]&lt;br /&gt;
&lt;br /&gt;
=== Bunq ===&lt;br /&gt;
&lt;br /&gt;
* [[top up account|Top up expense account]]&lt;br /&gt;
&lt;br /&gt;
== Work Process ==&lt;br /&gt;
&lt;br /&gt;
* [[Definition of done|Definition of Done]]&lt;br /&gt;
* [[Incident Handling|Incident Handling]]&lt;br /&gt;
* [[SRE Maintenance|SRE Maintenance]]&lt;br /&gt;
&lt;br /&gt;
== Internal Process ==&lt;br /&gt;
* [[timetracking|Timetracking process]]&lt;br /&gt;
* [[Starting work for a new client]]&lt;br /&gt;
* [[12 percent|12% time]]&lt;br /&gt;
* [[Annual leave|Annual leave]]&lt;br /&gt;
* [[Bonus allocation|Bonus allocation]]&lt;br /&gt;
* [[Calamity leave|Calamity leave]]&lt;br /&gt;
* [[Overtime|Overtime]]&lt;br /&gt;
* [[Retrospectives|Retrospectives]]&lt;br /&gt;
* [[Sick leave|Sick leave]]&lt;br /&gt;
* [[Training and self-study|Training and Self-Study]]&lt;br /&gt;
* [[Daily|Daily]]&lt;br /&gt;
&lt;br /&gt;
== Projects ==&lt;br /&gt;
&lt;br /&gt;
* Era Inventory [[project_era_inventory_api|API Description]]&lt;br /&gt;
&lt;br /&gt;
== SRE ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;To be further populated with guides from the drive&#039;&#039;&lt;br /&gt;
* [[create gitlab runner host|Create a GitLab runner host]]&lt;br /&gt;
* [[vm setup|Create a (Debian) VM]]&lt;br /&gt;
* [[border update|Process for updating a border]]&lt;br /&gt;
* [[border reboot|Reboot border without downtime]]&lt;br /&gt;
* [[WS Proxmox node reboot|Reboot WS Proxmox node without downtime]]&lt;br /&gt;
* [[Resize VM Disk]]&lt;br /&gt;
* [[SRE tools]]&lt;br /&gt;
* [[Enroll Mac in Kerberos]]&lt;br /&gt;
* [[Creating a VM on Hetzner]]&lt;br /&gt;
* [[Rebooting VM]]&lt;br /&gt;
* [[Rebooting Offsite]]&lt;br /&gt;
* [[ssh-fingerprints|Verifying SSH fingerprints]]&lt;br /&gt;
* [[Removing VM]]&lt;br /&gt;
&lt;br /&gt;
== Other ==&lt;br /&gt;
&lt;br /&gt;
* [[stack|Greenfield stack]]&lt;br /&gt;
* [[standard tools|Standard Tools]]&lt;br /&gt;
* [[list of unfurl debuggers|List of unfurl debuggers]]&lt;br /&gt;
* [[Recommended suppliers]]&lt;/div&gt;</summary>
		<author><name>Tannaz</name></author>
	</entry>
	<entry>
		<id>https://docs.delftsolutions.nl/index.php?title=Replace/Install_a_new_SSD_in_Server&amp;diff=453</id>
		<title>Replace/Install a new SSD in Server</title>
		<link rel="alternate" type="text/html" href="https://docs.delftsolutions.nl/index.php?title=Replace/Install_a_new_SSD_in_Server&amp;diff=453"/>
		<updated>2024-09-03T09:59:58Z</updated>

		<summary type="html">&lt;p&gt;Tannaz: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;We use Ceph, so RAID is unnecessary for our installation, for these reasons:&lt;br /&gt;
&lt;br /&gt;
* Built-in Redundancy: Ceph provides data replication and redundancy across multiple OSDs (Object Storage Daemons) in the cluster. It automatically replicates data to ensure availability, so RAID&#039;s redundancy is redundant.&lt;br /&gt;
* Performance Impact: RAID can introduce additional latency and reduce the performance of SSDs. Ceph is designed to work efficiently with raw disks, and adding RAID can slow down operations.&lt;br /&gt;
* Wasted Resources: Using RAID means you&#039;re dedicating some of your disk capacity to redundancy (like RAID 1 mirroring). Ceph already replicates data across multiple disks or nodes, so this would lead to unnecessary resource usage.&lt;br /&gt;
* Complexity: RAID adds another layer of complexity that isn&#039;t needed. Managing disks individually with Ceph simplifies management and reduces potential points of failure.&lt;br /&gt;
&lt;br /&gt;
== How to Install a New SSD ==&lt;br /&gt;
=== Physically Install the SSD ===&lt;br /&gt;
Create a maintenance mode in Zabbix for the server&lt;br /&gt;
Check the slot number of the empty drive bays in iDRAC (or check whether the SSD is blinking and whether it is active)&lt;br /&gt;
Attach the SSD to the caddy&lt;br /&gt;
Pull out the empty drive bay&lt;br /&gt;
Physically install the SSD&lt;br /&gt;
&lt;br /&gt;
=== Verify the SSD is Recognized by the System ===&lt;br /&gt;
Log in to Proxmox and run `lsblk` to list all the disks and verify that the new SSD is recognized&lt;br /&gt;
&lt;br /&gt;
=== Prepare the SSD for Ceph ===&lt;br /&gt;
Install necessary packages &lt;br /&gt;
```apt-get update&lt;br /&gt;
apt-get install ceph ceph-osd```&lt;br /&gt;
Create and Add the OSD: `pveceph createosd /dev/sdX`&lt;br /&gt;
Use command `ceph osd tree` to make sure the new OSD is added&lt;br /&gt;
Check the Ceph status: `ceph -s`&lt;br /&gt;
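Taken together, the preparation steps above amount to the following sketch (the device name /dev/sdX is a placeholder for the new SSD):&lt;br /&gt;

```shell
# Make sure the required Ceph packages are present
apt-get update
apt-get install ceph ceph-osd

# Create the OSD on the new SSD and verify it joined the cluster
pveceph createosd /dev/sdX
ceph osd tree
ceph -s
```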
&lt;br /&gt;
=== Change the Raid configuration ===&lt;br /&gt;
&lt;br /&gt;
Most people change the RAID configuration from within the OS, which avoids downtime. We don&#039;t have the configuration utility installed, so we generally configure the RAID through the iDRAC or the boot utility.&lt;br /&gt;
&lt;br /&gt;
=== This is what we did before ===&lt;br /&gt;
Before rebooting the server in Proxmox, create the OSD and check &#039;ceph pg dump | grep remapped&#039; to make sure there is no issue (no remapped PGs) and that we have multiple copies in Scorpion; then we can proceed to reboot the server&lt;br /&gt;
Reboot the server to put the SSD in Non-RAID mode (we did this, but it may not be the best approach)&lt;br /&gt;
After rebooting we expect the SSD to be detected; check in iDRAC whether the SSD is blinking&lt;br /&gt;
Go to the Lifecycle Controller and boot the server; the SSD should appear in the RAID controller&lt;br /&gt;
Check that the SSD shows up in the system diagnostics and make sure it is recognized by the server&lt;br /&gt;
Make sure Ceph is healthy&lt;/div&gt;</summary>
		<author><name>Tannaz</name></author>
	</entry>
	<entry>
		<id>https://docs.delftsolutions.nl/index.php?title=Replace/Install_a_new_SSD_in_Server&amp;diff=452</id>
		<title>Replace/Install a new SSD in Server</title>
		<link rel="alternate" type="text/html" href="https://docs.delftsolutions.nl/index.php?title=Replace/Install_a_new_SSD_in_Server&amp;diff=452"/>
		<updated>2024-09-02T16:13:00Z</updated>

		<summary type="html">&lt;p&gt;Tannaz: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;We use Ceph, so RAID is unnecessary for our installation, for these reasons:&lt;br /&gt;
&lt;br /&gt;
* Built-in Redundancy: Ceph provides data replication and redundancy across multiple OSDs (Object Storage Daemons) in the cluster. It automatically replicates data to ensure availability, so RAID&#039;s redundancy is redundant.&lt;br /&gt;
* Performance Impact: RAID can introduce additional latency and reduce the performance of SSDs. Ceph is designed to work efficiently with raw disks, and adding RAID can slow down operations.&lt;br /&gt;
* Wasted Resources: Using RAID means you&#039;re dedicating some of your disk capacity to redundancy (like RAID 1 mirroring). Ceph already replicates data across multiple disks or nodes, so this would lead to unnecessary resource usage.&lt;br /&gt;
* Complexity: RAID adds another layer of complexity that isn&#039;t needed. Managing disks individually with Ceph simplifies management and reduces potential points of failure.&lt;br /&gt;
&lt;br /&gt;
== How to Install a New SSD ==&lt;br /&gt;
=== Physically Install the SSD ===&lt;br /&gt;
Create a maintenance mode in Zabbix for the server&lt;br /&gt;
Check the slot number of the empty drive bays in iDRAC (or check whether the SSD is blinking and whether it is active)&lt;br /&gt;
Attach the SSD to the caddy&lt;br /&gt;
Pull out the empty drive bay&lt;br /&gt;
Physically install the SSD&lt;br /&gt;
&lt;br /&gt;
=== Verify the SSD is Recognized by the System ===&lt;br /&gt;
Log in to Proxmox and run `lsblk` to list all the disks and verify that the new SSD is recognized&lt;br /&gt;
&lt;br /&gt;
=== Prepare the SSD for Ceph ===&lt;br /&gt;
Install necessary packages &lt;br /&gt;
```apt-get update&lt;br /&gt;
apt-get install ceph ceph-osd```&lt;br /&gt;
Create and Add the OSD: `pveceph createosd /dev/sdX`&lt;br /&gt;
Use command `ceph osd tree` to make sure the new OSD is added&lt;br /&gt;
Check the Ceph status: `ceph -s`&lt;br /&gt;
&lt;br /&gt;
Before rebooting the server in Proxmox, create the OSD and check &#039;ceph pg dump | grep remapped&#039; to make sure there is no issue (no remapped PGs) and that we have multiple copies in Scorpion; then we can proceed to reboot the server&lt;br /&gt;
Reboot the server to put the SSD in Non-RAID mode (we did this, but it may not be the best approach)&lt;br /&gt;
After rebooting we expect the SSD to be detected; check in iDRAC whether the SSD is blinking&lt;br /&gt;
Go to the Lifecycle Controller and boot the server; the SSD should appear in the RAID controller&lt;br /&gt;
Check that the SSD shows up in the system diagnostics and make sure it is recognized by the server&lt;br /&gt;
Make sure Ceph is healthy&lt;/div&gt;</summary>
		<author><name>Tannaz</name></author>
	</entry>
	<entry>
		<id>https://docs.delftsolutions.nl/index.php?title=Replace/Install_a_new_SSD_in_Server&amp;diff=451</id>
		<title>Replace/Install a new SSD in Server</title>
		<link rel="alternate" type="text/html" href="https://docs.delftsolutions.nl/index.php?title=Replace/Install_a_new_SSD_in_Server&amp;diff=451"/>
		<updated>2024-09-02T16:12:48Z</updated>

		<summary type="html">&lt;p&gt;Tannaz: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;We use Ceph, so RAID is unnecessary for our installation, for these reasons:&lt;br /&gt;
&lt;br /&gt;
* Built-in Redundancy: Ceph provides data replication and redundancy across multiple OSDs (Object Storage Daemons) in the cluster. It automatically replicates data to ensure availability, so RAID&#039;s redundancy is redundant.&lt;br /&gt;
* Performance Impact: RAID can introduce additional latency and reduce the performance of SSDs. Ceph is designed to work efficiently with raw disks, and adding RAID can slow down operations.&lt;br /&gt;
* Wasted Resources: Using RAID means you&#039;re dedicating some of your disk capacity to redundancy (like RAID 1 mirroring). Ceph already replicates data across multiple disks or nodes, so this would lead to unnecessary resource usage.&lt;br /&gt;
* Complexity: RAID adds another layer of complexity that isn&#039;t needed. Managing disks individually with Ceph simplifies management and reduces potential points of failure.&lt;br /&gt;
&lt;br /&gt;
== How to Install a New SSD ==&lt;br /&gt;
=== Physically Install the SSD ===&lt;br /&gt;
Create a maintenance mode in Zabbix for the server&lt;br /&gt;
Check the slot number of the empty drive bays in iDRAC (or check whether the SSD is blinking and whether it is active)&lt;br /&gt;
Attach the SSD to the caddy&lt;br /&gt;
Pull out the empty drive bay&lt;br /&gt;
Physically install the SSD&lt;br /&gt;
&lt;br /&gt;
=== Verify the SSD is Recognized by the System ===&lt;br /&gt;
Log in to Proxmox and run `lsblk` to list all the disks and verify that the new SSD is recognized&lt;br /&gt;
&lt;br /&gt;
=== Prepare the SSD for Ceph ===&lt;br /&gt;
Install necessary packages &lt;br /&gt;
```apt-get update&lt;br /&gt;
apt-get install ceph ceph-osd```&lt;br /&gt;
Create and Add the OSD: `pveceph createosd /dev/sdX`&lt;br /&gt;
Use command `ceph osd tree` to make sure the new OSD is added&lt;br /&gt;
Check the Ceph status: `ceph -s`&lt;br /&gt;
&lt;br /&gt;
Before rebooting the server in Proxmox, create the OSD and check &#039;ceph pg dump | grep remapped&#039; to make sure there is no issue (no remapped PGs) and that we have multiple copies in Scorpion; then we can proceed to reboot the server&lt;br /&gt;
Reboot the server to put the SSD in Non-RAID mode (we did this, but it may not be the best approach)&lt;br /&gt;
After rebooting we expect the SSD to be detected; check in iDRAC whether the SSD is blinking&lt;br /&gt;
Go to the Lifecycle Controller and boot the server; the SSD should appear in the RAID controller&lt;br /&gt;
Check that the SSD shows up in the system diagnostics and make sure it is recognized by the server&lt;br /&gt;
Make sure Ceph is healthy&lt;/div&gt;</summary>
		<author><name>Tannaz</name></author>
	</entry>
	<entry>
		<id>https://docs.delftsolutions.nl/index.php?title=Replace/Install_a_new_SSD_in_Server&amp;diff=450</id>
		<title>Replace/Install a new SSD in Server</title>
		<link rel="alternate" type="text/html" href="https://docs.delftsolutions.nl/index.php?title=Replace/Install_a_new_SSD_in_Server&amp;diff=450"/>
		<updated>2024-09-02T15:14:30Z</updated>

		<summary type="html">&lt;p&gt;Tannaz: Created page with &amp;quot;We use Ceph so RAID is unnecessary for our installment.  Ceph provides data replication and redundancy across multiple OSDs (Object Storage Daemons) in the cluster. It automatically replicates data to ensure availability, so RAID&amp;#039;s redundancy is redundant.&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;We use Ceph, so RAID is unnecessary for our installation. &lt;br /&gt;
Ceph provides data replication and redundancy across multiple OSDs (Object Storage Daemons) in the cluster. It automatically replicates data to ensure availability, so RAID&#039;s redundancy is redundant.&lt;/div&gt;</summary>
		<author><name>Tannaz</name></author>
	</entry>
	<entry>
		<id>https://docs.delftsolutions.nl/index.php?title=Internal&amp;diff=449</id>
		<title>Internal</title>
		<link rel="alternate" type="text/html" href="https://docs.delftsolutions.nl/index.php?title=Internal&amp;diff=449"/>
		<updated>2024-09-02T15:08:22Z</updated>

		<summary type="html">&lt;p&gt;Tannaz: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Finance ==&lt;br /&gt;
&lt;br /&gt;
=== Exact ===&lt;br /&gt;
&lt;br /&gt;
* [[booking bonus|Booking bonus]]&lt;br /&gt;
* [[booking wages|Booking wages]]&lt;br /&gt;
* [[booking quarterly hosting invoice|Booking quarterly hosting invoice]]&lt;br /&gt;
* [[new receipt|Enter a new receipt]]&lt;br /&gt;
* [[reconciliation|Reconciliation of transaction]]&lt;br /&gt;
* [[invoicing|Send an invoice]]&lt;br /&gt;
* [[payment reminders|Send payment reminder]]&lt;br /&gt;
* [[invoice approval|Process for approving invoices (/filed receipts)]]&lt;br /&gt;
&lt;br /&gt;
=== Bunq ===&lt;br /&gt;
&lt;br /&gt;
* [[top up account|Top up expense account]]&lt;br /&gt;
&lt;br /&gt;
== Work Process ==&lt;br /&gt;
&lt;br /&gt;
* [[Definition of done|Definition of Done]]&lt;br /&gt;
* [[Incident Handling|Incident Handling]]&lt;br /&gt;
* [[SRE Maintenance|SRE Maintenance]]&lt;br /&gt;
&lt;br /&gt;
== Internal Process ==&lt;br /&gt;
* [[timetracking|Timetracking process]]&lt;br /&gt;
* [[Starting work for a new client]]&lt;br /&gt;
* [[12 percent|12% time]]&lt;br /&gt;
* [[Annual leave|Annual leave]]&lt;br /&gt;
* [[Bonus allocation|Bonus allocation]]&lt;br /&gt;
* [[Calamity leave|Calamity leave]]&lt;br /&gt;
* [[Overtime|Overtime]]&lt;br /&gt;
* [[Retrospectives|Retrospectives]]&lt;br /&gt;
* [[Sick leave|Sick leave]]&lt;br /&gt;
* [[Training and self-study|Training and Self-Study]]&lt;br /&gt;
* [[Daily|Daily]]&lt;br /&gt;
&lt;br /&gt;
== Projects ==&lt;br /&gt;
&lt;br /&gt;
* Era Inventory [[project_era_inventory_api|API Description]]&lt;br /&gt;
&lt;br /&gt;
== SRE ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;To be further populated with guides from the drive&#039;&#039;&lt;br /&gt;
* [[create gitlab runner host|Create a GitLab runner host]]&lt;br /&gt;
* [[vm setup|Create a (Debian) VM]]&lt;br /&gt;
* [[border update|Process for updating a border]]&lt;br /&gt;
* [[border reboot|Reboot border without downtime]]&lt;br /&gt;
* [[WS Proxmox node reboot|Reboot WS Proxmox node without downtime]]&lt;br /&gt;
* [[Resize VM Disk]]&lt;br /&gt;
* [[SRE tools]]&lt;br /&gt;
* [[Enroll Mac in Kerberos]]&lt;br /&gt;
* [[Creating a VM on Hetzner]]&lt;br /&gt;
* [[Rebooting VM]]&lt;br /&gt;
* [[Rebooting Offsite]]&lt;br /&gt;
* [[ssh-fingerprints|Verifying SSH fingerprints]]&lt;br /&gt;
* [[Removing VM]]&lt;br /&gt;
* [[Replace/Install a new SSD in Server]]&lt;br /&gt;
&lt;br /&gt;
== Other ==&lt;br /&gt;
&lt;br /&gt;
* [[stack|Greenfield stack]]&lt;br /&gt;
* [[standard tools|Standard Tools]]&lt;br /&gt;
* [[list of unfurl debuggers|List of unfurl debuggers]]&lt;br /&gt;
* [[Recommended suppliers]]&lt;/div&gt;</summary>
		<author><name>Tannaz</name></author>
	</entry>
	<entry>
		<id>https://docs.delftsolutions.nl/index.php?title=WS_Proxmox_node_reboot&amp;diff=417</id>
		<title>WS Proxmox node reboot</title>
		<link rel="alternate" type="text/html" href="https://docs.delftsolutions.nl/index.php?title=WS_Proxmox_node_reboot&amp;diff=417"/>
		<updated>2024-08-12T13:35:19Z</updated>

		<summary type="html">&lt;p&gt;Tannaz: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Tips &amp;amp; Notes ==&lt;br /&gt;
* If you&#039;re expecting to reboot every node in the cluster, do the node with the containers last, to limit the amount of downtime and reboots for them&lt;br /&gt;
* Updating a node: `apt update` and `apt full-upgrade`&lt;br /&gt;
* Make sure all VMs are actually migratable before adding to a HA group&lt;br /&gt;
* If there are containers on the device you are looking to reboot, you will also need to create a maintenance mode to cover them (for example teamspeak or awstats)&lt;br /&gt;
* Containers will inherit the OS of their host, so you will also need to handle triggers related to their OS updating, where appropriate&lt;br /&gt;
== Pre-Work ==&lt;br /&gt;
* If a VM or container is going to incur downtime, you must let the affected parties know in advance. Ideally they should be informed the previous day.  &lt;br /&gt;
&lt;br /&gt;
== Pre flight checks ==&lt;br /&gt;
* Check all Ceph pools are running on at least 3/2 replication&lt;br /&gt;
* Check that all running VMs on the node you want to reboot are in HA (if not, add them or migrate them away manually)&lt;br /&gt;
* Check that Ceph is healthy -&amp;gt; no remapped PGs or degraded data redundancy&lt;br /&gt;
* You have communicated that downtime is expected to the users who will be affected (Ideally one day in advance)&lt;br /&gt;
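The replication check in the list above can be done from any cluster node; `size 3` / `min_size 2` in the output corresponds to 3/2 replication (a sketch, assuming the ceph CLI is available):&lt;br /&gt;

```shell
# Show size/min_size for every pool; expect size 3, min_size 2
ceph osd pool ls detail

# Confirm there are no remapped PGs or degraded redundancy
ceph health detail
```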
&lt;br /&gt;
== Update Process ==&lt;br /&gt;
* Update the node: `apt update` and `apt full-upgrade`&lt;br /&gt;
* Check that the packages being removed/updated/installed are correct and make sense&lt;br /&gt;
&lt;br /&gt;
== Reboot process ==&lt;br /&gt;
* Start maintenance mode for the Proxmox node and any containers running on the node&lt;br /&gt;
* Start maintenance mode for Ceph; specify that we only want to suppress the trigger for the health state being in warning by setting tag `ceph_health` equals `warning`&lt;br /&gt;
* Let affected parties know that the maintenance period you told them about in the pre-flight checks is about to take place.&lt;br /&gt;
[[File:Ceph-maintenance.png|thumb]]&lt;br /&gt;
* Set noout flag on host: `ceph osd set-group noout &amp;lt;node&amp;gt;`&lt;br /&gt;
&lt;br /&gt;
# Gain SSH access to the host&lt;br /&gt;
# Log in through IPA&lt;br /&gt;
# Run the command&lt;br /&gt;
* &#039;&#039;&#039;Reboot&#039;&#039;&#039; node through web GUI&lt;br /&gt;
* Wait for node to come back up&lt;br /&gt;
* Wait for the OSDs to be back online&lt;br /&gt;
* Remove the noout flag on the host (same procedure as setting it): `ceph osd unset-group noout &amp;lt;node&amp;gt;`&lt;br /&gt;
* If a kernel update was done, manually execute the `Operating system` item to detect the update. Manually executing the two items that indicate a reboot is also useful if they were firing, to stop them and check that no further reboots are needed.&lt;br /&gt;
* Acknowledge &amp;amp; close triggers&lt;br /&gt;
* Remove maintenance modes&lt;br /&gt;
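The Ceph side of the reboot steps above can be collected into a single console sketch (assuming a Proxmox node with the Ceph CLI available; `pve2` is a hypothetical node name standing in for `&amp;lt;node&amp;gt;`):

```
# Before the reboot: keep CRUSH from marking the OSDs on this node out
ceph osd set-group noout pve2

# Reboot via the web GUI, then watch for the node and its OSDs to return
ceph osd tree

# Once all OSDs show as up again: restore normal rebalancing behaviour
ceph osd unset-group noout pve2
```

This only collects commands already listed above; the maintenance-mode and trigger handling still happens in Zabbix.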
&lt;br /&gt;
== Aftercare ==&lt;br /&gt;
* Ensure that the Kaboom API is running on Screwdriver or Paloma, so the VMs get the best performance.&lt;/div&gt;</summary>
		<author><name>Tannaz</name></author>
	</entry>
	<entry>
		<id>https://docs.delftsolutions.nl/index.php?title=WS_Proxmox_node_reboot&amp;diff=416</id>
		<title>WS Proxmox node reboot</title>
		<link rel="alternate" type="text/html" href="https://docs.delftsolutions.nl/index.php?title=WS_Proxmox_node_reboot&amp;diff=416"/>
		<updated>2024-08-12T13:28:10Z</updated>

		<summary type="html">&lt;p&gt;Tannaz: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Tips &amp;amp; Notes ==&lt;br /&gt;
* If you&#039;re expecting to reboot every node in the cluster, do the node with the containers last, to limit the amount of downtime and reboots for them&lt;br /&gt;
* Updating a node: `apt update` and `apt full-upgrade`&lt;br /&gt;
* Make sure all VMs are actually migratable before adding to a HA group&lt;br /&gt;
* If there are containers on the device you are looking to reboot, you will also need to create a maintenance mode to cover them (for example teamspeak or awstats)&lt;br /&gt;
* Containers will inherit the OS of their host, so you will also need to handle triggers related to their OS updating, where appropriate&lt;br /&gt;
== Pre-Work ==&lt;br /&gt;
* If a VM or container is going to incur downtime, you must let the affected parties know in advance. Ideally they should be informed the previous day.  &lt;br /&gt;
&lt;br /&gt;
== Pre-flight checks ==&lt;br /&gt;
* Check that all Ceph pools are running on at least 3/2 replication&lt;br /&gt;
* Check that all running VMs on the node you want to reboot are in HA (if not, add them or migrate them away manually)&lt;br /&gt;
* Check that Ceph is healthy -&amp;gt; no remapped PGs or degraded data redundancy&lt;br /&gt;
* Check that you have communicated the expected downtime to the users who will be affected (ideally one day in advance)&lt;br /&gt;
&lt;br /&gt;
== Update Process ==&lt;br /&gt;
* Update the node: `apt update` and `apt full-upgrade`&lt;br /&gt;
* Check that the list of packages to be removed/updated/installed is correct and makes sense&lt;br /&gt;
&lt;br /&gt;
== Reboot process ==&lt;br /&gt;
* Start maintenance mode for the Proxmox node and any containers running on the node&lt;br /&gt;
* Start maintenance mode for Ceph; to suppress only the trigger for the health state being in warning, set the tag `ceph_health` equal to `warning`&lt;br /&gt;
* Let affected parties know that the maintenance period you told them about in the pre-flight checks is about to take place.&lt;br /&gt;
[[File:Ceph-maintenance.png|thumb]]&lt;br /&gt;
* Set the noout flag on the host: `ceph osd set-group noout &amp;lt;node&amp;gt;`. To do this:&lt;br /&gt;
&lt;br /&gt;
# Gain SSH access to the host&lt;br /&gt;
# Log in through IPA&lt;br /&gt;
# Run the command&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Reboot&#039;&#039;&#039; node through web GUI&lt;br /&gt;
* Wait for node to come back up&lt;br /&gt;
* Wait for the OSDs to be back online&lt;br /&gt;
* Remove the noout flag on the host (same procedure as setting it): `ceph osd unset-group noout &amp;lt;node&amp;gt;`&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* If a kernel update was done, manually execute the `Operating system` item to detect the update. Manually executing the two items that indicate a reboot is also useful if they were firing, to stop them and check that no further reboots are needed.&lt;br /&gt;
* Acknowledge &amp;amp; close triggers&lt;br /&gt;
* Remove maintenance modes&lt;br /&gt;
&lt;br /&gt;
== Aftercare ==&lt;br /&gt;
* Ensure that the Kaboom API is running on Screwdriver or Paloma, so the VMs get the best performance.&lt;/div&gt;</summary>
		<author><name>Tannaz</name></author>
	</entry>
	<entry>
		<id>https://docs.delftsolutions.nl/index.php?title=WS_Proxmox_node_reboot&amp;diff=415</id>
		<title>WS Proxmox node reboot</title>
		<link rel="alternate" type="text/html" href="https://docs.delftsolutions.nl/index.php?title=WS_Proxmox_node_reboot&amp;diff=415"/>
		<updated>2024-08-12T13:24:29Z</updated>

		<summary type="html">&lt;p&gt;Tannaz: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Tips &amp;amp; Notes ==&lt;br /&gt;
* If you&#039;re expecting to reboot every node in the cluster, do the node with the containers last, to limit the amount of downtime and reboots for them&lt;br /&gt;
* Updating a node: `apt update` and `apt full-upgrade`&lt;br /&gt;
* Make sure all VMs are actually migratable before adding to a HA group&lt;br /&gt;
* If there are containers on the device you are looking to reboot, you will also need to create a maintenance mode to cover them (for example teamspeak or awstats)&lt;br /&gt;
* Containers will inherit the OS of their host, so you will also need to handle triggers related to their OS updating, where appropriate&lt;br /&gt;
== Pre-Work ==&lt;br /&gt;
* If a VM or container is going to incur downtime, you must let the affected parties know in advance. Ideally they should be informed the previous day.  &lt;br /&gt;
&lt;br /&gt;
== Pre-flight checks ==&lt;br /&gt;
* Check that all Ceph pools are running on at least 3/2 replication&lt;br /&gt;
* Check that all running VMs on the node you want to reboot are in HA (if not, add them or migrate them away manually)&lt;br /&gt;
* Check that Ceph is healthy -&amp;gt; no remapped PGs or degraded data redundancy&lt;br /&gt;
* Check that you have communicated the expected downtime to the users who will be affected (ideally one day in advance)&lt;br /&gt;
&lt;br /&gt;
== Update Process ==&lt;br /&gt;
* Update the node: `apt update` and `apt full-upgrade`&lt;br /&gt;
* Check that the list of packages to be removed/updated/installed is correct and makes sense&lt;br /&gt;
&lt;br /&gt;
== Reboot process ==&lt;br /&gt;
* Start maintenance mode for the Proxmox node and any containers running on the node&lt;br /&gt;
* Start maintenance mode for Ceph; to suppress only the trigger for the health state being in warning, set the tag `ceph_health` equal to `warning`&lt;br /&gt;
* Let affected parties know that the maintenance period you told them about in the pre-flight checks is about to take place.&lt;br /&gt;
[[File:Ceph-maintenance.png|thumb]]&lt;br /&gt;
* Set noout flag on host: `ceph osd set-group noout &amp;lt;node&amp;gt;`&lt;br /&gt;
* &#039;&#039;&#039;Reboot&#039;&#039;&#039; node through web GUI&lt;br /&gt;
* Wait for node to come back up&lt;br /&gt;
* Wait for the OSDs to be back online&lt;br /&gt;
* Remove the noout flag on the host: `ceph osd unset-group noout &amp;lt;node&amp;gt;`. To do this:&lt;br /&gt;
&lt;br /&gt;
# Gain SSH access to the host&lt;br /&gt;
# Log in through IPA&lt;br /&gt;
# Run the command&lt;br /&gt;
&lt;br /&gt;
* If a kernel update was done, manually execute the `Operating system` item to detect the update. Manually executing the two items that indicate a reboot is also useful if they were firing, to stop them and check that no further reboots are needed.&lt;br /&gt;
* Acknowledge &amp;amp; close triggers&lt;br /&gt;
* Remove maintenance modes&lt;br /&gt;
&lt;br /&gt;
== Aftercare ==&lt;br /&gt;
* Ensure that the Kaboom API is running on Screwdriver or Paloma, so the VMs get the best performance.&lt;/div&gt;</summary>
		<author><name>Tannaz</name></author>
	</entry>
	<entry>
		<id>https://docs.delftsolutions.nl/index.php?title=Rebooting_Offsite&amp;diff=414</id>
		<title>Rebooting Offsite</title>
		<link rel="alternate" type="text/html" href="https://docs.delftsolutions.nl/index.php?title=Rebooting_Offsite&amp;diff=414"/>
		<updated>2024-08-09T15:28:21Z</updated>

		<summary type="html">&lt;p&gt;Tannaz: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Pre-flight checks =&lt;br /&gt;
* Access to the Web GUI&lt;br /&gt;
* Access to the terminal through SSH&lt;br /&gt;
* Zulip and GitLab are running on Offsite, so the reboot first needs to be announced on the Organisational channel, to make sure everyone has read and agreed to it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Rebooting =&lt;br /&gt;
# Make sure all the pre-flight checks are done&lt;br /&gt;
# Make a schedule for the reboot.&lt;br /&gt;
# Create a maintenance period on Zabbix covering all the VMs running on Offsite, as well as Offsite itself&lt;br /&gt;
# Make sure the maintenance is active (yellow icon)&lt;br /&gt;
# Update the packages through the terminal: `apt update` and `apt upgrade`&lt;br /&gt;
# Check that the packages to be removed, upgraded and installed are correct&lt;br /&gt;
# Click the reboot button for Offsite in the web GUI&lt;br /&gt;
# Make sure Offsite is actually shutting down&lt;br /&gt;
# Wait till it comes online again&lt;br /&gt;
# On Zabbix, go to &#039;Configuration&#039; -&amp;gt; &#039;all the vms&#039;, and click on &#039;Items&#039; for the rebooted VMs.&lt;br /&gt;
# Search for &#039;reboot&#039;, select the checkbox for both items and click on &#039;Execute now&#039;.&lt;br /&gt;
# Search for &#039;operating&#039;, select the checkbox for the item that has a trigger that uses it, and select &#039;Execute now&#039;.&lt;br /&gt;
# Go to &#039;Monitoring&#039; -&amp;gt; &#039;Problems&#039;. Check the checkbox for the triggers that fired for this host. Expected are &#039;&amp;lt;host&amp;gt; has been restarted&#039; and &#039;Operating system description has changed&#039;. Scroll to the bottom and click on &#039;Mass update&#039;. In the modal, check the checkboxes for &#039;close problem&#039; and &#039;acknowledge&#039;, and in the &#039;message&#039; box write the reason the host was rebooted, for example &#039;host was rebooted due to kernel update&#039;&lt;br /&gt;
# Remove the maintenance period from the VM&lt;/div&gt;</summary>
		<author><name>Tannaz</name></author>
	</entry>
	<entry>
		<id>https://docs.delftsolutions.nl/index.php?title=Rebooting_Offsite&amp;diff=413</id>
		<title>Rebooting Offsite</title>
		<link rel="alternate" type="text/html" href="https://docs.delftsolutions.nl/index.php?title=Rebooting_Offsite&amp;diff=413"/>
		<updated>2024-08-09T15:21:50Z</updated>

		<summary type="html">&lt;p&gt;Tannaz: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Pre-flight checks =&lt;br /&gt;
* Access to the Web GUI&lt;br /&gt;
* Access to the terminal through SSH&lt;br /&gt;
* Zulip and GitLab are running on Offsite, so the reboot first needs to be announced on the Organisational channel, to make sure everyone has read and agreed to it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Rebooting =&lt;br /&gt;
# Make sure all the pre-flight checks are done&lt;br /&gt;
# Make a schedule for the reboot.&lt;br /&gt;
# Create a maintenance period on Zabbix covering all the VMs running on Offsite, as well as Offsite itself&lt;br /&gt;
# Make sure the maintenance is active (yellow icon)&lt;br /&gt;
# Update the packages through the terminal: `apt update` and `apt upgrade`&lt;br /&gt;
# Check that the packages to be removed, upgraded and installed are correct&lt;br /&gt;
# Click the reboot button for Offsite in the web GUI&lt;br /&gt;
# Check if it is actually shut down&lt;br /&gt;
# Wait till it comes online again&lt;br /&gt;
# On Zabbix, go to &#039;Configuration&#039; -&amp;gt; &#039;all the vms&#039;, and click on &#039;Items&#039; for the rebooted VMs.&lt;br /&gt;
# Search for &#039;reboot&#039;, select the checkbox for both items and click on &#039;Execute now&#039;.&lt;br /&gt;
# Search for &#039;operating&#039;, select the checkbox for the item that has a trigger that uses it, and select &#039;Execute now&#039;.&lt;br /&gt;
# Go to &#039;Monitoring&#039; -&amp;gt; &#039;Problems&#039;. Check the checkbox for the triggers that fired for this host. Expected are &#039;&amp;lt;host&amp;gt; has been restarted&#039; and &#039;Operating system description has changed&#039;. Scroll to the bottom and click on &#039;Mass update&#039;. In the modal, check the checkboxes for &#039;close problem&#039; and &#039;acknowledge&#039;, and in the &#039;message&#039; box write the reason the host was rebooted, for example &#039;host was rebooted due to kernel update&#039;&lt;br /&gt;
# Remove the maintenance period from the VM&lt;/div&gt;</summary>
		<author><name>Tannaz</name></author>
	</entry>
	<entry>
		<id>https://docs.delftsolutions.nl/index.php?title=Rebooting_Offsite&amp;diff=412</id>
		<title>Rebooting Offsite</title>
		<link rel="alternate" type="text/html" href="https://docs.delftsolutions.nl/index.php?title=Rebooting_Offsite&amp;diff=412"/>
		<updated>2024-08-09T15:19:51Z</updated>

		<summary type="html">&lt;p&gt;Tannaz: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Pre-flight checks =&lt;br /&gt;
* Access to the Web GUI&lt;br /&gt;
* Access to the terminal through SSH&lt;br /&gt;
* Zulip and GitLab are running on Offsite, so the reboot first needs to be announced on the Organisational channel, to make sure everyone has read and agreed to it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Rebooting =&lt;br /&gt;
# Make sure all the pre-flight checks are done&lt;br /&gt;
# Make a schedule for the reboot.&lt;br /&gt;
# Create a maintenance period on Zabbix covering all the VMs running on Offsite, as well as Offsite itself&lt;br /&gt;
# Make sure the maintenance is active (yellow icon)&lt;br /&gt;
# Update the packages through the terminal: `apt update` and `apt upgrade`&lt;br /&gt;
# Check that the packages to be removed, upgraded and installed are correct&lt;br /&gt;
# Click the reboot button for Offsite in the web GUI&lt;br /&gt;
# Check if it is actually shut down&lt;br /&gt;
# Wait till it comes online again&lt;br /&gt;
# On Zabbix, go to &#039;Configuration&#039; -&amp;gt; &#039;all the vms&#039;, and click on &#039;Items&#039; for the rebooted VMs.&lt;br /&gt;
# Search for &#039;reboot&#039;, select the checkbox for both items and click on &#039;Execute now&#039;.&lt;br /&gt;
# Search for &#039;operating&#039;, select the checkbox for the item that has a trigger that uses it, and select &#039;Execute now&#039;.&lt;br /&gt;
# Go to &#039;Monitoring&#039; -&amp;gt; &#039;Problems&#039;. Check the checkbox for the triggers that fired for this host. Expected are &#039;&amp;lt;host&amp;gt; has been restarted&#039; and &#039;Operating system description has changed&#039;. Scroll to the bottom and click on &#039;Mass update&#039;. In the modal, check the checkboxes for &#039;close problem&#039; and &#039;acknowledge&#039;, and in the &#039;message&#039; box write the reason the host was rebooted, for example &#039;host was rebooted due to kernel update&#039;&lt;br /&gt;
# Remove the maintenance period from the VM&lt;/div&gt;</summary>
		<author><name>Tannaz</name></author>
	</entry>
	<entry>
		<id>https://docs.delftsolutions.nl/index.php?title=Rebooting_Offsite&amp;diff=411</id>
		<title>Rebooting Offsite</title>
		<link rel="alternate" type="text/html" href="https://docs.delftsolutions.nl/index.php?title=Rebooting_Offsite&amp;diff=411"/>
		<updated>2024-08-08T11:46:44Z</updated>

		<summary type="html">&lt;p&gt;Tannaz: Created page with &amp;quot;= Pre-flight checks = * Access to the Web GUI * Access to the terminal through SSH * Zulip and Gitlab are running on it, so needed to communicate first on Organisational channel, to make sure everyone has read and agreed on it.   = Rebooting = # Make sure all the pre flight checks are done # Make an schedule for doing the rebootng. # Create a maintenance period on Zabbix with all the vms are running on Offsite and Offsite itself # Update the packages through terminal `ap...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Pre-flight checks =&lt;br /&gt;
* Access to the Web GUI&lt;br /&gt;
* Access to the terminal through SSH&lt;br /&gt;
* Zulip and GitLab are running on Offsite, so the reboot first needs to be announced on the Organisational channel, to make sure everyone has read and agreed to it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Rebooting =&lt;br /&gt;
# Make sure all the pre-flight checks are done&lt;br /&gt;
# Make a schedule for the reboot.&lt;br /&gt;
# Create a maintenance period on Zabbix covering all the VMs running on Offsite, as well as Offsite itself&lt;br /&gt;
# Update the packages through the terminal: `apt update` and `apt upgrade`&lt;br /&gt;
# Check that the packages to be removed, upgraded and installed are correct&lt;br /&gt;
# Click the reboot button for Offsite in the web GUI&lt;br /&gt;
# Check if it is actually shut down&lt;br /&gt;
# Wait till it comes online again&lt;br /&gt;
# On Zabbix, go to &#039;Configuration&#039; -&amp;gt; &#039;all the vms&#039;, and click on &#039;Items&#039; for the rebooted VMs.&lt;br /&gt;
# Search for &#039;reboot&#039;, select the checkbox for both items and click on &#039;Execute now&#039;.&lt;br /&gt;
# Search for &#039;operating&#039;, select the checkbox for the item that has a trigger that uses it, and select &#039;Execute now&#039;.&lt;br /&gt;
# Go to &#039;Monitoring&#039; -&amp;gt; &#039;Problems&#039;. Check the checkbox for the triggers that fired for this host. Expected are &#039;&amp;lt;host&amp;gt; has been restarted&#039; and &#039;Operating system description has changed&#039;. Scroll to the bottom and click on &#039;Mass update&#039;. In the modal, check the checkboxes for &#039;close problem&#039; and &#039;acknowledge&#039;, and in the &#039;message&#039; box write the reason the host was rebooted, for example &#039;host was rebooted due to kernel update&#039;&lt;br /&gt;
# Remove the maintenance period from the VM&lt;/div&gt;</summary>
		<author><name>Tannaz</name></author>
	</entry>
	<entry>
		<id>https://docs.delftsolutions.nl/index.php?title=Internal&amp;diff=410</id>
		<title>Internal</title>
		<link rel="alternate" type="text/html" href="https://docs.delftsolutions.nl/index.php?title=Internal&amp;diff=410"/>
		<updated>2024-08-08T11:20:06Z</updated>

		<summary type="html">&lt;p&gt;Tannaz: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Finance ==&lt;br /&gt;
&lt;br /&gt;
=== Exact ===&lt;br /&gt;
&lt;br /&gt;
* [[booking bonus|Booking bonus]]&lt;br /&gt;
* [[booking wages|Booking wages]]&lt;br /&gt;
* [[booking quarterly hosting invoice|Booking quarterly hosting invoice]]&lt;br /&gt;
* [[new receipt|Enter a new receipt]]&lt;br /&gt;
* [[reconciliation|Reconciliation of transaction]]&lt;br /&gt;
* [[invoicing|Send an invoice]]&lt;br /&gt;
* [[payment reminders|Send payment reminder]]&lt;br /&gt;
* [[invoice approval|Process for approving invoices (/filed receipts)]]&lt;br /&gt;
&lt;br /&gt;
=== Bunq ===&lt;br /&gt;
&lt;br /&gt;
* [[top up account|Top up expense account]]&lt;br /&gt;
&lt;br /&gt;
== Work Process ==&lt;br /&gt;
&lt;br /&gt;
* [[Definition of done|Definition of Done]]&lt;br /&gt;
* [[Incident Handling|Incident Handling]]&lt;br /&gt;
* [[SRE Maintenance|SRE Maintenance]]&lt;br /&gt;
&lt;br /&gt;
== Internal Process ==&lt;br /&gt;
* [[timetracking|Timetracking process]]&lt;br /&gt;
* [[Starting work for a new client]]&lt;br /&gt;
* [[12 percent|12% time]]&lt;br /&gt;
* [[Annual leave|Annual leave]]&lt;br /&gt;
* [[Bonus allocation|Bonus allocation]]&lt;br /&gt;
* [[Calamity leave|Calamity leave]]&lt;br /&gt;
* [[Overtime|Overtime]]&lt;br /&gt;
* [[Retrospectives|Retrospectives]]&lt;br /&gt;
* [[Sick leave|Sick leave]]&lt;br /&gt;
* [[Training and self-study|Training and Self-Study]]&lt;br /&gt;
* [[Daily|Daily]]&lt;br /&gt;
&lt;br /&gt;
== Projects ==&lt;br /&gt;
&lt;br /&gt;
* Era Inventory [[project_era_inventory_api|API Description]]&lt;br /&gt;
&lt;br /&gt;
== SRE ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;To be further populated with guide from drive&#039;&#039;&lt;br /&gt;
* [[create gitlab runner host|Create a GitLab runner host]]&lt;br /&gt;
* [[vm setup|Create a (Debian) VM]]&lt;br /&gt;
* [[border update|Process for updating a border]]&lt;br /&gt;
* [[border reboot|Reboot border without downtime]]&lt;br /&gt;
* [[WS Proxmox node reboot|Reboot WS Proxmox node without downtime]]&lt;br /&gt;
* [[Resize VM Disk]]&lt;br /&gt;
* [[SRE tools]]&lt;br /&gt;
* [[Enroll Mac in Kerberos]]&lt;br /&gt;
* [[Creating a VM on Hetzner]]&lt;br /&gt;
* [[Rebooting VM]]&lt;br /&gt;
* [[Rebooting Offsite]]&lt;br /&gt;
&lt;br /&gt;
== Other ==&lt;br /&gt;
&lt;br /&gt;
* [[stack|Greenfield stack]]&lt;br /&gt;
* [[standard tools|Standard Tools]]&lt;br /&gt;
* [[list of unfurl debuggers|List of unfurl debuggers]]&lt;br /&gt;
* [[Recommended suppliers]]&lt;/div&gt;</summary>
		<author><name>Tannaz</name></author>
	</entry>
	<entry>
		<id>https://docs.delftsolutions.nl/index.php?title=Internal&amp;diff=385</id>
		<title>Internal</title>
		<link rel="alternate" type="text/html" href="https://docs.delftsolutions.nl/index.php?title=Internal&amp;diff=385"/>
		<updated>2024-07-17T10:16:04Z</updated>

		<summary type="html">&lt;p&gt;Tannaz: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Finance ==&lt;br /&gt;
&lt;br /&gt;
=== Exact ===&lt;br /&gt;
&lt;br /&gt;
* [[booking bonus|Booking bonus]]&lt;br /&gt;
* [[booking wages|Booking wages]]&lt;br /&gt;
* [[booking quarterly hosting invoice|Booking quarterly hosting invoice]]&lt;br /&gt;
* [[new receipt|Enter a new receipt]]&lt;br /&gt;
* [[reconciliation|Reconciliation of transaction]]&lt;br /&gt;
* [[invoicing|Send an invoice]]&lt;br /&gt;
* [[payment reminders|Send payment reminder]]&lt;br /&gt;
* [[invoice approval|Process for approving invoices (/filed receipts)]]&lt;br /&gt;
&lt;br /&gt;
=== Bunq ===&lt;br /&gt;
&lt;br /&gt;
* [[top up account|Top up expense account]]&lt;br /&gt;
&lt;br /&gt;
== Work Process ==&lt;br /&gt;
&lt;br /&gt;
* [[Definition of done|Definition of Done]]&lt;br /&gt;
* [[Incident Handling|Incident Handling]]&lt;br /&gt;
* [[SRE Maintenance|SRE Maintenance]]&lt;br /&gt;
&lt;br /&gt;
== Internal Process ==&lt;br /&gt;
* [[Starting work for a new client]]&lt;br /&gt;
* [[12 percent|12% time]]&lt;br /&gt;
* [[Annual leave|Annual leave]]&lt;br /&gt;
* [[Bonus allocation|Bonus allocation]]&lt;br /&gt;
* [[Calamity leave|Calamity leave]]&lt;br /&gt;
* [[Overtime|Overtime]]&lt;br /&gt;
* [[Retrospectives|Retrospectives]]&lt;br /&gt;
* [[Sick leave|Sick leave]]&lt;br /&gt;
* [[Training and self-study|Training and Self-Study]]&lt;br /&gt;
* [[Daily|Daily]]&lt;br /&gt;
&lt;br /&gt;
== Projects ==&lt;br /&gt;
&lt;br /&gt;
* Era Inventory [[project_era_inventory_api|API Description]]&lt;br /&gt;
&lt;br /&gt;
== SRE ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;To be further populated with guide from drive&#039;&#039;&lt;br /&gt;
* [[create gitlab runner host|Create a GitLab runner host]]&lt;br /&gt;
* [[vm setup|Create a (Debian) VM]]&lt;br /&gt;
* [[border update|Process for updating a border]]&lt;br /&gt;
* [[border reboot|Reboot border without downtime]]&lt;br /&gt;
* [[WS Proxmox node reboot|Reboot WS Proxmox node without downtime]]&lt;br /&gt;
* [[Resize VM Disk]]&lt;br /&gt;
* [[SRE tools]]&lt;br /&gt;
* [[Enroll Mac in Kerberos]]&lt;br /&gt;
* [[Creating a VM on Hetzner]]&lt;br /&gt;
* [[Rebooting VM]]&lt;br /&gt;
&lt;br /&gt;
== Other ==&lt;br /&gt;
&lt;br /&gt;
* [[stack|Greenfield stack]]&lt;br /&gt;
* [[standard tools|Standard Tools]]&lt;br /&gt;
* [[list of unfurl debuggers|List of unfurl debuggers]]&lt;br /&gt;
* [[Recommended suppliers]]&lt;/div&gt;</summary>
		<author><name>Tannaz</name></author>
	</entry>
	<entry>
		<id>https://docs.delftsolutions.nl/index.php?title=Border_reboot&amp;diff=383</id>
		<title>Border reboot</title>
		<link rel="alternate" type="text/html" href="https://docs.delftsolutions.nl/index.php?title=Border_reboot&amp;diff=383"/>
		<updated>2024-07-15T13:13:51Z</updated>

		<summary type="html">&lt;p&gt;Tannaz: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Note: Throughout this guide &amp;lt;ipv4&amp;gt; and &amp;lt;ipv6&amp;gt; are to be replaced by the correct IPs.&lt;br /&gt;
&lt;br /&gt;
== Preflight checks ==&lt;br /&gt;
These checks are to be done on the other border (So the border that will stay online). The commands are to be invoked in `vtysh`.&lt;br /&gt;
* Confirm our IPv4 block is announced over BGP with `show ip bgp neighbors &amp;lt;ipv4&amp;gt; advertised-routes`&lt;br /&gt;
* Confirm our IPv6 block is announced over BGP with `show bgp neighbors &amp;lt;ipv6&amp;gt; advertised-routes`&lt;br /&gt;
* Confirm that the border receives the ROUTED IPv4 routes from the router with `show ip route`&lt;br /&gt;
* Confirm that the border receives the ROUTED &amp;amp; LAN IPv6 routes from the router with `show ipv6 route`&lt;br /&gt;
* Set a maintenance period for the host on Zabbix.&lt;br /&gt;
* Post in Zulip in the relevant topic (the incident&#039;s topic / the &#039;SRE - General&#039; stream) that the border is going to be rebooted.&lt;br /&gt;
&lt;br /&gt;
== Disabling routing through a border ==&lt;br /&gt;
On a border in `vtysh`, update the running configuration by invoking the following:&lt;br /&gt;
&lt;br /&gt;
* config&lt;br /&gt;
* router bgp&lt;br /&gt;
* neighbor &amp;lt;ipv4&amp;gt; shutdown&lt;br /&gt;
* neighbor &amp;lt;ipv6&amp;gt; shutdown&lt;br /&gt;
* exit&lt;br /&gt;
* router ospf&lt;br /&gt;
* no default-information originate&lt;br /&gt;
* exit&lt;br /&gt;
* router ospf6&lt;br /&gt;
* no default-information originate&lt;br /&gt;
* exit&lt;br /&gt;
* exit&lt;br /&gt;
* exit&lt;br /&gt;
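The commands above, as they would be typed in a single `vtysh` session (the neighbor addresses shown are hypothetical placeholders for the real &amp;lt;ipv4&amp;gt; and &amp;lt;ipv6&amp;gt;):

```
vtysh
config
router bgp
 neighbor 192.0.2.1 shutdown
 neighbor 2001:db8::1 shutdown
exit
router ospf
 no default-information originate
exit
router ospf6
 no default-information originate
exit
exit
exit
```

This shuts down the BGP sessions and stops this border from originating the OSPF/OSPFv3 default route, so traffic drains to the other border.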
&lt;br /&gt;
== Reboot the border ==&lt;br /&gt;
&lt;br /&gt;
* After performing the pre-flight checks and disabling the routing, you can choose to wait until traffic has decreased (e.g. using `bmon` to check bandwidth used on interfaces)&lt;br /&gt;
* Execute `reboot` command&lt;br /&gt;
* When the border is back online, execute the relevant items (system uptime, operating system, reboot required) to ensure these will not activate a trigger after disabling maintenance mode&lt;br /&gt;
* If you do not expect any Zabbix alert related to the reboot to be fired, delete the maintenance period&lt;br /&gt;
&lt;br /&gt;
== Troubleshooting ==&lt;br /&gt;
&lt;br /&gt;
Undoing the shutdown of the neighbors can be done by invoking `no neighbor &amp;lt;ipv4&amp;gt;/&amp;lt;ipv6&amp;gt; shutdown` in the `router bgp` part of the configuration.&lt;br /&gt;
&lt;br /&gt;
And the `no default-information originate` can be undone by invoking `default-information originate` in the correct ospf part of the configuration (ospf or ospf6, depending on which one you wish to re-enable).&lt;br /&gt;
&lt;br /&gt;
A reload/restart of the service will also reset to normal configuration.&lt;/div&gt;</summary>
		<author><name>Tannaz</name></author>
	</entry>
	<entry>
		<id>https://docs.delftsolutions.nl/index.php?title=User:Tannaz&amp;diff=371</id>
		<title>User:Tannaz</title>
		<link rel="alternate" type="text/html" href="https://docs.delftsolutions.nl/index.php?title=User:Tannaz&amp;diff=371"/>
		<updated>2024-07-04T13:15:52Z</updated>

		<summary type="html">&lt;p&gt;Tannaz: Blanked the page&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Tannaz</name></author>
	</entry>
	<entry>
		<id>https://docs.delftsolutions.nl/index.php?title=Creating_a_VM_on_Hetzner&amp;diff=370</id>
		<title>Creating a VM on Hetzner</title>
		<link rel="alternate" type="text/html" href="https://docs.delftsolutions.nl/index.php?title=Creating_a_VM_on_Hetzner&amp;diff=370"/>
		<updated>2024-07-04T13:15:43Z</updated>

		<summary type="html">&lt;p&gt;Tannaz: Created page with &amp;quot;== Creating a Virtual Server (VM) on Hetzner == === Step 1: Generate an RSA Key Pair === First, you need to create an RSA key pair to have a public key for your server. &amp;gt;&amp;gt; ssh-keygen -t rsa  === Step 2: Create an account on Hetzner to get access to their development cloud === === Step 3: Set Up a Virtual Server on Hetzner === Ensure you have access to create a virtual server on Hetzner. Follow the instructions in this article to create a cloud server on the Hetzner Cloud...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Creating a Virtual Server (VM) on Hetzner ==&lt;br /&gt;
=== Step 1: Generate an RSA Key Pair ===&lt;br /&gt;
First, you need to create an RSA key pair to have a public key for your server.&lt;br /&gt;
&amp;gt;&amp;gt; ssh-keygen -t rsa&lt;br /&gt;
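A concrete sketch of this step (the output file name and comment are hypothetical; in practice you would accept the default path and set a real passphrase):

```shell
# Generate a 4096-bit RSA key pair without a passphrase, written to ./hetzner_demo_key
ssh-keygen -t rsa -b 4096 -N "" -f ./hetzner_demo_key -C "hetzner-demo"
# The .pub half is what you paste into the Hetzner Cloud Console
cat ./hetzner_demo_key.pub
```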
&lt;br /&gt;
=== Step 2: Create an account on Hetzner to get access to their development cloud ===&lt;br /&gt;
=== Step 3: Set Up a Virtual Server on Hetzner ===&lt;br /&gt;
Ensure you have access to create a virtual server on Hetzner. Follow the instructions in this article to create a cloud server on the Hetzner Cloud Console: Creating a Server on Hetzner.&lt;br /&gt;
=== Step 4: Configure Your Server ===&lt;br /&gt;
You will likely need both IPv4 and IPv6 addresses for your VM, so make sure to configure both.&lt;br /&gt;
Choose the latest version of Debian as your operating system and complete the configuration based on your needs. Name the server using the format &amp;lt;your-name&amp;gt;.dev.delftsolutions.nl, replacing &amp;lt;your-name&amp;gt; with your own name.&lt;br /&gt;
Once configured, purchase your virtual server. This will take you to the new virtual server page, where you can see both your IPv4 and IPv6 addresses.&lt;br /&gt;
=== Step 5: Update DNS Repository ===&lt;br /&gt;
Add your IPv4 and IPv6 addresses to the DNS-basic repository and create a merge request.&lt;br /&gt;
For the IPv6 address, make sure the “/64” at the end is replaced by 1, just like the other IPv6 entries in the script, and try to match the format of the other lines.&lt;br /&gt;
=== Step 6: Generate a Fingerprint ===&lt;br /&gt;
Your VM will generate SSH host keys whose fingerprints let clients recognize the server. A fingerprint acts as a unique identifier for the server and is stored locally.&lt;br /&gt;
Use the following command to add the fingerprint to the DNS system:&lt;br /&gt;
&amp;gt;&amp;gt; ssh-keyscan -D &amp;lt;your-virtual-server-IPv4&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This command will generate the fingerprint records that you need to add to the DNS-basic repository (make sure to remove any lines starting with a semicolon and replace the leading IP with your server name, again just like the other lines in there).&lt;br /&gt;
=== Step 7: Test Your Virtual Server ===&lt;br /&gt;
To ensure your virtual server is working correctly, use these commands:&lt;br /&gt;
&amp;gt;&amp;gt; dig @ns1.delftsolutions.nl.&lt;br /&gt;
&amp;gt;&amp;gt; dig &amp;lt;your-name-server&amp;gt; @ns1.delftsolutions.nl.&lt;br /&gt;
&amp;gt;&amp;gt; dig &amp;lt;your-name-server&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These commands verify that your name server is functioning properly and that you have access to ns1.delftsolutions.nl. They are not strictly necessary, but they are fun to do :)&lt;br /&gt;
=== Step 8: SSH into Your Virtual Server === &lt;br /&gt;
Once everything is verified and functioning correctly, you can SSH into your virtual server using the following command:&lt;br /&gt;
&amp;gt;&amp;gt; ssh root@&amp;lt;your-name-server&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Replace &amp;lt;your-name-server&amp;gt; with the actual name of your server; using root@ in front of it means you log in as root, so you can make changes on your virtual server. &lt;br /&gt;
The first time you connect to your virtual server, it will ask “Are you sure you want to continue connecting (yes/no/[fingerprint])?”. When you answer yes, the host is permanently added to your known hosts, and you need to enter the passphrase of your RSA key pair. &lt;br /&gt;
Then boom, you have your virtual server :)&lt;/div&gt;</summary>
		<author><name>Tannaz</name></author>
	</entry>
	<entry>
		<id>https://docs.delftsolutions.nl/index.php?title=Internal&amp;diff=369</id>
		<title>Internal</title>
		<link rel="alternate" type="text/html" href="https://docs.delftsolutions.nl/index.php?title=Internal&amp;diff=369"/>
		<updated>2024-07-04T13:15:33Z</updated>

		<summary type="html">&lt;p&gt;Tannaz: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Finance ==&lt;br /&gt;
&lt;br /&gt;
=== Exact ===&lt;br /&gt;
&lt;br /&gt;
* [[booking bonus|Booking bonus]]&lt;br /&gt;
* [[booking wages|Booking wages]]&lt;br /&gt;
* [[new receipt|Enter a new receipt]]&lt;br /&gt;
* [[reconciliation|Reconciliation of transaction]]&lt;br /&gt;
* [[invoicing|Send an invoice]]&lt;br /&gt;
* [[payment reminders|Send payment reminder]]&lt;br /&gt;
* [[invoice approval|Process for approving invoices (/filed receipts)]]&lt;br /&gt;
&lt;br /&gt;
=== Bunq ===&lt;br /&gt;
&lt;br /&gt;
* [[top up account|Top up expense account]]&lt;br /&gt;
&lt;br /&gt;
== Work Process ==&lt;br /&gt;
&lt;br /&gt;
* [[Definition of done|Definition of Done]]&lt;br /&gt;
* [[Incident Handling|Incident Handling]]&lt;br /&gt;
* [[SRE Maintenance|SRE Maintenance]]&lt;br /&gt;
&lt;br /&gt;
== Internal Process ==&lt;br /&gt;
* [[Starting work for a new client]]&lt;br /&gt;
* [[12 percent|12% time]]&lt;br /&gt;
* [[Annual leave|Annual leave]]&lt;br /&gt;
* [[Bonus allocation|Bonus allocation]]&lt;br /&gt;
* [[Calamity leave|Calamity leave]]&lt;br /&gt;
* [[Overtime|Overtime]]&lt;br /&gt;
* [[Retrospectives|Retrospectives]]&lt;br /&gt;
* [[Sick leave|Sick leave]]&lt;br /&gt;
* [[Training and self-study|Training and Self-Study]]&lt;br /&gt;
&lt;br /&gt;
== Projects ==&lt;br /&gt;
&lt;br /&gt;
* Era Inventory [[project_era_inventory_api|API Description]]&lt;br /&gt;
&lt;br /&gt;
== SRE ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;To be further populated with guide from drive&#039;&#039;&lt;br /&gt;
* [[create gitlab runner host|Create a GitLab runner host]]&lt;br /&gt;
* [[vm setup|Create a (Debian) VM]]&lt;br /&gt;
* [[border reboot|Reboot border without downtime]]&lt;br /&gt;
* [[WS Proxmox node reboot|Reboot WS Proxmox node without downtime]]&lt;br /&gt;
* [[Resize VM Disk]]&lt;br /&gt;
* [[SRE tools]]&lt;br /&gt;
* [[Enroll Mac in Kerberos]]&lt;br /&gt;
* [[Creating a VM on Hetzner]]&lt;br /&gt;
&lt;br /&gt;
== Other ==&lt;br /&gt;
&lt;br /&gt;
* [[stack|Greenfield stack]]&lt;br /&gt;
* [[standard tools|Standard Tools]]&lt;br /&gt;
* [[list of unfurl debuggers|List of unfurl debuggers]]&lt;br /&gt;
* [[Recommended suppliers]]&lt;/div&gt;</summary>
		<author><name>Tannaz</name></author>
	</entry>
	<entry>
		<id>https://docs.delftsolutions.nl/index.php?title=User:Tannaz&amp;diff=368</id>
		<title>User:Tannaz</title>
		<link rel="alternate" type="text/html" href="https://docs.delftsolutions.nl/index.php?title=User:Tannaz&amp;diff=368"/>
		<updated>2024-07-04T13:07:17Z</updated>

		<summary type="html">&lt;p&gt;Tannaz: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Creating a Virtual Server (VM) on Hetzner ==&lt;br /&gt;
=== Step 1: Generate an RSA Key Pair ===&lt;br /&gt;
First, you need to create an RSA key pair to have a public key for your server.&lt;br /&gt;
&amp;gt;&amp;gt; ssh-keygen -t rsa&lt;br /&gt;
&lt;br /&gt;
=== Step 2: Create an account on Hetzner to get access to their development cloud ===&lt;br /&gt;
=== Step 3: Set Up a Virtual Server on Hetzner ===&lt;br /&gt;
Ensure you have access to create a virtual server on Hetzner. Follow the instructions in this article to create a cloud server on the Hetzner Cloud Console: Creating a Server on Hetzner.&lt;br /&gt;
=== Step 4: Configure Your Server ===&lt;br /&gt;
You will likely need both IPv4 and IPv6 addresses for your VM, so make sure to configure both.&lt;br /&gt;
Choose the latest version of Debian as your operating system and complete the configuration based on your needs. Name the server using the format &amp;lt;your-name&amp;gt;.dev.delftsolutions.nl, replacing &amp;lt;your-name&amp;gt; with your actual name.&lt;br /&gt;
Once configured, purchase your virtual server. This will take you to the new virtual server page, where you can see both your IPv4 and IPv6 addresses.&lt;br /&gt;
=== Step 5: Update DNS Repository ===&lt;br /&gt;
Add your IPv4 and IPv6 addresses to the DNS-basic repository and create a merge request.&lt;br /&gt;
For the IPv6 address, replace the trailing “/64” with a 1, just like the other IPv6 entries in the file, and keep the same formatting as the existing lines.&lt;br /&gt;
=== Step 6: Generate a Fingerprint ===&lt;br /&gt;
Your VM generates SSH host keys on first boot. Their fingerprints act as unique identifiers for the server and are stored locally on the VM.&lt;br /&gt;
Use the following command to add the fingerprint to the DNS system:&lt;br /&gt;
&amp;gt;&amp;gt; ssh-keyscan -D &amp;lt;your-virtual-server-IPv4&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This command prints SSHFP records that you need to add to the DNS-basic repository (remove any lines starting with a semicolon and replace the leading IP address with your server's name, just like the other lines in there).&lt;br /&gt;
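As a side note, the digest in such an SSHFP record can also be derived by hand: the SHA-256 fingerprint is the SHA-256 digest of the base64-decoded public-key blob (per RFC 6594). A minimal Python sketch, assuming you feed it one line of an OpenSSH public-key file; the helper name is ours:

```python
import base64
import hashlib

def sshfp_sha256(pubkey_line: str) -> str:
    """Return the hex SHA-256 digest used in an SSHFP type-2 record,
    given one OpenSSH public-key line (e.g. ssh_host_ed25519_key.pub)."""
    # An OpenSSH public-key line looks like: "keytype base64-blob comment"
    _key_type, b64_blob = pubkey_line.split()[:2]
    # The fingerprint is the SHA-256 of the decoded key blob.
    return hashlib.sha256(base64.b64decode(b64_blob)).hexdigest()
```

Comparing this digest against the ssh-keyscan output is a quick way to confirm you copied the right record into the repository.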
=== Step 7: Test Your Virtual Server ===&lt;br /&gt;
To ensure your virtual server is working correctly, use these commands:&lt;br /&gt;
&amp;gt;&amp;gt; dig @ns1.delftsolutions.nl.&lt;br /&gt;
&amp;gt;&amp;gt; dig &amp;lt;your-name-server&amp;gt; @ns1.delftsolutions.nl.&lt;br /&gt;
&amp;gt;&amp;gt; dig &amp;lt;your-name-server&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These commands verify that your server's name resolves properly and that you can reach ns1.delftsolutions.nl. They are optional, but a useful sanity check :)&lt;br /&gt;
=== Step 8: SSH into Your Virtual Server === &lt;br /&gt;
Once everything is verified and functioning correctly, you can SSH into your virtual server using the following command:&lt;br /&gt;
&amp;gt;&amp;gt; ssh root@&amp;lt;your-name-server&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Replace &amp;lt;your-name-server&amp;gt; with the actual name of your server. Logging in as root gives you the privileges needed to make changes on your virtual server.&lt;br /&gt;
The first time you connect to your virtual server, SSH asks “Are you sure you want to continue connecting (yes/no/[fingerprint])?”. Answer yes to add the host to your known hosts permanently, then enter the passphrase of your RSA key pair.&lt;br /&gt;
Then boom, you have your virtual server :)&lt;/div&gt;</summary>
		<author><name>Tannaz</name></author>
	</entry>
	<entry>
		<id>https://docs.delftsolutions.nl/index.php?title=Tannaz&amp;diff=367</id>
		<title>Tannaz</title>
		<link rel="alternate" type="text/html" href="https://docs.delftsolutions.nl/index.php?title=Tannaz&amp;diff=367"/>
		<updated>2024-07-04T13:06:50Z</updated>

		<summary type="html">&lt;p&gt;Tannaz: Blanked the page&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Tannaz</name></author>
	</entry>
	<entry>
		<id>https://docs.delftsolutions.nl/index.php?title=DelftSolutions:Tannaz&amp;diff=366</id>
		<title>DelftSolutions:Tannaz</title>
		<link rel="alternate" type="text/html" href="https://docs.delftsolutions.nl/index.php?title=DelftSolutions:Tannaz&amp;diff=366"/>
		<updated>2024-07-04T11:48:59Z</updated>

		<summary type="html">&lt;p&gt;Tannaz: Blanked the page&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Tannaz</name></author>
	</entry>
	<entry>
		<id>https://docs.delftsolutions.nl/index.php?title=User:Tannaz&amp;diff=365</id>
		<title>User:Tannaz</title>
		<link rel="alternate" type="text/html" href="https://docs.delftsolutions.nl/index.php?title=User:Tannaz&amp;diff=365"/>
		<updated>2024-07-04T11:48:31Z</updated>

		<summary type="html">&lt;p&gt;Tannaz: Blanked the page&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Tannaz</name></author>
	</entry>
	<entry>
		<id>https://docs.delftsolutions.nl/index.php?title=DelftSolutions:Tannaz&amp;diff=364</id>
		<title>DelftSolutions:Tannaz</title>
		<link rel="alternate" type="text/html" href="https://docs.delftsolutions.nl/index.php?title=DelftSolutions:Tannaz&amp;diff=364"/>
		<updated>2024-07-04T11:45:34Z</updated>

		<summary type="html">&lt;p&gt;Tannaz: Tannaz moved page DelftSolutions:Tannaz to Tannaz&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[Tannaz]]&lt;/div&gt;</summary>
		<author><name>Tannaz</name></author>
	</entry>
	<entry>
		<id>https://docs.delftsolutions.nl/index.php?title=Tannaz&amp;diff=363</id>
		<title>Tannaz</title>
		<link rel="alternate" type="text/html" href="https://docs.delftsolutions.nl/index.php?title=Tannaz&amp;diff=363"/>
		<updated>2024-07-04T11:45:34Z</updated>

		<summary type="html">&lt;p&gt;Tannaz: Tannaz moved page DelftSolutions:Tannaz to Tannaz&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[DelftSolutions:Internal]]&lt;/div&gt;</summary>
		<author><name>Tannaz</name></author>
	</entry>
	<entry>
		<id>https://docs.delftsolutions.nl/index.php?title=Tannaz&amp;diff=362</id>
		<title>Tannaz</title>
		<link rel="alternate" type="text/html" href="https://docs.delftsolutions.nl/index.php?title=Tannaz&amp;diff=362"/>
		<updated>2024-07-04T11:44:17Z</updated>

		<summary type="html">&lt;p&gt;Tannaz: Tannaz moved page DelftSolutions:Tannaz to DelftSolutions:Internal&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[DelftSolutions:Internal]]&lt;/div&gt;</summary>
		<author><name>Tannaz</name></author>
	</entry>
	<entry>
		<id>https://docs.delftsolutions.nl/index.php?title=DelftSolutions:Internal&amp;diff=361</id>
		<title>DelftSolutions:Internal</title>
		<link rel="alternate" type="text/html" href="https://docs.delftsolutions.nl/index.php?title=DelftSolutions:Internal&amp;diff=361"/>
		<updated>2024-07-04T11:44:17Z</updated>

		<summary type="html">&lt;p&gt;Tannaz: Tannaz moved page DelftSolutions:Tannaz to DelftSolutions:Internal&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Creating a Virtual Server (VM) on Hetzner ==&lt;br /&gt;
=== Step 1: Generate an RSA Key Pair ===&lt;br /&gt;
First, you need to create an RSA key pair to have a public key for your server.&lt;br /&gt;
&amp;gt;&amp;gt; ssh-keygen -t rsa&lt;br /&gt;
&lt;br /&gt;
=== Step 2: Create an account on Hetzner to get access to their development cloud ===&lt;br /&gt;
=== Step 3: Set Up a Virtual Server on Hetzner ===&lt;br /&gt;
Ensure you have access to create a virtual server on Hetzner. Follow the instructions in this article to create a cloud server on the Hetzner Cloud Console: Creating a Server on Hetzner.&lt;br /&gt;
=== Step 4: Configure Your Server ===&lt;br /&gt;
You will likely need both IPv4 and IPv6 addresses for your VM, so make sure to configure both.&lt;br /&gt;
Choose the latest version of Debian as your operating system and complete the configuration based on your needs. Name the server using the format &amp;lt;your-name&amp;gt;.dev.delftsolutions.nl, replacing &amp;lt;your-name&amp;gt; with your actual name.&lt;br /&gt;
Once configured, purchase your virtual server. This will take you to the new virtual server page, where you can see both your IPv4 and IPv6 addresses.&lt;br /&gt;
=== Step 5: Update DNS Repository ===&lt;br /&gt;
Add your IPv4 and IPv6 addresses to the DNS-basic repository and create a merge request.&lt;br /&gt;
For the IPv6 address, replace the trailing “/64” with a 1, just like the other IPv6 entries in the file, and keep the same formatting as the existing lines.&lt;br /&gt;
=== Step 6: Generate a Fingerprint ===&lt;br /&gt;
Your VM generates SSH host keys on first boot. Their fingerprints act as unique identifiers for the server and are stored locally on the VM.&lt;br /&gt;
Use the following command to add the fingerprint to the DNS system:&lt;br /&gt;
&amp;gt;&amp;gt; ssh-keyscan -D &amp;lt;your-virtual-server-IPv4&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This command prints SSHFP records that you need to add to the DNS-basic repository (remove any lines starting with a semicolon and replace the leading IP address with your server's name, just like the other lines in there).&lt;br /&gt;
=== Step 7: Test Your Virtual Server ===&lt;br /&gt;
To ensure your virtual server is working correctly, use these commands:&lt;br /&gt;
&amp;gt;&amp;gt; dig @ns1.delftsolutions.nl.&lt;br /&gt;
&amp;gt;&amp;gt; dig &amp;lt;your-name-server&amp;gt; @ns1.delftsolutions.nl.&lt;br /&gt;
&amp;gt;&amp;gt; dig &amp;lt;your-name-server&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These commands verify that your server's name resolves properly and that you can reach ns1.delftsolutions.nl. They are optional, but a useful sanity check :)&lt;br /&gt;
=== Step 8: SSH into Your Virtual Server === &lt;br /&gt;
Once everything is verified and functioning correctly, you can SSH into your virtual server using the following command:&lt;br /&gt;
&amp;gt;&amp;gt; ssh root@&amp;lt;your-name-server&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Replace &amp;lt;your-name-server&amp;gt; with the actual name of your server. Logging in as root gives you the privileges needed to make changes on your virtual server.&lt;br /&gt;
The first time you connect to your virtual server, SSH asks “Are you sure you want to continue connecting (yes/no/[fingerprint])?”. Answer yes to add the host to your known hosts permanently, then enter the passphrase of your RSA key pair.&lt;br /&gt;
Then boom, you have your virtual server :)&lt;/div&gt;</summary>
		<author><name>Tannaz</name></author>
	</entry>
	<entry>
		<id>https://docs.delftsolutions.nl/index.php?title=Category:Tannaz&amp;diff=360</id>
		<title>Category:Tannaz</title>
		<link rel="alternate" type="text/html" href="https://docs.delftsolutions.nl/index.php?title=Category:Tannaz&amp;diff=360"/>
		<updated>2024-07-04T11:43:59Z</updated>

		<summary type="html">&lt;p&gt;Tannaz: Tannaz moved page Category:Tannaz to DelftSolutions:Tannaz&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[DelftSolutions:Tannaz]]&lt;/div&gt;</summary>
		<author><name>Tannaz</name></author>
	</entry>
	<entry>
		<id>https://docs.delftsolutions.nl/index.php?title=DelftSolutions:Internal&amp;diff=359</id>
		<title>DelftSolutions:Internal</title>
		<link rel="alternate" type="text/html" href="https://docs.delftsolutions.nl/index.php?title=DelftSolutions:Internal&amp;diff=359"/>
		<updated>2024-07-04T11:43:59Z</updated>

		<summary type="html">&lt;p&gt;Tannaz: Tannaz moved page Category:Tannaz to DelftSolutions:Tannaz&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Creating a Virtual Server (VM) on Hetzner ==&lt;br /&gt;
=== Step 1: Generate an RSA Key Pair ===&lt;br /&gt;
First, you need to create an RSA key pair to have a public key for your server.&lt;br /&gt;
&amp;gt;&amp;gt; ssh-keygen -t rsa&lt;br /&gt;
&lt;br /&gt;
=== Step 2: Create an account on Hetzner to get access to their development cloud ===&lt;br /&gt;
=== Step 3: Set Up a Virtual Server on Hetzner ===&lt;br /&gt;
Ensure you have access to create a virtual server on Hetzner. Follow the instructions in this article to create a cloud server on the Hetzner Cloud Console: Creating a Server on Hetzner.&lt;br /&gt;
=== Step 4: Configure Your Server ===&lt;br /&gt;
You will likely need both IPv4 and IPv6 addresses for your VM, so make sure to configure both.&lt;br /&gt;
Choose the latest version of Debian as your operating system and complete the configuration based on your needs. Name the server using the format &amp;lt;your-name&amp;gt;.dev.delftsolutions.nl, replacing &amp;lt;your-name&amp;gt; with your actual name.&lt;br /&gt;
Once configured, purchase your virtual server. This will take you to the new virtual server page, where you can see both your IPv4 and IPv6 addresses.&lt;br /&gt;
=== Step 5: Update DNS Repository ===&lt;br /&gt;
Add your IPv4 and IPv6 addresses to the DNS-basic repository and create a merge request.&lt;br /&gt;
For the IPv6 address, replace the trailing “/64” with a 1, just like the other IPv6 entries in the file, and keep the same formatting as the existing lines.&lt;br /&gt;
=== Step 6: Generate a Fingerprint ===&lt;br /&gt;
Your VM generates SSH host keys on first boot. Their fingerprints act as unique identifiers for the server and are stored locally on the VM.&lt;br /&gt;
Use the following command to add the fingerprint to the DNS system:&lt;br /&gt;
&amp;gt;&amp;gt; ssh-keyscan -D &amp;lt;your-virtual-server-IPv4&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This command prints SSHFP records that you need to add to the DNS-basic repository (remove any lines starting with a semicolon and replace the leading IP address with your server's name, just like the other lines in there).&lt;br /&gt;
=== Step 7: Test Your Virtual Server ===&lt;br /&gt;
To ensure your virtual server is working correctly, use these commands:&lt;br /&gt;
&amp;gt;&amp;gt; dig @ns1.delftsolutions.nl.&lt;br /&gt;
&amp;gt;&amp;gt; dig &amp;lt;your-name-server&amp;gt; @ns1.delftsolutions.nl.&lt;br /&gt;
&amp;gt;&amp;gt; dig &amp;lt;your-name-server&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These commands verify that your server's name resolves properly and that you can reach ns1.delftsolutions.nl. They are optional, but a useful sanity check :)&lt;br /&gt;
=== Step 8: SSH into Your Virtual Server === &lt;br /&gt;
Once everything is verified and functioning correctly, you can SSH into your virtual server using the following command:&lt;br /&gt;
&amp;gt;&amp;gt; ssh root@&amp;lt;your-name-server&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Replace &amp;lt;your-name-server&amp;gt; with the actual name of your server. Logging in as root gives you the privileges needed to make changes on your virtual server.&lt;br /&gt;
The first time you connect to your virtual server, SSH asks “Are you sure you want to continue connecting (yes/no/[fingerprint])?”. Answer yes to add the host to your known hosts permanently, then enter the passphrase of your RSA key pair.&lt;br /&gt;
Then boom, you have your virtual server :)&lt;/div&gt;</summary>
		<author><name>Tannaz</name></author>
	</entry>
	<entry>
		<id>https://docs.delftsolutions.nl/index.php?title=User:Tannaz&amp;diff=358</id>
		<title>User:Tannaz</title>
		<link rel="alternate" type="text/html" href="https://docs.delftsolutions.nl/index.php?title=User:Tannaz&amp;diff=358"/>
		<updated>2024-07-04T11:43:41Z</updated>

		<summary type="html">&lt;p&gt;Tannaz: Tannaz moved page User:Tannaz to Category:Tannaz&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[:Category:Tannaz]]&lt;/div&gt;</summary>
		<author><name>Tannaz</name></author>
	</entry>
	<entry>
		<id>https://docs.delftsolutions.nl/index.php?title=DelftSolutions:Internal&amp;diff=357</id>
		<title>DelftSolutions:Internal</title>
		<link rel="alternate" type="text/html" href="https://docs.delftsolutions.nl/index.php?title=DelftSolutions:Internal&amp;diff=357"/>
		<updated>2024-07-04T11:43:40Z</updated>

		<summary type="html">&lt;p&gt;Tannaz: Tannaz moved page User:Tannaz to Category:Tannaz&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Creating a Virtual Server (VM) on Hetzner ==&lt;br /&gt;
=== Step 1: Generate an RSA Key Pair ===&lt;br /&gt;
First, you need to create an RSA key pair to have a public key for your server.&lt;br /&gt;
&amp;gt;&amp;gt; ssh-keygen -t rsa&lt;br /&gt;
&lt;br /&gt;
=== Step 2: Create an account on Hetzner to get access to their development cloud ===&lt;br /&gt;
=== Step 3: Set Up a Virtual Server on Hetzner ===&lt;br /&gt;
Ensure you have access to create a virtual server on Hetzner. Follow the instructions in this article to create a cloud server on the Hetzner Cloud Console: Creating a Server on Hetzner.&lt;br /&gt;
=== Step 4: Configure Your Server ===&lt;br /&gt;
You will likely need both IPv4 and IPv6 addresses for your VM, so make sure to configure both.&lt;br /&gt;
Choose the latest version of Debian as your operating system and complete the configuration based on your needs. Name the server using the format &amp;lt;your-name&amp;gt;.dev.delftsolutions.nl, replacing &amp;lt;your-name&amp;gt; with your actual name.&lt;br /&gt;
Once configured, purchase your virtual server. This will take you to the new virtual server page, where you can see both your IPv4 and IPv6 addresses.&lt;br /&gt;
=== Step 5: Update DNS Repository ===&lt;br /&gt;
Add your IPv4 and IPv6 addresses to the DNS-basic repository and create a merge request.&lt;br /&gt;
For the IPv6 address, replace the trailing “/64” with a 1, just like the other IPv6 entries in the file, and keep the same formatting as the existing lines.&lt;br /&gt;
=== Step 6: Generate a Fingerprint ===&lt;br /&gt;
Your VM generates SSH host keys on first boot. Their fingerprints act as unique identifiers for the server and are stored locally on the VM.&lt;br /&gt;
Use the following command to add the fingerprint to the DNS system:&lt;br /&gt;
&amp;gt;&amp;gt; ssh-keyscan -D &amp;lt;your-virtual-server-IPv4&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This command prints SSHFP records that you need to add to the DNS-basic repository (remove any lines starting with a semicolon and replace the leading IP address with your server's name, just like the other lines in there).&lt;br /&gt;
=== Step 7: Test Your Virtual Server ===&lt;br /&gt;
To ensure your virtual server is working correctly, use these commands:&lt;br /&gt;
&amp;gt;&amp;gt; dig @ns1.delftsolutions.nl.&lt;br /&gt;
&amp;gt;&amp;gt; dig &amp;lt;your-name-server&amp;gt; @ns1.delftsolutions.nl.&lt;br /&gt;
&amp;gt;&amp;gt; dig &amp;lt;your-name-server&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These commands verify that your server's name resolves properly and that you can reach ns1.delftsolutions.nl. They are optional, but a useful sanity check :)&lt;br /&gt;
=== Step 8: SSH into Your Virtual Server === &lt;br /&gt;
Once everything is verified and functioning correctly, you can SSH into your virtual server using the following command:&lt;br /&gt;
&amp;gt;&amp;gt; ssh root@&amp;lt;your-name-server&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Replace &amp;lt;your-name-server&amp;gt; with the actual name of your server. Logging in as root gives you the privileges needed to make changes on your virtual server.&lt;br /&gt;
The first time you connect to your virtual server, SSH asks “Are you sure you want to continue connecting (yes/no/[fingerprint])?”. Answer yes to add the host to your known hosts permanently, then enter the passphrase of your RSA key pair.&lt;br /&gt;
Then boom, you have your virtual server :)&lt;/div&gt;</summary>
		<author><name>Tannaz</name></author>
	</entry>
	<entry>
		<id>https://docs.delftsolutions.nl/index.php?title=DelftSolutions:Internal&amp;diff=356</id>
		<title>DelftSolutions:Internal</title>
		<link rel="alternate" type="text/html" href="https://docs.delftsolutions.nl/index.php?title=DelftSolutions:Internal&amp;diff=356"/>
		<updated>2024-07-04T11:42:15Z</updated>

		<summary type="html">&lt;p&gt;Tannaz: How to create a cloud server on Hetzner&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Creating a Virtual Server (VM) on Hetzner ==&lt;br /&gt;
=== Step 1: Generate an RSA Key Pair ===&lt;br /&gt;
First, you need to create an RSA key pair to have a public key for your server.&lt;br /&gt;
&amp;gt;&amp;gt; ssh-keygen -t rsa&lt;br /&gt;
&lt;br /&gt;
=== Step 2: Create an account on Hetzner to get access to their development cloud ===&lt;br /&gt;
=== Step 3: Set Up a Virtual Server on Hetzner ===&lt;br /&gt;
Ensure you have access to create a virtual server on Hetzner. Follow the instructions in this article to create a cloud server on the Hetzner Cloud Console: Creating a Server on Hetzner.&lt;br /&gt;
=== Step 4: Configure Your Server ===&lt;br /&gt;
You will likely need both IPv4 and IPv6 addresses for your VM, so make sure to configure both.&lt;br /&gt;
Choose the latest version of Debian as your operating system and complete the configuration based on your needs. Name the server using the format &amp;lt;your-name&amp;gt;.dev.delftsolutions.nl, replacing &amp;lt;your-name&amp;gt; with your actual name.&lt;br /&gt;
Once configured, purchase your virtual server. This will take you to the new virtual server page, where you can see both your IPv4 and IPv6 addresses.&lt;br /&gt;
=== Step 5: Update DNS Repository ===&lt;br /&gt;
Add your IPv4 and IPv6 addresses to the DNS-basic repository and create a merge request.&lt;br /&gt;
For the IPv6 address, replace the trailing “/64” with a 1, just like the other IPv6 entries in the file, and keep the same formatting as the existing lines.&lt;br /&gt;
=== Step 6: Generate a Fingerprint ===&lt;br /&gt;
Your VM generates SSH host keys on first boot. Their fingerprints act as unique identifiers for the server and are stored locally on the VM.&lt;br /&gt;
Use the following command to add the fingerprint to the DNS system:&lt;br /&gt;
&amp;gt;&amp;gt; ssh-keyscan -D &amp;lt;your-virtual-server-IPv4&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This command prints SSHFP records that you need to add to the DNS-basic repository (remove any lines starting with a semicolon and replace the leading IP address with your server's name, just like the other lines in there).&lt;br /&gt;
=== Step 7: Test Your Virtual Server ===&lt;br /&gt;
To ensure your virtual server is working correctly, use these commands:&lt;br /&gt;
&amp;gt;&amp;gt; dig @ns1.delftsolutions.nl.&lt;br /&gt;
&amp;gt;&amp;gt; dig &amp;lt;your-name-server&amp;gt; @ns1.delftsolutions.nl.&lt;br /&gt;
&amp;gt;&amp;gt; dig &amp;lt;your-name-server&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These commands verify that your server's name resolves properly and that you can reach ns1.delftsolutions.nl. They are optional, but a useful sanity check :)&lt;br /&gt;
=== Step 8: SSH into Your Virtual Server === &lt;br /&gt;
Once everything is verified and functioning correctly, you can SSH into your virtual server using the following command:&lt;br /&gt;
&amp;gt;&amp;gt; ssh root@&amp;lt;your-name-server&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Replace &amp;lt;your-name-server&amp;gt; with the actual name of your server. Logging in as root gives you the privileges needed to make changes on your virtual server.&lt;br /&gt;
The first time you connect to your virtual server, SSH asks “Are you sure you want to continue connecting (yes/no/[fingerprint])?”. Answer yes to add the host to your known hosts permanently, then enter the passphrase of your RSA key pair.&lt;br /&gt;
Then boom, you have your virtual server :)&lt;/div&gt;</summary>
		<author><name>Tannaz</name></author>
	</entry>
	<entry>
		<id>https://docs.delftsolutions.nl/index.php?title=Talk:Internal&amp;diff=335</id>
		<title>Talk:Internal</title>
		<link rel="alternate" type="text/html" href="https://docs.delftsolutions.nl/index.php?title=Talk:Internal&amp;diff=335"/>
		<updated>2024-06-20T11:33:44Z</updated>

		<summary type="html">&lt;p&gt;Tannaz: Blanked the page&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Tannaz</name></author>
	</entry>
	<entry>
		<id>https://docs.delftsolutions.nl/index.php?title=Talk:Internal&amp;diff=334</id>
		<title>Talk:Internal</title>
		<link rel="alternate" type="text/html" href="https://docs.delftsolutions.nl/index.php?title=Talk:Internal&amp;diff=334"/>
		<updated>2024-06-20T11:01:04Z</updated>

		<summary type="html">&lt;p&gt;Tannaz: Created page with &amp;quot;== Packets in the Network layer == Packets are small pieces of data that are sent over a network. Data sent over computer networks, such as the Internet, is divided into packets. These packets can take different paths to reach their destination and then recombined by the computer or device that receives them. The network layer is responsible for moving these packets from one place to another. Because data is broken into packets, many people can use the same network at th...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Packets in the Network layer ==&lt;br /&gt;
Packets are small pieces of data that are sent over a network. Data sent over computer networks, such as the Internet, is divided into packets. These packets can take different paths to reach their destination and are then recombined by the computer or device that receives them. The network layer is responsible for moving these packets from one place to another. Because data is broken into packets, many people can use the same network at the same time to exchange data.&lt;br /&gt;
&lt;br /&gt;
=== IP Packets ===&lt;br /&gt;
IP packets are the fundamental units of data transmitted across IP (Internet Protocol) networks: networks that use the Internet Protocol to send and receive data, including the Internet. The protocol routes each packet to its destination. Packets are sometimes defined by the protocol they are using. They contain important information about where a packet is from (its source IP address), where it is going (its destination IP address), how large the packet is, and other information about the data.&lt;br /&gt;
==== How Do IP Packets Work ====&lt;br /&gt;
When we send data over the internet, it is broken down into smaller pieces called packets. Each packet is sent independently and may travel a different path to reach the destination. At the destination, the packets are reassembled to form the original data. An IP packet has two components: the header and the payload. The packet header is a &amp;quot;label&amp;quot; of sorts, which provides information about the packet’s contents, origin, and destination.&lt;br /&gt;
&lt;br /&gt;
In a network, data is sent in small pieces called packets. Each packet has different headers added by various protocols, which are rules for formatting and sending data. These headers help ensure the data gets where it needs to go and is understood by any computer on the network. &lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Packets often have more than one header. Each header is used by a different part of the networking process.&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Protocols are standardized ways to format and handle data so that any computer can understand it. Some common protocols are: &amp;lt;br&amp;gt;&lt;br /&gt;
* Transmission Control Protocol (TCP): Ensures data is sent reliably and in order.&lt;br /&gt;
* Internet Protocol (IP): Routes the packet to its destination.&lt;br /&gt;
&lt;br /&gt;
==== IP Components ====&lt;br /&gt;
* Header&lt;br /&gt;
 &lt;br /&gt;
# Version: Indicates the IP version (IPv4 or IPv6).&lt;br /&gt;
#  Header Length: Specifies the length of the header.&lt;br /&gt;
#  Type of Service: Indicates the priority of the packet.&lt;br /&gt;
#  Total Length: The combined length of the header and payload.&lt;br /&gt;
#  Identification: Used to identify fragments of a larger packet.&lt;br /&gt;
#  Flags: Control or identify fragments.&lt;br /&gt;
#  Fragment Offset: Indicates where in the original packet this fragment belongs.&lt;br /&gt;
#  Time to Live (TTL): Limits the packet&#039;s lifetime to prevent it from circulating indefinitely.&lt;br /&gt;
#  Protocol: Indicates the protocol used in the data portion (e.g., TCP, UDP).&lt;br /&gt;
#  Header Checksum: Used for error-checking the header.&lt;br /&gt;
#  Source IP Address: The IP address of the sender.&lt;br /&gt;
#  Destination IP Address: The IP address of the receiver.&lt;br /&gt;
#  Options: Additional fields that provide extra features, optional.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
* Payload&lt;br /&gt;
The payload is the actual data being transported, such as part of a file, an email, or a web page. This is the information the sender wants to deliver to the receiver.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
When you send an email, the data is broken into packets. Each packet gets:&lt;br /&gt;
&lt;br /&gt;
* IP Header: Contains the sender&#039;s and receiver&#039;s IP addresses, helping the packet find its destination.&lt;br /&gt;
* TCP Header: Ensures the packet arrives reliably and in order.&lt;br /&gt;
&lt;br /&gt;
By adding these headers, protocols make sure that data can travel across the internet smoothly and be reassembled correctly at the destination.&lt;br /&gt;
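The header fields listed above can also be seen in code. Below is a minimal Python sketch that unpacks the fixed 20-byte IPv4 header (field layout per RFC 791); the function name and the sample addresses are ours:

```python
import struct

def parse_ipv4_header(data: bytes) -> dict:
    """Unpack the fixed 20-byte portion of an IPv4 header (RFC 791)."""
    (ver_ihl, _tos, total_len, ident, _flags_frag,
     ttl, proto, _checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", data[:20])
    return {
        "version": ver_ihl // 16,             # high nibble: 4 for IPv4
        "header_length": (ver_ihl % 16) * 4,  # IHL counts 32-bit words
        "total_length": total_len,            # header plus payload, in bytes
        "identification": ident,
        "ttl": ttl,
        "protocol": proto,                    # 6 = TCP, 17 = UDP
        "src": ".".join(str(b) for b in src),
        "dst": ".".join(str(b) for b in dst),
    }
```

Feeding it a packed sample header with TTL 64 and protocol 6 would yield a dictionary describing a TCP packet, which mirrors how routers read these fields to make forwarding decisions.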
&lt;br /&gt;
&lt;br /&gt;
=== Conclusion===&lt;br /&gt;
IP packets are essential for transmitting data across networks. They break data down into manageable chunks and route them efficiently, while higher-level protocols such as TCP ensure the data arrives intact and in the correct order. This process is fundamental to the functioning of the Internet and modern digital communications. In summary, an IP network is any network that uses the Internet Protocol to manage data transmission, making it possible for devices to communicate efficiently.&lt;/div&gt;</summary>
		<author><name>Tannaz</name></author>
	</entry>
</feed>