VR Medical training: the virtual simulation for UniUD X-Ray technologists

An extremely interesting project in MOLO17's serious game portfolio is, without a doubt, the UniUD – CT Trainer, also known by its working title MIMICT (Mimic + CT), as anticipated in the previous article dedicated to the advantages of using AR/VR for medical training. The project was born from a collaboration between MOLO17 and the University of Udine (Italy) and targets the X-Ray technologist degree programme.

Demo – UniUD CT Trainer

The final product is VR simulation software that recreates the experience of performing a broad range of CT (computed tomography) examinations from the X-Ray technologist's point of view, simulating both the interactions with the patient and the use of the scanner.

Although, in this specific project, it was not possible to use technologies that provide more substantial forms of persistence, such as Couchbase, new and different scenarios could still be opened up. For example, solutions for remote supervised training could be developed and, for students, exercises and exams could be delivered directly online, with outcomes that are verifiable without requiring physical presence. This has obvious and interesting implications, as the student is enabled to experience everything from home, a key possibility for any educational institution facing the current post-COVID situation.

Virtual simulation of X-Ray room

Here we will tell you about the rationale behind this tool’s existence and its peculiarities.

Training X-Ray technologists: CT Trainer is the solution

The training course for X-Ray technologist students is strongly connected to the hospital context. According to Italian law, several hundred hours of internship across different radiology specialties, units and modalities are mandatory for the degree to qualify the graduate for the X-Ray Tech Board examination and the professional license.

The exact way the internship is carried out varies from one university or teaching hospital to another, but some elements are constant, such as the closest possible monitoring by a tutor of the one or two students assigned to him or her during normal working hours. The academic year is usually divided into alternating periods: one in which the students learn the theory, and another in which they take their exams and complete the internship.

Pain Points

All of the above creates multiple problems that are hard to manage and that may disrupt the learning experience and quality, while taking a toll on the hospital’s insurance costs.

Law and rules

The use of X-ray radiation on living patients is tightly regulated, and practicing a healthcare or medical profession without a license is harshly punished. This means the student cannot actually deliver the radiation dose to the patient and can only take part in setting up the exam under the closest possible supervision of his or her tutor.

Human interactions

The student, entering a radiology ward for the first time, has to learn quickly how to relate to patients, other X-Ray techs and the other kinds of medical and healthcare professionals. At the same time, he or she has to learn to use the machines and the connected devices.

Interaction with the patient is also strictly regulated by law and carries strong legal implications that contribute to rising insurance costs.

Machine availability

Some modalities (especially CT and MRI) are a precious resource and are usually in constant clinical use, so they are not available for students to experiment on with mannequins and dummies.

Clinical risk and insurance claims

The presence of trainee operators in the field is always considered an additional risk factor, both clinically and in terms of possible insurance claims and costs, in an environment that is already among the riskiest.

Given the considerations and critical issues above, one present and future solution in healthcare is the use of simulation technologies. That said, classic simulation approaches with mannequins and dummies are not always feasible in radiology because of machine availability, costs and the risks connected with the use of X-ray radiation.

Virtual simulation, the answer is: CT Trainer

As some of our followers already know, I experienced all these problems and pain points first hand, having walked the path described above from earning the degree and the professional license to becoming a registered X-Ray technologist, although I am not practicing the profession at the moment. Together with the heads of the academic course, we envisioned the idea of building such a tool more than once. Finally, thanks to MOLO17, it became a reality.

The final product, based on the Unity engine, is very similar to an FPS game, where the student can freely move around a model of a CT radiology ward, perfectly fitted with a control room, gantry room, infirmary and waiting room.

The student meets the patients, picks them up from the waiting room and talks to them through a multiple-choice interface, in which some answers carry negative clinical and legal consequences, others are only partially correct, and others represent best practice.
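
To make the idea concrete, here is a purely hypothetical sketch of how such scored dialogue options could be modelled; the actual CT Trainer is built in Unity, so this Python snippet only illustrates the scoring logic and is not the project's code.

```python
# Hypothetical model of a scored patient-dialogue choice (illustration only).
from dataclasses import dataclass
from enum import Enum

class Outcome(Enum):
    BEST_PRACTICE = "best practice"
    PARTIALLY_CORRECT = "partially correct"
    NEGATIVE = "negative clinical/legal consequences"

@dataclass
class DialogueOption:
    text: str
    outcome: Outcome
    feedback: str

options = [
    DialogueOption("Verify the patient's identity and explain the exam", Outcome.BEST_PRACTICE,
                   "Correct: identification and informed communication come first."),
    DialogueOption("Explain the exam without checking who the patient is", Outcome.PARTIALLY_CORRECT,
                   "Communication is good, but skipping identification is a safety risk."),
    DialogueOption("Send the patient straight to the gantry room", Outcome.NEGATIVE,
                   "Skipping identification and consent has clinical and legal consequences."),
]

def score(choice: DialogueOption) -> int:
    """Map each outcome class to points, as a trainer's grading could do."""
    return {Outcome.BEST_PRACTICE: 2, Outcome.PARTIALLY_CORRECT: 1, Outcome.NEGATIVE: 0}[choice.outcome]

for opt in options:
    print(score(opt), opt.text, "->", opt.feedback)
```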

The patient in the CT machine during the exam

After receiving the patient, the student walks him or her to the infirmary or straight to the machine, depending on whether a cannula for the contrast agent needs to be placed, as dictated by the current exam protocol, which the student must know.

In the gantry room, the student tells the patient to lie down on the CT table, but not before asking him or her to remove any clothing and personal belongings that might interfere with the examination.

The most technically complex and genuinely challenging part for MOLO17's developers was the control console in the virtual simulation. We simulated a typical CT interface with a multi-monitor setup.

Of the three monitors, two are dedicated to the CT machine, while the third is the remote control for the automatic contrast agent injector. On the latter, the student has to set the injection speed and the doses of saline and iodinated contrast, if the exam protocol requires it.

The student then needs to set all the parameters required to execute the exam protocol, along with the image reconstruction settings, learning to optimize image quality while balancing it against the principle of a dose that is "as low as reasonably achievable" (ALARA).
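
As a rough illustration of what "knowing the protocol" means in practice, here is a hypothetical sketch of a protocol parameter check; the names, values and ranges are invented for the example and are not taken from the actual trainer. Dose is treated as scaling roughly linearly with mAs, which is the usual first-order approximation behind the ALARA trade-off.

```python
# Hypothetical representation of exam-protocol parameters and a simple check (illustration only).
from dataclasses import dataclass

@dataclass
class CtProtocol:
    name: str
    kv: int                      # tube voltage expected by the protocol
    mas_range: tuple[int, int]   # acceptable tube current-time product (mAs)
    contrast: bool               # whether iodinated contrast is required

ABDOMEN = CtProtocol(name="Abdomen with contrast", kv=120, mas_range=(150, 250), contrast=True)

def check_settings(protocol: CtProtocol, kv: int, mas: int) -> list[str]:
    """Return feedback messages, similar in spirit to how a trainer could grade the settings."""
    feedback = []
    if kv != protocol.kv:
        feedback.append(f"kV should be {protocol.kv} for this protocol")
    low, high = protocol.mas_range
    if mas > high:
        feedback.append("mAs above protocol range: the image-quality gain does not justify the extra dose (ALARA)")
    elif mas < low:
        feedback.append("mAs below protocol range: the image may be too noisy to be diagnostic")
    return feedback or ["Settings within protocol"]

print(check_settings(ABDOMEN, kv=120, mas=300))
```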

CT interface

If the controls feel like the real thing, the resulting images are even more real. All exams are created using real DICOM images acquired during real examinations. Besides that, it is possible to load a CT DICOM dataset from an existing real exam, so the student can experience that exact case. This feature is really useful for letting students learn protocols that are only seldom performed, or how to handle rare clinical cases. Basically, we built a DICOM viewer inside a virtual world.
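
The production viewer lives inside the Unity application, but as a rough sketch of what loading and windowing a CT DICOM series involves, here is a minimal Python example using pydicom and numpy; the dataset path and window values are assumptions.

```python
# A minimal sketch of loading a CT series and applying a display window,
# roughly what any DICOM viewer has to do before showing images.
# Assumes pydicom and numpy are installed and the series is uncompressed.
from pathlib import Path

import numpy as np
import pydicom

def load_ct_series(folder: str) -> np.ndarray:
    """Read every slice in a folder and return a volume in Hounsfield units."""
    slices = [pydicom.dcmread(p) for p in sorted(Path(folder).glob("*.dcm"))]
    slices.sort(key=lambda ds: int(ds.InstanceNumber))            # order along the scan axis
    volume = np.stack([ds.pixel_array.astype(np.float32) for ds in slices])
    slope = float(slices[0].RescaleSlope)                         # convert raw values to HU
    intercept = float(slices[0].RescaleIntercept)
    return volume * slope + intercept

def apply_window(hu: np.ndarray, center: float, width: float) -> np.ndarray:
    """Map a Hounsfield range to 0-255 greyscale, like the window/level controls."""
    low, high = center - width / 2, center + width / 2
    return (np.clip(hu, low, high) - low) / (high - low) * 255

if __name__ == "__main__":
    volume = load_ct_series("datasets/abdomen_ct")                # hypothetical dataset path
    preview = apply_window(volume[len(volume) // 2], center=40, width=400)  # soft-tissue window
    print(preview.shape, preview.min(), preview.max())
```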

User interaction with the images closely mirrors what happens on a real machine. After the exam, the images can be reviewed and discussed with the tutor.

When the examination is complete, the student informs the patient in a manner appropriate to the clinical condition and the activities performed. For example, if the patient was given a contrast agent, the student sends him or her to the infirmary for a period of observation for possible adverse reactions, rather than telling him or her to go home or back to the ward.

Our conclusion about CT Trainer

We strongly believe that this tool, which will soon go live for the actual training of future X-Ray techs at UniUD, will be able to mitigate and potentially solve many of the pain points described above.

The point is not to replace the mandatory and fundamental internship with real patients, whose human value cannot be replicated, but to enhance it. We hope this virtual simulation tool will help shape even more human-centered healthcare professionals, able to deliver ever higher standards of healthcare quality and patient safety.

For more information on CT Trainer or on the development of virtual simulation environments, contact us on https://molo17.com/.

Smart working & Network Failover during COVID-19

#iorestoacasa ("I'm staying at home"): today it is one of the fundamental rules to follow to avoid COVID-19 contagion. Given these directives, smart working is the simplest solution that companies and professionals can adopt to keep working outside the office, where possible, and ensure business continuity.

In this article Marco Cosatto, DevSecOps at MOLO17, explains how to connect an emergency failover LTE gateway to the main router using VLANs, so you can work from home.

Marco Cosatto’s home desk

We will see how to connect an emergency failover LTE gateway to the main router using VLANs during the COVID-19 outbreak, keeping an eye on aesthetics with a 3D-printed dock. Let's see how to stay connected and work from home in complete safety.

#IORESTOACASA – the hashtag used by workers in smart working in Italy during the pandemic

As you might have noticed, here in Italy the outbreak took its toll.
And as you can imagine we, as MOLO17, have activated the business continuity plan and immediately switched to smart working.
We gladly embraced the #IORESTOACASA (that is, "I'm staying at home") movement. As you may have seen from our Instagram stories, our team is working from their home desks.

Gallery: Instagram stories of the MOLO17 team working from their home desks.

Some of us were more prepared than others. I, for example, without any pretense of being a fully fledged prepper, was quite well prepared for this, at least as far as connectivity goes.
If you'd like to know what I mean, you can have a look at my series of posts.

I have always believed that home is where the WiFi auto-connects. As you can imagine, during this exceptional explosion of smart working, internet connectivity is heavily taxed by the number of people using it. Even though I already have two internet uplinks at home, I wanted to ensure my business continuity at "prepper" level.

LTE/4G connectivity as Network failover option for smart working

As promised in a past article, I will show you how I turned my third uplink into an "absolute failover" connection.

It's an LTE/4G connection, obviously metered. I use it only when the FTTC and the WIMAX lines go down simultaneously, as explained before.

When I'm not home, I can take the device with me and use it on the go.

Network failover with Netgear Nighthawk M1 LTE router

What am I using? A high-performance LTE router by Netgear, the Nighthawk M1. Now, as you know, I'm not a fan of COTS devices, but this thing includes everything I needed: a long-lasting battery, an LTE CAT16 chip (which can potentially deliver 1 Gbps / 150 Mbps download/upload speeds) and an Ethernet port. The Ethernet port is what I needed most: the device brings the hardware, and I bring the professional networking features when at home.

During this outbreak I'm staying at home, and I really want my internet uplink to be as resilient as possible for my smart working sessions, so the device is always plugged into the network as an uplink.
How did I make it work as an uplink to my existing router/firewall?

VLANs instead of cables

The first problem is that on the ground floor of my house there is barely any LTE signal, yet that is where the firewall and the main uplink are located.

On the first floor, where my bedroom is, there is a switch that distributes the wired network for that floor. It would be a perfect place to plug it in, but since the main router is on the ground floor and I didn’t want to route more cables down the walls, I just did it with a dedicated VLAN.

Topology Diagram VLANs


The main router is connected to the main switch through a LACP bond of four gigabit network interfaces, for both resilience and bandwidth.

VLANs instead of cables: the project 

 Here’s how I did it:

  1. I created a new VLAN interface on the LACP, tagged 200.
  2. I then created the same VLAN on both switches: the main switch and the first-floor switch.
  3. I set all the trunk ports to carry the VLAN as TAGGED in both ingress and egress. The only exception is the port where the Nighthawk M1 is plugged in, which is PVID 200 / UNTAGGED 200, in other words "an access port on VLAN 200", with no other VLAN on it (see the sketch after this list).
  4. I then set the new VLAN interface, with its DHCP client enabled, as an uplink in the router, and configured the proper routes, failover routes and gateway groups.
  5. Since the connection is metered, I created a whitelist of hosts allowed to use it, so traffic is not wasted on frivolous things, especially if, like me, you already have two unmetered uplinks. I can always disable the firewall rule in case of emergency.
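
To make the tagging scheme concrete, here is a small illustrative model of the VLAN 200 layout in plain Python; the device and port names are hypothetical, since real switches are configured through their own UI or CLI.

```python
# Illustrative model of the VLAN 200 layout described above (not a vendor API).
VLAN_ID = 200

ports = {
    # trunk links carry VLAN 200 tagged alongside the other VLANs
    ("main-switch", "lacp-to-router"): {"mode": "trunk", "tagged": {VLAN_ID}},
    ("main-switch", "uplink-to-floor1"): {"mode": "trunk", "tagged": {VLAN_ID}},
    ("floor1-switch", "uplink-to-main"): {"mode": "trunk", "tagged": {VLAN_ID}},
    # the Nighthawk M1 sits on an access port: untagged, PVID 200, nothing else
    ("floor1-switch", "port-8-nighthawk"): {"mode": "access", "pvid": VLAN_ID},
}

def carries_vlan(port: dict, vlan: int) -> bool:
    """A frame for `vlan` can traverse a port if the VLAN is tagged on a trunk
    or if it is the native VLAN of an access port."""
    if port["mode"] == "trunk":
        return vlan in port["tagged"]
    return port["pvid"] == vlan

for key, cfg in ports.items():
    print(key, "carries VLAN", VLAN_ID, "->", carries_vlan(cfg, VLAN_ID))
```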

Finally, I also routed a USB power cable into the cabinet on my bedroom floor, so I can come home, plug the device into the Ethernet and USB-C connectors, and charge it while enabling my third uplink at the same time.

Bonus for paranoid preppers: if you are actually quarantined and smart working on your floor, doing this with VLANs instead of real cables on an already installed network means you won't need physical access to anything other than the floor switch you already have at hand.

COVID-19 shut down shops? Here comes 3D printing 

Now I just need to make the setup polished and nice. There are docks available for the device, but you can't go out and buy one right now: during an emergency like this you don't want to leave the house for trivial reasons, and to limit contagion it is currently illegal here in Italy anyway.

I’m also a maker and I don’t mind using my 3D printer once in a while.

I found a nice model here, that is just perfect for my needs: https://www.thingiverse.com/thing:3891372

Time to fire up Slic3r and here we are:

Gallery: slicing the model in Slic3r, test-fitting the printed dock on the desk, and the final installation.

Conclusions

I hope this article can help you with your home network and with your smart working sessions.  

#IORESTOACASA for the good of all. 

Stay Safe and good smart working!

Office 4.0: security and quality of life

As those following this blog will remember, among the main events of last year was the relocation of MOLO17's facilities and the consequent design and construction of our new 4.0 offices. On this topic, we also had the opportunity to take part in the dedicated "Office 4.0" event on 11/27/2019, organised by our local technology hub (Polo Tecnologico di Pordenone). Below is the video of the talk.

What is a 4.0 office

The reference to the concept of Industry 4.0 is clear: what was done for the manufacturing sector, and for the secondary sector in general, was to connect and computerize production facilities and machines, together with their ancillary systems. This interconnection, mediated by industrial IoT technologies, made it possible to gather and correlate large amounts of data that were previously almost completely disconnected from (even if not completely ignored by) the decision-making process and what are now its critical elements.

Analogies with Industry 4.0

Nonetheless, if we think about the entire flow of information, it is clear that the circle is not complete without digitising the "back-office" side of the company and linking it to these data flows. Imagine, for example, stock monitoring and the corresponding flow of purchase orders when a component runs out: it would be fantastic if warehouse operators could send the order straight to the purchasing office. However, if the operational flow of the purchasing department is not at the same level of automation and computerisation, it ends up acting as a bottleneck, if not an outright "hole" in the data collection process, which is instead a fundamental tool for decision making.

From factory to desk

Furthermore, even purely tertiary-sector businesses, such as service or consultancy companies, will certainly benefit from integration and data collection in support of decision-making functions, while management control, as well as internal information flows, will become far more effective. Quite simply, it is a matter of bringing the integration and correlation experience already gained in industrial production to the desk.

Improving the employee's quality of life

Maintaining the parallel with industrial automation, the integration of IoT tools in offices can be, if done with the right criteria, an excellent way to improve the efficiency and quality of work, as well as the employee's quality of life. Functions such as adjusting the lighting to match external brightness, or switching all systems off automatically when the last employee is no longer detected inside the building, exemplify the environment's ability to react autonomously to the metrics it collects. They all demonstrate how it is possible to simplify and improve the management of an office headquarters, both in terms of employees' quality of life and, above all, in terms of its running costs.

You can read the entire story of the design of our office in this article. Now, however, I would like to spend a few words on the security implications that these tools carry.

Security issues in Office 4.0

Security and quality of life

Keep away from commercial off-the-shelf (COTS) components

The first security consideration is to avoid COTS IoT devices as much as possible (which I personally try to avoid or isolate even at home, as I highlight in these articles). As in industrial IoT, the use of consumer devices in the workplace does not provide the guarantees of security and reliability typical of professional tools.

3 points to keep in mind

In fact, as with every IT tool designed according to security criteria, it is really important to define at least three elements during this phase:

  1. security perimeter;
  2. security context;
  3. model of the potential threat(s).

Moving a device from the domestic environment to the workplace significantly changes all these elements, potentially rendering even the greatest design effort toward compliant security parameters useless. And that is without counting the inclusion of yet another IT system in a connected context such as the workplace, a context that should undergo the same security assessments made in the design of the equipment itself, but applied to the entire office system by an expert professional.

Cloud or on-premises?

Another false belief that often circulates among SMEs concerns cloud solutions compared to on-premises ones.

This can easily be summarised by the sentence: "I don't want my data in the cloud, because I want to know where it is, so I will keep it here in my house".

However, it is quite difficult to run an ISO 27001-certified datacenter on premises when you are an Italian SME. Potentially, with proper configuration and design, your data is safer in the hands of a cloud provider.

In our case, seizing the opportunity offered by the move of our corporate headquarters, we opted to migrate all our infrastructure to the cloud rather than physically moving it from one location to another.

Complete migration to the Cloud

The rationale that led us to this choice was carefully weighed and its winning points were mainly:

  • hardware management is not our core business, and these activities diverted resources from core projects, causing a significant expenditure of energy, unforeseeable interruptions of development work for hardware interventions, and everything else that comes with running your own hardware;
  • the TCO, however high, still turned out to be lower than that of an on-premises datacenter;
  • resilience and the ability to rebuild the system, even completely from scratch, are unparalleled compared to an on-premises datacenter: each AWS availability zone is backed by a cluster of datacenters that is switched over automatically in case of problems. Manually, or automatically through some technical measures, the entire infrastructure can be restored in a new availability zone or a new city within minutes, either from backups or from scratch using the project template;
  • several security considerations, including:
    • physical access to our “datacenter” is objectively impossible, even by ourselves;
    • the only access to the datacenter is a VPN tunnel with the VPC (group of virtual subnets to which the virtual servers in the cloud belong), protecting data in transit;
    • disks of remote machines are encrypted with a key of which we have a copy, kept in a safe place.

This level of security, ease of management and resilience is really difficult to achieve at the same costs with an on-premises datacenter.

Weak points?

The weak point, if we can call it that, is the need for a stable connection to the cloud, almost certainly backed by a second failover connection.

A look at what we have “on field”

On the "ground", as opposed to the cloud, a great deal of attention has to be paid to network design, since this is the fundamental asset you will be working with.
In our headquarters, at the infrastructure level, we have a network mainly based on Ubiquiti and WatchGuard, with high-density access points, each equipped with a spectrum analyser independent of the radio used for communications and with an on-board IDS.
This reflects our flexible and mobile way of working: access to the network is mainly wireless, and it must nevertheless guarantee adequate levels of security and reliability.

Solutions for remote working

We then created a cloud VPN concentrator that allows access to the servers from anywhere in the world, while ensuring much higher quality and availability of access to business systems than is normally guaranteed by SME datacenters and connectivity.
This fits extremely well with the fact that MOLO17 has fully remote employees, staff who work from home for most of their hours, and similar arrangements. Resilient, secure and fast remote access to company servers is therefore essential for this type of working relationship.

Remote working

Managing Mobile and Edge

To facilitate security and client management in this setup, we have implemented various organizational and technical measures, for example:

  • the use of an MDM solution, specifically Cisco's Meraki, to push configurations to company computers and to assess compliance with security policies in real time;
  • mandatory full-disk encryption, enforced by the MDM itself, to protect company data;
  • the use of a cloud solution, Google's G Suite, for corporate email and centralized authentication: we access all corporate services through G Suite accounts;
  • as company phones, we use Apple devices, also enrolled via DEP, supervised and managed through the MDM;
  • for employees who want to use their own phone, we have implemented "bring your own device" via Google for Work, enforced through the MDM. When you try to log in to your account from an Android device:
    • a work profile is created on the employee's device. This profile is an isolated container that the employee can deactivate at any time;
    • the MDM allows its creation, and the consequent access to company data, only if the phone has not been tampered with (e.g. rooted and/or running a custom ROM) and has full-disk encryption enabled;
    • in the event of theft or loss of the phone, or when the employee leaves the company, we can delete the container remotely at any time. On the other hand, whatever is outside the container is opaque to us, protecting the user's privacy;
    • we can ensure data compliance by preventing data from being copied out of the container.

It does not end here

In our search for flexibility we have not stopped there. We have also moved our telephone switchboard to the cloud, specifically with 3CX.

In-Cloud VoIP PBX

With the in-cloud VoIP PBX, all employees have their own internal extension number.
Except for special uses, we do not use landline phones; instead, we use an app on our Macs and/or mobile phones.
We can of course disconnect from the switchboard at any time, divert calls and use the other functions classically available on any PBX.

Access with digital key

At any time, technically even in the middle of the night, employees can enter and leave the building using Bluetooth keys held in an app on their phones or in their work profile. Access is recorded and the alarm is deactivated.

This allows you to work flexibly and at the most appropriate times for any projects that the various teams are following. Of course, we can invalidate any employee’s electronic keys at any time.

Is anybody in?

The building is able to react to the presence or absence of its "residents": for example, by observing the WiFi clients it can determine when the time has come to switch off the lights and activate the night alarm.
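
As an illustration only, the logic can be sketched like this; count_wifi_clients(), lights_off() and arm_alarm() are hypothetical placeholders, not the actual UniFi or building automation APIs we use.

```python
# Hedged sketch of the "is anybody in?" logic: poll the number of associated WiFi
# clients and, when the building looks empty after hours, run the night scene.
import datetime
import time

def count_wifi_clients() -> int:
    """Placeholder: in reality this would query the WiFi controller for associated clients."""
    return 0

def lights_off() -> None:
    print("Scene: all lights off")     # placeholder for the building automation call

def arm_alarm() -> None:
    print("Night alarm armed")         # placeholder for the alarm system call

EMPTY_POLLS_REQUIRED = 3               # require a few consecutive empty readings before acting

def watch_presence(poll_seconds: int = 300) -> None:
    empty_streak = 0
    while True:
        after_hours = datetime.datetime.now().hour >= 20
        if after_hours and count_wifi_clients() == 0:
            empty_streak += 1
            if empty_streak == EMPTY_POLLS_REQUIRED:
                lights_off()
                arm_alarm()
        else:
            empty_streak = 0            # someone is around: reset the counter
        time.sleep(poll_seconds)
```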

A win-win choice

Security in 4.0 offices can, and in my opinion must, be perceived as a tool with a double potential.

The most heartfelt example in the company is perhaps the implementation of the MDM for all devices and the Work Profile on Android: not only do these countermeasures significantly improve endpoint security, they also give the user a positive return that improves the balance between working and personal life.

Route to a better office

I firmly believe that Office 4.0 is an unmissable opportunity to change the image we commonly have of IT security procedures and tools.

First of all because we move from considering them sources of frustration to seeing them as tools that, in addition to making us feel safe, can improve how we experience the working environment, with a strong payoff in quality of life for every employee.

The ROI of the solution

Lastly, all the costs incurred, usually perceived as unnecessary by management, become opportunities to lower the TCO of IT infrastructure and beyond. This also makes it possible to optimize company procedures, with all the positive consequences that follow.

Building Home Networks Like a Pro – 2 – The Router: Initial setup

The first thing you are going to improve in your network is the router, since in a small business network, as well as in a home network, it will be one of the central components, offering basic and sometimes quite advanced network services.

Since it is a bastion host, don't plan on also using it as a NAS: that is a very bad security decision. NAS devices are not designed to withstand attacks, while router and firewall devices are designed to do exactly that. Security and file sharing do not mix well.

In a home network, you will probably be making decisions to improve the performance of entertainment services and online games, while shaping down traffic from file sharing and the like. In a way, as said before, it is a mirror world of business applications.

Don’t expect this to be as cheap as a COTS device, but also don’t expect this first project will break the bank.

Choosing the software for our Router

As the firewall-router distribution, I have chosen PFSense as the operating system for our router. Alternatively, you can use OPNSense, a spinoff of the project.
Nothing against either of them: PFSense has been on the market for a long time and I have quite a few projects using it under my belt. OPNSense is very promising and very modern, I really liked it in the first tests I carried out, and I will give it a proper try as soon as I can. PFSense, on the other hand, is something I have used for a long time in production environments, so my personal experience with it is far deeper. Most of this tutorial would be similar with OPNSense, so feel free to give it a spin.

The Hardware

Netgate, the current owner of the project, makes some fine hardware devices for PFSense routers, with the software preinstalled.
You COULD just buy one of those, sure, but where would the fun be in that!?

The finished product

We will be building our firewall-router from some very nice embedded hardware. There are many devices and boards on the market that can run PFSense, even an old PC you tossed in the attic. Just make sure you have a decent number of supported network cards, say 3 or 4.

I built my home device on a PCEngines board. There are some kits on a fine website, VARIA STORE, that work perfectly for PFSense-based routers. By the way: if you are rich or paranoid, or both, you can buy two and build a very resilient redundant setup with minimal effort, but that is beyond the scope of this tutorial.

The router part list

  • the board (like an APU3C4 with 4gb of ram)
  • an SSD; a 16 GB mSATA drive is fine. PCEngines boards do support booting from SD cards and you could install PFSense there, but don't use an SD card as the root drive. Really, don't do that.
  • a power supply
  • a chassis
  • a temporary usb drive to store the install image
  • (usually very optional) an LTE/4G router or modem with ethernet port, or go for the internal add-on board for APU3C4, but I’ve never personally tested it.
  • (very NECESSARY) a serial to usb adapter with an RS232 female connector (or an adapter)

WHAT? Serial? Yes, what’s the point of an embedded system if you have a monitor connector? 😀

Now just assemble the hardware. Beware that some kits have heat-sinking plates that use the chassis as the final heat-sinking element. Don't forget to fit them and remove their protective sleeves, since this will be a fanless device.

When it is assembled, just download the appropriate PFSense boot image.

Installing the software for our Router

You will need an AMD64 memstick/ISO image. NanoBSD images, even if they seem to imply that they are made for this kind of hardware, are now deprecated. Don’t use them.

To download an appropriate image for the described setup use the following options:

Download dialog for the router OS image

Setup 101

Just flash it to the USB drive using Win32 Disk Imager, dd or a similar tool; just follow the official guide.

Once it is flashed, plug it into the USB port on your soon-to-be new router.

Connect the RS232 cable between your computer and the new router using the adapter, and fire up minicom, PuTTY or another serial terminal client of your choice. Set the COM port to 115200 bps, 8-n-1.

Plug the power supply into the device and you will see a text bootloader. Quickly send the appropriate key to choose the boot device and select the USB drive. If no boot screen is shown, remember that some APU boards have a default serial speed of 38400 bps set in the BIOS/coreboot, so change your serial settings to match. As soon as the bootloader loads the PFSense kernel, the speed switches back to 115200 bps, so correct that again to interact with the serial console.
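
If you prefer a scriptable client to minicom or PuTTY, here is a minimal sketch using pyserial that handles the two speeds described above; the device path is an assumption and depends on how your USB-serial adapter enumerates.

```python
# A minimal, illustrative serial console helper using pyserial (pip install pyserial).
import serial

PORT = "/dev/ttyUSB0"   # hypothetical; on Windows it would be something like "COM3"

# Coreboot on some APU boards talks at 38400 bps...
console = serial.Serial(PORT, baudrate=38400, timeout=1)
print(console.read(256).decode(errors="replace"))

# ...while the PFSense bootloader and kernel switch to 115200 bps, 8-N-1 (pyserial's default framing).
console.baudrate = 115200
console.write(b"\r\n")                      # nudge the console to redraw its prompt
print(console.read(256).decode(errors="replace"))
console.close()
```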

The installation will take place almost automatically, just accept the default for everything.

Shut down, remove the power, remove the USB drive and serial adapter, and connect your PC to the middle Ethernet port, with its DHCP client enabled.

At some point you will hear a jingle from the PC speaker, and DHCP should assign you an IP address.

Now, just fire up a browser and connect to https://192.168.1.1 and login with admin/pfsense.

Adding uplinks

In this kind of setup I will guide you through adding the uplinks first, so you can make sure they work. That way, if something breaks later while defining VLANs and the like, you will know you broke it, not your ISP. I know this is quite the reverse of what is usually done professionally, but in a non-pro environment it can save you a lot of pain.

Basic PFSense router options

When you first login to your new PFSense appliance, you are greeted with a wizard.

You will see that you already have two network interfaces configured, a LAN and a WAN. The WAN is configured to get its address via DHCP, while the LAN has a DHCP server enabled. If you plug the operator's CPE into the first Ethernet port on the left, you will probably already be able to surf the web.

Let’s give the interfaces proper names now.

Let’s add some more WANs

Add the next network port available (should be em2) like in the slideshow below.

Repeat until you run out of uplink services or ports on the appliance, excluding the LAN port, obviously.

Now go to advanced in the system menu. Choose the Miscellaneous tab.

Activate Use sticky connections and set the parameter to 3600 seconds

What "use sticky connections" does is simple. Load balancing works by sending connections out of the active uplinks in round robin. With this option on, once an internal host goes out through a certain gateway with its first connection, it is guaranteed to keep using that gateway for all subsequent connections, for at least 3600 seconds (or whatever value you set, expressed in seconds). Why? Because without it, from the contacted host's perspective you would appear to be constantly switching IP address, which would frequently invalidate your HTTP sessions, log you out of services, and so on. This holds at least as long as the link doesn't go down; more on that a few lines below.
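
Here is a small illustrative sketch of the behaviour just described (round robin plus a sticky binding with a timeout); it only mimics the idea and is not how pf implements it.

```python
# Illustrative sketch of round-robin load balancing with "sticky connections":
# the same internal host keeps using the gateway it was first assigned to,
# for a configurable time window.
import itertools
import time

STICKY_SECONDS = 3600
gateways = ["FTTC", "WIMAX"]              # active uplinks in the balancing group
rr = itertools.cycle(gateways)            # plain round robin over the gateways
sticky = {}                               # host -> (gateway, timestamp of last use)

def pick_gateway(host: str) -> str:
    now = time.time()
    gw, last_seen = sticky.get(host, (None, 0))
    if gw is None or now - last_seen > STICKY_SECONDS:
        gw = next(rr)                     # new host (or expired sticky entry): round robin
    sticky[host] = (gw, now)              # refresh the sticky binding
    return gw

for host in ["laptop", "nas", "laptop", "phone", "laptop"]:
    print(host, "->", pick_gateway(host))
```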

What controls that is this:

Enable this option for a faster reorganisation of the connections in case of gateway down

If you choose to enable this, all states will be reset when a gateway is marked as down. This makes computers resume connectivity quickly when a gateway fails, but it resets all states, even those on the still-functioning uplink. The choice is yours, but in my opinion enable it only if your connections are very stable and you need a very quick switchover between failing links. Normally keep it off: the states will time out on their own and new connections will go through the right gateway.

Now that you know how the load balancing works, think about this: if you have two connections that are not equal in bandwidth, the connections will still be spread evenly across both. Can this be changed? Sure. Every gateway has a weight parameter, well hidden in the advanced settings. Click on Show Advanced.

Go to system, routing, gateways and edit the “bigger” gateway

The weight determines how much of the traffic a connection takes when it is used in a gateway group alongside another connection on the same TIER-N. It will make more sense soon, but think of it this way: if your VDSL is 3 times "bigger" than your WIMAX uplink and you balance between the two, set the VDSL weight to 3 and the WIMAX weight to 1. Weights have no effect in pure failover groups.
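
A quick sketch of what a 3:1 weight means in practice; again, just an illustration of the idea, not how pf implements it.

```python
# Illustrative weighted balancing: a 3:1 weight means roughly 3 connections
# go out through the "bigger" uplink for every 1 on the smaller one.
import random

weights = {"VDSL": 3, "WIMAX": 1}

def weighted_pick(weights: dict[str, int]) -> str:
    """Pick one gateway, with probability proportional to its weight."""
    return random.choices(list(weights), weights=list(weights.values()), k=1)[0]

counts = {gw: 0 for gw in weights}
for _ in range(10_000):
    counts[weighted_pick(weights)] += 1
print(counts)   # roughly 7500 vs 2500, i.e. a 3:1 split
```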

Tweaking the DNS settings of our Router

Now it is time to apply the final tweaks: the DNS settings.

Always remember the diagnostic protocol when a problem arises and DNS servers could be involved.

  1. It is not DNS.
  2. There’s no way it’s DNS!
  3. It was DNS.

So please be careful with those settings.

First things first: go to System, General Setup. Add some DNS servers you like, two per uplink, assign two of them to every outgoing connection and take note of which went where, but mix them up so that two DNS servers from the same operator do not end up on the same uplink.

Add more of your favourite public DNS. Be sure to untick the dreaded DNS Server Override below.

Save it and proceed. Now go to Services, DNS Resolver. Configure it this way:

Select all the outgoing interfaces you configured in the appropriate section and leave it listening on ADMIN and on the LOOPBACK. Disable DNSSEC for now and ENABLE FORWARDING mode. Save and apply

Now for the final tweak, in the gateways. Remember when I told you to note down which DNS server went where? Add one of those DNS addresses to each corresponding gateway as its monitor IP.

The monitor ip is set to one of the DNS servers that are assigned in the General settings to that specific gateway

We do this because the default monitor IP is the gateway address itself, which can keep responding even when the line is down, since it is usually a local CPE. The monitor IP matters because it is how the firewall decides whether to include or exclude an uplink from a given gateway group.
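
Conceptually, the health check looks like the sketch below, which pings a per-gateway monitor IP; the addresses are just examples and the ping flags are Linux-style.

```python
# Rough sketch of what the monitor IP achieves: an uplink is considered healthy
# only if a host *beyond* the local CPE answers, not the CPE itself.
import subprocess

gateways = {
    "FTTC":  "208.67.222.222",   # example monitor IP assigned to this uplink
    "WIMAX": "9.9.9.9",          # example monitor IP assigned to the other uplink
}

def is_up(monitor_ip: str) -> bool:
    """One ICMP echo with a 2-second deadline; True if it got an answer (Linux iputils flags)."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "2", monitor_ip],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

alive = [name for name, ip in gateways.items() if is_up(ip)]
print("Gateways usable by the group:", alive)
```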

Gateway groups

Finally, we are almost done!

Let’s define appropriate gateway usage policies.

Go into System, Routing, Gateway groups tab.

It is time to define how we want to use our gateways.
Every group is a potential usage sequence of the gateways, defined as a list of outgoing uplinks in priority order, organised in tiers called TIER-N, where the smallest N has the highest priority.

What happens if you define a group with two connections on the same tier? If that is the lowest-numbered tier with a live connection, traffic is sent out of those links in round-robin fashion. The sticky connections flag you enabled earlier ensures that the same host always goes out with the same IP for at least 3600 seconds. Do not disable sticky connections. You could do so only if your provider somehow lets you keep your IP address across multiple links, for example via NAT on their side, but that is a very rare arrangement even in business scenarios; I have personally never seen it in a home setting.

I usually define at least 3 groups (1 for priority to wan1, 1 for priority to wan2, 1 to balance between the two):

Screenshots: the gateway groups page and the gateway group detail page.

We defined 3 scenarios in the picture above. If we assign traffic to each gateway group, this is what happens (a small sketch of the tier logic follows the list):

  • FTTC_PREF = "Try FTTC; if it's down, go to WIMAX; if that's down too, go to 4G"
  • WIMAX_PREF = "Try WIMAX; if it's down, go to FTTC; if that's down too, go to 4G"
  • BALANCE = "Balance between FTTC and WIMAX; if one is down, use the remaining one; if both are down, go to 4G"
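
As an illustration of the tier logic behind these groups, here is a small sketch that resolves a group to the uplinks actually used, given which gateways are up; it is not pf code.

```python
# Sketch of how a gateway group resolves to actual uplinks: use the lowest tier
# that still has at least one live gateway; several gateways on that tier are balanced.
groups = {
    "FTTC_PREF":  {1: ["FTTC"], 2: ["WIMAX"], 3: ["LTE"]},
    "WIMAX_PREF": {1: ["WIMAX"], 2: ["FTTC"], 3: ["LTE"]},
    "BALANCE":    {1: ["FTTC", "WIMAX"], 2: ["LTE"]},
}

def resolve(group: dict[int, list[str]], up: set[str]) -> list[str]:
    for tier in sorted(group):
        alive = [gw for gw in group[tier] if gw in up]
        if alive:
            return alive          # lowest tier with a live gateway wins
    return []                     # nothing left: no connectivity

print(resolve(groups["BALANCE"], up={"FTTC", "WIMAX", "LTE"}))   # ['FTTC', 'WIMAX']
print(resolve(groups["BALANCE"], up={"WIMAX", "LTE"}))           # ['WIMAX']
print(resolve(groups["FTTC_PREF"], up={"LTE"}))                  # ['LTE']
```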

Gateway groups balance: never use it!

The last thing to decide is which gateway group to use as the default. NEVER use BALANCE as the default: unpredictably weird stuff will happen. Just choose one of the other two; always use a group that gives a strict tier priority. Once you have made up your mind, set the correct group here. Disable IPv6 for now, as always.

Set the default gateway. You will notice that gateway groups are listed as gateways. Think of them as “routing destinations”

Firewall Rules

Now it is finally time to balance connections and packets. Go to firewall rules and select the ADMIN tab.

For many interfaces you will see that several rules are already in place. This is good: they are there to protect you from mistakes and from bad actors.

On the ADMIN interface you will see "anti-lockout" rules in place. You can't remove them from the firewall rule list; you would have to go to the advanced settings to do so, but don't, at least until you are confident with PFSense. If you remove them and lock yourself out while tinkering with the firewall ruleset, be prepared to use the serial cable again.

Create a rule using the up facing arrow, this will position the rule on top of the others.

Screenshots: the rule's general parameters, its source and destination, and the advanced options where the gateway group is selected.

You should end up with something like this when saving, confirming and reloading.

Final rules setup. Don’t use in production, this is just an initial balancing test

Now check the firewall, NAT, outbound tab. It should be set to fully Automatic. Leave it that way for now. It won’t stay long that way, but for now just leave it as it is.

Outbound NAT in automatic mode

Bonus. Click on the PFSense logo on the top left corner to return to the dashboard and add this widget to monitor the gateways.

Screenshots: the widgets gallery and the Gateways widget on the dashboard.

Many widgets will be there tempting you. Do not add too many of them: the hardware you are running on is limited and is best used for network routing, load balancing and firewalling, and some widgets are really resource-intensive.

This is it. Your devices should be balanced 3:1 between your FTTC and your WIMAX, or whatever you connected to your brand new PFSense home made appliance.

The reasons for having multiple uplinks to the router at home

As you can see, I assumed you have more than one internet connection at home while many businesses still don’t have a redundant internet connection.

Am I crazy? I will explain why I’m not. Sometimes a redundant internet connection at home is even more desirable than at the office.

Why you should consider having it

Dedicated business connectivity usually comes with business-oriented SLAs for restoring a faulty link, and link quality and traffic priority are far higher than those of home internet connections. I personally saw MOLO17's main uplink fail twice, totalling a whopping 3 minutes of downtime in one year. Sure, a critical event would take longer to recover from, so a second uplink is always desirable in a business setting, but the quality of service and uptime are excellent even with only one link.

Sure, if you get your commercial operator’s generic offerings for “business”, things usually get worse, but never as bad as with home user-targeted connections. You should always let a professional design your network and buy a “naked” dedicated internet uplink from a pro-grade ISP. By the way, drop a line to our sales team, if you want us to help your business with all this.

Anyway, comparing all this with my home connections, my home suffers something like 30 minutes downtime per connection, per month.

Two connections also means more bandwidth available

Having two connections can also really help with bandwidth if proper load balancing is in place. As I probably mentioned before, I live in a big old house built by my ancestors, where our family has lived for generations. We divided it into separate apartments, one per floor, so we can share resources while maintaining our privacy, but the day my parents discovered Netflix was a hard hit on the total bandwidth.

Stay away from critical events

Also, speaking of critical events and downtime: one day a car crashed into the mini-DSLAM that terminates our VDSL line, and it took a week to get a new one in place. During that week Netflix was sometimes sluggish in our home, but we still had more than decent connectivity, because the second WAN is a WIMAX antenna sitting safely on the roof, three floors away from the crash. Should that go down too, the 4G/LTE connectivity kicks in as a last resort.

What about planning to live in a smart home?

With many services such as IoT, building automation and even security systems relying on it, and with network streaming rapidly taking the place of cable/satellite TV, it is nice to have a backup uplink. Under normal conditions you will still use it to gain some additional bandwidth for Netflix and the like. The internet is quickly replacing other services or, more precisely, wrapping them as the default delivery medium, so I don't feel too strange wanting to ensure that connectivity is almost never lost even (or especially?) at home.

Choosing the second (and third?) uplink

If you are still reading, you either want to know how deep my madness goes or you are starting to believe that you need a second uplink for your house. 🙂

Choosing your second uplink is very simple:

  1. Pick different technologies for the physical medium;
  2. Pick different operators (really different ones, if possible with a different uplink path to the internet).

For example, in my case, the main connection was a wired one that existed for ages. I was running my first 9600bps modem on that same cable. Time passed and we upgraded the line to an ISDN, then to ADSL, then to VDSL, FTTC. We are in a rural area, so there is no FTTH at the moment.

A radio connectivity can be a smart decision

When I started thinking about adding a second line, I went with a radio provider and never considered a second xDSL line, because… well, the car incident made the reason abundantly clear. Most operators share the same cabinets, or at least sit next to one another because they share the same underground ducts. If something bad happens to one uplink, you want the other to be as unrelated to that event, and as far away from it, as possible.

Even if you see two cabinets from two providers on your street, far away from each other, don't assume they are unrelated. The day an excavator down the road breaks the connection for both of them, you will discover what a web of delicate connections unfolds from those small cabinets under our feet. A brief moment of awe for human engineering will quickly be overwhelmed by the sadness of being the lucky paying customer of two offline connections. True story, from a friend who didn't listen to me.

So basically just get a wired and a wireless connection.

What if there’s no way to get a wired connection to our router?

But what if you are so unlucky that no wired internet is available at your house, or its quality is simply not worth it?
(For non-Italians reading this: yes, this still happens an awful lot here).

In that case I suggest getting an unmetered wireless uplink, such as WIMAX/HIPERLAN, plus a 4G/LTE Ethernet modem with what will usually be a metered contract. You would then use the metered line as a pure failover, not as part of a load-balancing group. If you are lucky enough to get an unmetered 4G uplink you could also use it for balancing; just keep in mind that latency is often an issue with 4G/LTE.

In general if you don’t plan to balance because you don’t feel your bandwidth is low, but you only want a failover connection, a good LTE modem or router is generally ok.

Should I consider a third uplink?

Using, as I do, even a third line as an absolute failover is somewhat overkill, I agree. But at some point I will explain how I did it, and you will see that it is a nice feature to have. Just a little spoiler: I have a mobile router that I always carry with me. When I'm at home and dock it in its home-made cradle, it acts as the third failover, while when I'm out it is my main "road warrior" connectivity. But I will cover this in a future post!

Conclusion

Just make sure to follow this blog for the rest of the PFSense tutorial. Next time we will deal with the internal networking.

This tutorial is part of a Series.

The Lighthouse and the Pendulum

or “How we designed and brought to life the electrical brains, nerves and muscles of our new headquarters while the clock was ticking”

A quick video message from the protagonists

The story so far…

It was the late afternoon of a relatively calm day at MOLO17, when Mr. Angeli called me.

The Assessment

“I will send you my position, can you get there now?”

“Sure Daniele, can you give me some more information about the task I’m doing there? Shall I bring diagnostic tools or…?”

“No, just come here, it’s just a quick assessment”

“I’m on my way.”

As I got there, I quickly spotted my CEO's car and parked next to it. As I got out with my tool-laden backpack (I know how fast a "quick assessment" can turn into a major multi-disciplinary security assessment with some pentest-like reconnaissance), I saw Mr. Angeli waving at me while standing in front of a beautiful but very neglected piece of architecture.

The new Headquarters

“Hi Daniele, what is this place?”

“In all probability, you are looking at our new headquarters”

“Wow!”

My expression mixed awe for the beauty of the place with worry about all the marks time had left on the building.

The new HQ, as it looks today

A jump into the ’80s

We entered our new headquarters. It was like passing through a wormhole directly connected to the mid ’80s. Everything, dust and dirt aside, was frozen in time. The fake plastic plants did in fact contribute to that feeling with their perfect appearance despite the years of abandon.

There was no electricity, no heating system, no air conditioning and no internet connection.

Most of the stuff inside looked like it came straight out of a MacGyver or Knight Rider episode, both for the yellowish tone every piece of plastic had acquired with age and for the many old-but-high-tech-for-their-time artifacts, like the glass doors with key card readers and the IBM-branded network sockets mounted in brown connection boxes.

“I guess you want me to estimate and plan the new IT infrastructure, is that right?”

“Not only that. You said some of your past experiences were in building automation. What I would like here is an intelligent building with building automation, easy access at every hour, day and night by employees and all the technology you can imagine to reduce the hassle of managing it. And the most common tasks should be doable on a mobile with specific apps. You think you can do that?”

“That’s for sure Daniele, no problem.”

I always get overly excited about new and unusual projects, let alone being part of building MOLO17's new "home". This usually makes me forget to ask about trivial stuff like deadlines…

State of the Art

I opened one of the network sockets: very old and worn CAT-5 cable, surely cutting-edge at the time. In what had been, and would again be, the server room, one wall was basically covered in ISDN NT1 boxes, a clear sign that intelligent life had existed inside the building after the '80s, though not for long. Various other boxes were riveted to the walls. No trace of a rack or anything else, only a bunch of cables coming out of the wall, probably cut away from a patch panel.

IBM Sockets, with the bunch of cables

The Basement

In the basement there was a big door labeled "research and development". After lock-picking it, the handle being missing, we found the remnants of a small network rack with a whopping 10 Mbps network hub inside (yes, a hub), with coax and AUI uplink ports. On the walls were more ISDN NT1 boxes, along with many fuse/breaker boxes for the electrical wiring.

The electrical system needs to be replaced. Entirely.

Electrical breaker boxes are not my main field of knowledge, but the panels around the building were obviously way too old to be used today according to current regulations. 

“Well, Daniele, this is going to be a lot of work, both for designing the system and to configure it, let alone we are going to need electricians and such to make new electrical and network wiring from scratch.”

“We need to move the headquarters here in 25 days from now. At least basic services must be up and running by that day”.

“So we are going to need a huge workforce or a miracle. Probably both. But I’m confident we are going to get at least one of the two somehow. And we’ll probably also manage to have some fun in the process”

The last time I was part of a team behind a building automation project built from scratch in a new building, it took at least 20 days just to get to the point where all the subsystems were designed and mapped out on paper and every contractor was aware of what had to be done on their side. The whole thing was powered up for the first time almost six months later.

Planning the Miracle

We were in a hurry, apparently, to leave the old headquarters. 

On the way to the soon-to-be-former headquarters, I started working out how to make said miracle happen and how to contain the damage if the miracle turned out to be, even partially, unavailable.

By the time I parked the car, this is what was clear:

  1. All designing work was to be done ASAP
  2. We had no guarantee that the contractors knew every building automation system out there, so the technologies had to be chosen among those that are generally easy to explain to personnel familiar with electrical wiring, and at the same time everything to be implemented had to be something at least two of us were perfectly confident using and configuring.
  3. A clear overview of the final system was mandatory, but it had to be broken down into independent modules, so that if something wasn't ready in 25 days it would be a bummer but not a total blocker.
  4. Power supply, "life support" and the network were the top priorities.
  5. The more I thought about moving physical servers between the two buildings, the more it looked like a recipe for disaster: the switchover time between two different fiber uplinks, dragging tons of metal around in a hurry, the possible security problems of leaving servers unattended during construction and renovation work, not to mention possible damage in transit to hardware and data, IP changes and so on. What better moment to finally complete the migration of all production systems to the cloud and create our HQ "datacenter" before installing a single rack unit in it? Only test systems would remain local. Being test systems, end users would not experience any downtime, no sensitive data would travel on the road, and if a server broke or arrived late it would not be a disaster.

I firmly believe that the last proposition saved us all at a certain point. More on that later. 

Requirements and Materials

I started building the network diagram.

“Daniele, are you really sure that we are going to go with wifi as the main connectivity?”

“Yes, we must be able to use our computers around the new HQ without any kind of tether”

“So basically you want the cables to exist mainly in the datacenter and for specialized tasks. I’ll try to find appropriate products for that, but be prepared: the density of WiFi clients in the open space is going to be, well, high. This means using specialized high-density equipment”

“No problem with that”

Testing the core network infrastructure on a table with Mr. Chinazzi

Wireless-first approach

So we decided to build the system around the idea of "wireless first" as the client access method, and for that reason we opted for Ubiquiti Networks' UniFi SHD APs everywhere, thanks to an implementation that favors high client density. The added benefit of using UniFi equipment is that we can now manage most network settings directly from a mobile phone, PoE switches included.

The UniFi WiFi controller also doubles as a small DVR for cameras of the same brand, with the same added benefit of viewing the footage directly on a phone or any other device that supports the app. So we decided to play along with the convergence the brand strongly encourages and installed their cameras too; we have not been disappointed so far.

Firewall

For the firewall, we went with our classic choice, WatchGuard, due to our experience with the brand, the dependability it has shown through the years, and its tried and tested compatibility in our scenarios.

Access control

“I think we should go with NFC cards for physical access, so everybody can access the premises at any hour they need, as you wanted”

“Ok Marco, but I don’t quite like the idea of cards. It looks old. Isn’t there something that I cannot forget at home?”

“Well, fingerprint scanners, but you know, after the GDPR, biometrics on workers is a risky thing…”

“Something that uses phones, maybe? I mean, nobody forgets their phone at home”

“I’ve never dealt with such a thing, but I believe somebody has already made a product like that, let me check”

“…and for the doorbells?”

“Those will be internal numbers of the PBX that will call a specific ring group or queue during the day and another during closing hours, so you can answer the door even if nobody is there”

“Can we make the door intercom speak and explain that, so the caller won’t be confused?”

“I believe so”

The main gate’s intercom

Cutting-edge Intercoms

We basically ended up with doorbells / intercoms from 2N, with Bluetooth and relay modules, which the PBX uses to play music and speak while you wait to be connected to a local or remote operator after ringing the doorbell. The operator can activate the relays to open gates and doors while looking at you through the integrated camera and/or the other cameras around the building. The Bluetooth module is used to recognize users, whose credentials we can invalidate at will from a database, with time-based permissions (for example, external service personnel can only access the premises at certain times, while full-time employees can access them whenever they want).
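To give an idea of the kind of logic involved, here is a minimal sketch, in Python, of a time-window permission check like the one we run against our database when a Bluetooth credential shows up. Every name in it (check_access, ACCESS_RULES, the credential IDs) is hypothetical and only illustrates the concept; it is not the actual 2N or PBX integration code.

    from datetime import datetime, time

    # Hypothetical permission table: credential ID -> allowed weekdays and time window.
    # Full-time employees get a round-the-clock window, external personnel a narrow one.
    ACCESS_RULES = {
        "employee-0042": {"days": range(0, 7), "start": time(0, 0), "end": time(23, 59)},
        "cleaning-crew": {"days": range(0, 5), "start": time(18, 0), "end": time(21, 0)},
    }

    def check_access(credential_id, now=None):
        """Return True if the credential is known, not revoked, and inside its time window."""
        now = now or datetime.now()
        rule = ACCESS_RULES.get(credential_id)  # revoked credentials are simply removed
        if rule is None:
            return False
        return now.weekday() in rule["days"] and rule["start"] <= now.time() <= rule["end"]

Revoking a credential, in this model, is just a matter of deleting its row, which reflects the “invalidate at will from a database” behaviour described above.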

Access gate

We also integrated every access gate, via HTTP APIs, with the building automation functions, so we can use a gate as a controller accessory inside time- or condition-based scenes: for example, opening the gate at 08:30 on working days so that the workforce finds it already open at peak hours, and closing it back at 09:15 when most people are already inside, both for ease of use and to save power and wear on the electric motor.
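As a rough illustration of such a time-based scene, here is a hedged Python sketch. The endpoint URL and the open/close paths are invented for the example (the real gates simply expose HTTP calls that the controller consumes), and in production this logic lives inside the automation controller’s scheduler rather than in a standalone script.

    from datetime import datetime

    import requests

    GATE_API = "http://gate.example.local/api"  # hypothetical gate controller endpoint

    def set_gate(state):
        # Assumes the gate exposes /open and /close paths; adjust to the real API.
        requests.post(f"{GATE_API}/{state}", timeout=5)

    def gate_schedule_tick(now=None):
        """Open the gate at 08:30 and close it at 09:15 on working days."""
        now = now or datetime.now()
        if now.weekday() >= 5:  # Saturday / Sunday: leave the gate alone
            return
        hhmm = now.strftime("%H:%M")
        if hhmm == "08:30":
            set_gate("open")
        elif hhmm == "09:15":
            set_gate("close")

    # gate_schedule_tick() is meant to be invoked once a minute by the scheduler.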

Turn-on the Lights

“Daniele, are you still of the idea of a full fledged building automation?”

“Maybe, but we can always implement it later”

“I beg to differ: if you want to control every LED light in the building with IoT / building automation, it is now or never. I know the building must be completely rewired, electrical parts included, so this is the perfect moment. The wiring for building automation is somewhat different from what they are going to install now, so you would end up rewiring everything again in the future”

“I see… Have you already found a supplier for the building automation parts?”

“Yes, I sent you a document with the part list some minutes ago…”

Daniele checked the list. “Ok then. Order what’s needed. What technology are we using?”

“Z-Wave: basically every relay is a node of a radio mesh, with a “special” node acting as the controller, which will be placed in the server room”

“Marco, please make sure that nobody has to get up from their desk just to turn on the light in their room ok?”

“Ok… that’s a bit weird but I think we can manage…”

“Also, I don’t really want to see those ugly IR remotes for the air conditioning. Come up with something else.”

“Ok… I think I just saw something that you might like… What about a big red button that controls everything in the room? You press it once for the light, twice for the air conditioning, three times for something else, maybe… You can have up to five press sequences per button”

“Is it wireless?”

“Sure, we already agreed on the “wireless first” principle!”

The Big Red Button

A box of about thirty Z-Wave relays, a Fibaro Z-Wave controller, some IR blasters and a bunch of motion/light/temperature sensors and door-opening sensors, along with The Buttons, quickly arrived at the soon-to-be-former HQ, just in time to reach the electricians at the new one. I jumped in the car to bring the stuff to them.
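For the curious, the multi-press behaviour Daniele and I discussed above boils down, on the controller side, to something like the following. This is only a hedged sketch of the idea; the real dispatch happens in the Z-Wave controller’s scene engine, and the action names are invented for illustration.

    # Hypothetical mapping of press counts to room actions ("up to five press sequences").
    PRESS_ACTIONS = {
        1: "toggle_lights",
        2: "toggle_air_conditioning",
        3: "toggle_third_scene",  # left free for future ideas
    }

    def on_button_event(press_count):
        action = PRESS_ACTIONS.get(press_count)
        if action is None:
            print(f"No scene bound to {press_count} presses")
            return
        print(f"Running scene: {action}")  # in reality: trigger the controller's scene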

Bad news and the avoided failure

As I got there, some bad news was waiting.

“We measured the runs from the gate to the server room and to some other places that are on the diagram here” – the installer showed me my own diagram with his notes – “the existing conduits are very, very long from here to there, I believe way too long for network cables”.

“Yes, 150 meters is way too much for a PoE cable: copper Ethernet runs top out at about 100 meters”.

I opened my laptop, quickly updated the network diagram to add a second network rack in the basement and showed it to the installer. That seemed to work.

“Daniele, we need to order another PoE switch and another rack with accessories, but just a small wall-mounted one in this case”

“Why is that?” 

“We just found out that the runs from the server room to the most distant nodes are much longer than we imagined from the outside; the conduits take some weird routes from the server room to the gate”.

This is the problem with old buildings: the conduits were not designed for your wiring, so you have to design the wiring and then adapt it to the conduits.

Then more bad news arrived.

“It’s a long way to the fiber if you wanna rock and roll”

FTTH cabling
Direct FTTH link to the server room, about 4 km

“There is no internet connection in the area; to route the fiber connection to the new HQ, the internet provider needs to proceed with a very long excavation, but the permits from the Municipality take time to be granted. The fiber probably won’t be there in time”

“That could be a big problem. I will come up with something, don’t worry Daniele.”

No fiber… no party! But there’s plan B

And this is where “condition 5” of the guidelines I gave myself probably saved us all: no fiber means no public IP, and no public IP means production systems offline, had they been moved to the new HQ instead of the cloud. Since we had migrated everything to the cloud, including the ERPs that run on Windows (converted to terminal servers) and our PBX, and our main storage had moved to Google Drive, we just needed an uplink to our brand-new concentrator in front of our private cloud, while client-facing services were all already there long before the move.

As soon as the main network rack came into existence, I fetched one of our spare mobile data SIM cards, an LTE/4G USB adapter and a couple of high-gain LTE antennas, and plugged everything into the Firebox. Then I created an application control ruleset that only allowed critical services through the USB device, and there we were, online, albeit slowly, way before the official opening.
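The ruleset itself is WatchGuard configuration, so there is no point reproducing it here; conceptually, though, it boils down to an allowlist over the metered LTE uplink. The sketch below only illustrates that idea with invented service names and is not Firebox syntax.

    # Conceptual allowlist for the temporary LTE uplink (illustration only).
    # Anything not listed is dropped, so updates, streaming and bulk transfers
    # cannot saturate the 4G link.
    CRITICAL_SERVICES = {"dns", "vpn_to_cloud", "voip_sip", "terminal_server"}

    def allow_over_lte(service):
        return service in CRITICAL_SERVICES

    for svc in ("dns", "windows_update", "voip_sip", "video_streaming"):
        print(svc, "->", "allow" if allow_over_lte(svc) else "deny")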

Good News everyone!

The project was starting to take shape in the physical realm. As the electricians installed each group of relays, I followed them, adopting the devices in the integrator. All in all everything went smoothly, with only some minor mishaps, like when a load that was too high was plugged into a relay, melting the solenoid inside it in a cloud of gray smoke.

Network cables were all in place and the APs started to appear around the HQ, broadcasting their SSIDs. Finally we were online, at least for essential services.

Controlling temperatures

The HVAC system was installed just after that and, with the help of the IR blasters, it was integrated into the building automation controller to allow scheduling and reactions to outside events.
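As an example of the kind of “reaction to outside events” this enables, here is a hedged Python sketch. The sensor reading and the IR blaster call are placeholders for the controller’s own device APIs, and the threshold and setpoint are example values, not our actual configuration.

    # Illustrative automation rule: start cooling when it gets hot outside.
    COOLING_THRESHOLD_C = 27.0

    def hvac_rule(read_outdoor_temperature, send_ir_command):
        """read_outdoor_temperature and send_ir_command stand in for the controller's APIs."""
        outdoor = read_outdoor_temperature()
        if outdoor >= COOLING_THRESHOLD_C:
            send_ir_command(zone="open_space", command="cool", setpoint=24)
        else:
            send_ir_command(zone="open_space", command="off")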

Since Tomorrow.

We were always on schedule, albeit on the edge of it, thanks to careful planning, help from many volunteers among MOLO17’s staff and some fantastic contractors.

We are still implementing new technology and fine-tuning the existing automations: for example, as of a couple of days ago the whole system is HomeKit-enabled, with a custom gateway built from open-source software, custom plugins for it, an embedded APU2 board and our sweat and blood.

That is one of the serious advantages of undertaking such an endeavour with in-house resources in an IT company: you get it exactly the way you want it. I’ve seen tons of home and building automation projects go terribly wrong for the opposite reason: someone sells you the building automation package, the contractor installs it, and that is what you get, frozen in time, without any chance of improvement or update over the years, to the point that COTS IoT devices often surpass it in features by the time the contractor has finished installing it.

Instead, here we will improve it, day by day.
This building is, and will stay, alive and evolving with us, with cutting-edge technology.

Building Home Networks Like a Pro – 1 – Planning Risk Zones

This tutorial is part of a Series.

As you would do for any kind of project, you should always plan ahead: having a clear overview of what you want to achieve, down to the smallest detail, will limit the “whoops” moments.

Home network policies: the Mirror Universe

As a network professional, my first attempts to configure my home network by sticking to the books and best practices were failures. The problem was simple: even though I am a professional and I was using professional-grade equipment, a home network is not a business-oriented network.

It can all be summed up as the Mirror Universe, where everything is upside-down. You won’t want to block online gaming, you’ll want to prioritize it. The same applies to Netflix over other HTTPS sessions. You’ll want your iDevice to be able to talk to the AppleTV on the same broadcast domain, you won’t want to isolate it as you would do with BYODs. And so on.
Since professional equipment is built with offices in mind, you will sometimes end up with very weird setups, where the settings in some sections of the configuration interface are the exact opposite of what you would do in an office scenario. And sometimes you will have to improvise with a couple of dirty tricks.
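To make the inversion concrete, here is a small, hedged sketch of what a home “mirror universe” priority map might look like next to an office one. The traffic classes and the numbers are invented for illustration; they are not any vendor’s QoS syntax.

    # Higher number = higher priority. Purely illustrative values.
    OFFICE_PRIORITIES = {"voip": 7, "business_apps": 5, "generic_https": 3, "gaming": 0, "streaming": 0}
    HOME_PRIORITIES = {"voip": 7, "gaming": 6, "streaming": 5, "generic_https": 3, "bulk_updates": 1}

    for traffic_class in ("gaming", "streaming", "generic_https"):
        print(
            f"{traffic_class}: office={OFFICE_PRIORITIES.get(traffic_class, 0)}, "
            f"home={HOME_PRIORITIES.get(traffic_class, 0)}"
        )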

First things first: what services are you and your family / cohabitants / guests going to use, and what services are you going to serve to them?

Some quick ideas, some of which I will cover for you in this series:

  1. A border gateway/firewall that deserves those definitions (which your COTS uber-overpriced-gamer-router is not, from my perspective), with IDS, multiple network interfaces/VLAN support and such
  2. Prioritized online games / VoIP / video streaming services
  3. DNS caching and filtering for faster browsing as well as for your privacy and security
  4. HTTP(s) caching proxy for faster internet and less download traffic on the line, as well as for bad content filtering
  5. A family friendly subnet for your kids where all filtering takes place
  6. Decent WIFI solution, and by decent I mean a centrally managed WIFI network with a controller
  7. 4G / WIMAX / VDSL failover and balance
  8. Properly isolated and regulated guest network
  9. Local Video archive and streaming to smart TVs and set-top boxes (or smarter things like Chromecasts, AppleTVs, XBMCs and such)
  10. Audiophile-grade-capable multiroom music distribution system
  11. Security Cameras
  12. Home automation, the safest possible way
  13. VPN link between you and your best friend’s house
  14. Off-site backups of your and your friends’ important documents and photos between your servers over VPN

While you are thinking about what you could achieve with your future network, are you sure you know the basics? I strongly recommend my colleague’s great TCP/IP essay here!

First off: I love VLANs. Really. They are one of the best things pro equipment can offer you. Some of you are scared of them or see them as a nuisance when plugging stuff around the house. I promise you this: if you plan out your network with proper reasoning, you won’t even notice them.

Start by grouping your services and clients by risk zone. How dangerous would a compromise of a certain device be? How would it impact your home network and privacy? What if someone could use it to pivot around and access the other devices in the same VLAN? These are the main questions that will help you create your list of VLANs and devices. For a normal household you will probably end up with something like this (a sketched example plan follows the list):

  • Admin network: mandatory, where you will expose all the configuration interfaces of all your core devices (switches, routers and so on)
  • Computers, clients and media devices: mandatory, this is the main network, where your internal WIFI will go. Purists will object that media devices should be on their own VLAN, but the problem is that many media devices (I’m looking at you, Chromecasts, AirPlay nodes, Roon nodes and so on) make heavy use of discovery protocols based on the broadcast domain, such as Bonjour / Avahi. This means that if you split them from your cellphones, they simply become paperweights. What I suggest you do not connect to this VLAN are smart TVs and COTS IoT things: they have such a history of attack vectors that they require a VLAN of their own.
  • Internal Untrusted Things: kinda mandatory. I put here the things that I do not trust, like COTS IoT scales, toothbrushes, air conditioner controllers. Basically, all the things that just need to phone home to be available in the relevant app or cloud. There is no need to give them the ability to exchange data inside the network.
  • Building automation: I’m talking pro-level building automation (like Z-Wave / KNX controllers), not COTS IoT things. They deserve their own isolated network, since they are connected to critical systems in the house
  • VoIP: if you want a dedicated intercom + landline phone setup, possibly with WIFI doorbells, you’ll want them on a dedicated VLAN, both for security and for switching / routing priority assignment with QoS and traffic shaping.
  • Security: if you want a set of IP CCTV cameras and/or you have an internet-enabled alarm system, this is totally mandatory.
  • Guest network(s): last but not least, one of the most important VLANs, the one to share with your friends, because “home is where your WIFI auto-connects”. Why the plural? You will find out in the next episodes.
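As promised above, here is a minimal sketch of how such a risk-zone plan can be written down before touching any hardware. The VLAN IDs and subnets are example values chosen for readability, not a recommendation.

    # Example risk-zone plan for a normal household (hypothetical IDs and subnets).
    VLAN_PLAN = {
        "admin":               {"vlan_id": 10, "subnet": "10.0.10.0/24"},
        "clients_and_media":   {"vlan_id": 20, "subnet": "10.0.20.0/24"},
        "untrusted_things":    {"vlan_id": 30, "subnet": "10.0.30.0/24"},
        "building_automation": {"vlan_id": 40, "subnet": "10.0.40.0/24"},
        "voip":                {"vlan_id": 50, "subnet": "10.0.50.0/24"},
        "security_cameras":    {"vlan_id": 60, "subnet": "10.0.60.0/24"},
        "guests":              {"vlan_id": 70, "subnet": "10.0.70.0/24"},
    }

    for zone, cfg in VLAN_PLAN.items():
        print(f"VLAN {cfg['vlan_id']:>3}  {cfg['subnet']:<15}  {zone}")

Keeping the plan in a file like this (or in a spreadsheet) also makes it much easier to keep firewall rules and DHCP scopes consistent later on.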

Next time I will give you some examples. Until then, share your love for VLANs and start dividing your network by risk zones!

Building Home Networks Like a Pro – The Series

As a computer science and network security enthusiast and professional, a retired semi-pro gamer, a passionate amateur photographer and, on top of all that, MOLO17’s Lead System Engineer, you can imagine my frustration when dealing with the typical home network (and, unfortunately, the typical small office network). If you can’t imagine it, let’s just say that I haven’t been able to tolerate a vanilla home network in years, and that my home network is currently built with the same components I would install in a mid-sized business, with the same security and quality standards.

Purpose of the series

This tutorial is part of a Series. With this series of articles I will give you my take on designing and building a home network. The same principles of course apply to SOHO networks, with the proper changes in proper places.

A typical home network scenario

The typical scenario in a home network is a bunch of WIFI devices competing for the always-starved resources of:

  • A single (and possibly suboptimal) connectivity
  • With limited hardware sometimes imposed by the ISP as a CPE (customer’s premises equipment),
  • With very limited QoS policing (or not present at all),
  • No real firewall or IDS/IPS on inbound connections
  • No form of threat management on outgoing connections
  • Internal communications not policed for security and QoS at all at layer 2, let alone layer 3, and consequently no VLAN segmentation of any kind
  • Nothing that can be called monitoring / diagnostics by a reputable professional

Common causes of bandwidth disruption in your home network

This means that as a gamer my ping time is simply destroyed by Windows Update, Mac AppStore and Linux APT/YUM, Netflix, Amazon Video, and such.

As a photographer, my off-site RAW photo archive backup can’t run late in the evening, because it might disrupt my Netflix viewing session or, far worse, my gaming performance, not to mention the days I come home with a huge amount of freshly taken pictures (sometimes as much as 200 GB in one session) that I have to archive on a NAS from my PC.

As a generic home user, a Prime Video session might be hampered by any of the above, even by my own computer’s silent updates running in the background. Even worse, my network won’t offer any layer of protection at all against malicious links, scammers, identity-tracking services and other threats that do not attack from the outside in. A basic degree of protection is provided by NAT, that is for sure, but most modern threats do not come through open ports on the perimeter. As a paranoid security expert, I won’t even think of opening (or dst-NATting, to use a better term) a port on the perimeter to an internal network without some kind of VLAN segmentation and an IDS/IPS in between.

Willing to share WiFi password with friends?

And what about sharing your WIFI password with friends without a properly isolated (at VLAN level) guest network? And without the ability to impose restrictions on their behavior? That is a doomsday scenario from my perspective.

Commercial IoT components: beware of that

Another big problem today is COTS (Commercial Off-The-Shelf) IoT. It is not the industrial type of IoT we deal with at MOLO17 on a daily basis. COTS IoT often poses security threats to end users, or at the very least it pierces through most home routers to “phone home” (or to the cloud, to use a more recent term) and becomes a puppet for its maker to control, with or without your consent.

Security first even in your home network

For a paranoid security guy who is also a home automation enthusiast, this poses an interesting and ever-present dilemma that I have to come to terms with every time I set up home automation devices on my network. I will explain how later in this series, but let me anticipate that it requires a degree of configurability in network hardware that your home router doesn’t provide.

Conclusion

You might already be thinking: “great ideas and principles, but the cost of a home network built with enterprise equipment is completely unjustified”. Well, you are totally right: in my case it is justified only by my passion for networking, and by a relatively big investment on the hardware side.

Plenty of cost-effective and smart suggestions for your home network

What I will propose in the articles of this series are instead relatively low-cost solutions for the normal household that can introduce the same features at a much lower price. The trade-off? You will pay with hard study and knowledge required to operate the components.

I will propose different devices and brands, used along with open-source solutions, that have a very interesting price point for the home / SOHO environment.

All of this without any endorsement from the respective brands, only my personal experiences and preferences.

Stay tuned for the next episode of Building Home Networks Like a Pro!

Quick Tech Tip: Keeping original HTML formatting in iOS Mail Signatures

This is one of the most inexplicable features on iOS: have you ever tried to paste your carefully planned and designed corporate HTML signature into iOS Mail? If you did, you probably had a very disappointing experience.

This is the result you will achieve in the end:

iOS signature in settings app
A broken signature.

The process to get this working the way you probably want it is very simple:

  1. Go to your sent mail and select and copy the signature you already used on your desktop
  2. Paste it into the Mail signature settings on iOS
  3. Surprisingly, the most important part at this point: shake the phone!
  4. After shaking the phone, tap “Undo” in the popup that appears
  5. Problem solved!

Enjoy your signature the way it was meant to be!

This is really weird behaviour, and I still cannot understand the reason behind it, although it might be connected somehow to an attempt to strip formatting that could be incompatible with some web clients. The odd thing is that this feature looks more like a hidden cheat code from an old videogame than a typical business feature. That being said, we have never encountered further problems with this technique.