Novel Approaches to the Monitoring of Computer Networks

Chapter 9. Conclusion and Future Work
There are two obvious extensions to the work that has been covered in this project. The first is to improve the mapping of networks at layer two, and the second is to examine the possibility of using neural networks to test network services. Both these possibilities have been discussed to some extent in earlier chapters.
One of the greatest problems with the network mapping approach taken in Section 4.2, and consequently with the determination of logical location as described in Section 6.1, is the need for a large amount of seed information. This data is required in order to allow the system to distinguish between network infrastructure (switches, routers, et cetera) and hosts on the network. It would be useful if the system could make this distinction automatically.
There are two possible methods for making this distinction: by examining the organisationally unique identifier (OUI) of a device's MAC address, or by using the Simple Network Management Protocol (SNMP).
The IEEE publish a list of organisationally unique identifiers, and regulate their assignment and use. Several policies exist that both define and make recommendations about how these identifiers should be used. There is a possibility that network infrastructure can be separated from network interface controllers by examining the OUI. Research into the IEEE's assignment policies will determine whether this is possible.
It is more likely, however, that vendors group together various classes of network infrastructure, in much the same way that CIDR allows network administrators to aggregate network blocks. If some method could be established to determine these aggregations, and in particular which OUIs or aggregated MAC address blocks correspond to network infrastructure, the MAC address of a particular device could be used to make a decision on whether that device corresponds to infrastructure or a host.
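As a sketch of this idea, the following extracts the OUI from a MAC address and checks it against a table of OUIs believed to belong to infrastructure vendors. The table contents here are illustrative assumptions; a real system would derive them from the IEEE's published OUI registry and from any aggregation scheme discovered by the research proposed above.

```python
# Sketch: classify a device by the OUI (first three octets) of its MAC
# address. The OUI-to-role table is hypothetical, for illustration only.

INFRASTRUCTURE_OUIS = {
    "00:00:0c",  # assumed to belong to a switch/router vendor
    "00:1b:54",  # assumed to belong to a switch/router vendor
}

def oui(mac):
    """Return the OUI portion of a colon-separated MAC address."""
    return ":".join(mac.lower().split(":")[:3])

def is_infrastructure(mac):
    """Guess whether a MAC address belongs to network infrastructure."""
    return oui(mac) in INFRASTRUCTURE_OUIS

print(is_infrastructure("00:00:0C:12:34:56"))  # True
print(is_infrastructure("08:00:27:ab:cd:ef"))  # False
```

The same lookup would work unchanged if the table held aggregated MAC address blocks rather than individual OUIs, provided the aggregation boundaries were known.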
The second approach that could be employed uses the Simple Network Management Protocol. The SNMP MIB-II defines a system.sysServices variable, which is intended to indicate the layers of the OSI reference model at which a device operates. In theory, this could be used to determine whether a device is network infrastructure or a host. In practice it is not entirely accurate: Unix-like operating systems are often deployed as routers and firewalls, operating at layers three and four, but their SNMP agents will report them as operating at layer seven, since the operating system is capable of operating at that layer.
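RFC 1213 defines sysServices as the sum of 2^(L-1) for each OSI layer L at which the device offers services, so the value can be decoded as a bitmask. The following sketch does this decoding; the classification heuristic at the end is an assumption of exactly the kind whose accuracy is questioned above, not part of the standard.

```python
# Sketch: decode a MIB-II system.sysServices value (RFC 1213) into the
# set of OSI layers it claims, then apply a naive classification rule.

def layers(sys_services):
    """Return the set of OSI layers encoded in a sysServices value."""
    return {layer for layer in range(1, 8)
            if sys_services & (1 << (layer - 1))}

def looks_like_infrastructure(sys_services):
    # Assumed heuristic: a device reporting only layers 1-3 is probably
    # a switch or router; anything claiming layer 7 may be an end host.
    claimed = layers(sys_services)
    return bool(claimed) and max(claimed) <= 3

print(layers(4))    # {3} - a pure router
print(layers(72))   # {4, 7} - a host offering application services
print(looks_like_infrastructure(4))   # True
print(looks_like_infrastructure(72))  # False
```

The values 4 and 72 are the router and host examples given in RFC 1213 itself; the Unix router problem described above is precisely that such a machine reports a value including layer seven and so falls on the wrong side of this heuristic.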
Some scope exists for investigating these SNMP agents and discovering whether useful information can be extracted from them in order to use them to determine whether a particular device should be considered network infrastructure or an end host.
Section 7.3.2 looks at the problem of testing that various network services are functioning correctly. It takes the straightforward approach of providing a set of test routines for common services, and falling back to a simple TCP connect for those services that have no dedicated test routine. The latter is often unreliable, since the ability to connect to a particular port gives no indication of whether the service listening on that port is functioning correctly.
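The simple TCP connect test described above amounts to little more than the following sketch, which makes its weakness plain: it reports success as soon as the three-way handshake completes, without exchanging any protocol traffic at all.

```python
# Sketch: a bare TCP connect test. Success means only that something
# accepted the connection, not that the service behaves correctly.

import socket

def tcp_connect_test(host, port, timeout=5.0):
    """Return True if a TCP connection to host:port can be opened."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A broken daemon that still accepts connections, or a firewall that silently proxies the port, would both pass this test, which is the motivation for the protocol-aware approach discussed next.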
Ideally, a network testing system should understand all protocols and be able to test that they are functioning correctly using these protocols. This is not realistic, however. A more sensible approach would be to create a system that is capable of learning new protocols and using this learned information to test services.
The use of neural networks to achieve this was proposed in Section 7.3.2. It remains to be seen whether neural networks (or indeed, any other form of intelligent automata) could be successfully employed in this field, and this leaves scope for future work in this area.
The fields of network monitoring and network management are large, and a great deal of scope exists for research and experimentation within them. This project selected four specific aspects of network monitoring and examined them in detail. There is room for expansion within these aspects, as the two previous sections outline. There is also, however, scope for work beyond the confines of this project.
Even within Rhodes, several other problematic areas of network management were identified. One example is the difficulty of maintaining up-to-date configurations on large numbers of layer two and layer three switch devices, particularly since, in the University environment, these devices are sourced from a variety of vendors. An ideal solution would be a single management interface that automatically generates appropriate vendor-specific configuration files for the various devices located around campus.
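The idea can be illustrated with a minimal sketch: a single abstract device description rendered into different vendors' configuration syntaxes. The vendor names and command syntaxes below are invented for illustration and do not correspond to any real product.

```python
# Sketch: one abstract switch description, rendered per vendor.
# Both renderers and their output syntaxes are hypothetical.

def render_vendor_a(desc):
    lines = [f"hostname {desc['hostname']}"]
    for vlan_id, name in desc["vlans"]:
        lines.append(f"vlan {vlan_id} name {name}")
    return "\n".join(lines)

def render_vendor_b(desc):
    lines = [f"set system name {desc['hostname']}"]
    for vlan_id, name in desc["vlans"]:
        lines.append(f"set vlan {name} id {vlan_id}")
    return "\n".join(lines)

RENDERERS = {"vendor-a": render_vendor_a, "vendor-b": render_vendor_b}

desc = {"hostname": "lab-sw-1",
        "vlans": [(10, "staff"), (20, "students")]}

for vendor, render in RENDERERS.items():
    print(f"# {vendor}")
    print(render(desc))
```

The real research problem lies not in the templating, which is trivial, but in designing an abstract device model rich enough to capture every vendor's feature set.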
The centralised, automatic configuration of devices is in itself a large area for research. Take the case of the University's firewall, for example. Students, staff, departments, and individuals place a wide variety of expectations and demands on the firewall. Less computer-literate users expect it to protect their machines, and indeed it is required to, since these users often do not adequately maintain their machines. More knowledgeable users, on the other hand, find the default firewall configuration restrictive, and often argue that it prevents them from making use of certain facilities. As a result, exceptions are made for specific machines.
Problems arise when trying to manage these exceptions. Currently they are created by hand by one of the University's systems administrators. In addition, there is no facility to expire the rules, so they often remain in place long after they are needed, creating unnecessary security risks. It would be useful if this task could be automated to some extent. For example, users could be allowed to select from a number of typical options (such as running a personal web server) via a web interface. These user-selected options would have a default lifetime, after which they would have to be renewed or they would expire and be removed. Any atypical requirements would still be referred to the administrators, but in a well-configured system these would be few and far between.
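A minimal sketch of such expiring exceptions might look as follows; the rule fields and the 90-day default lifetime are illustrative assumptions, not a description of any existing system.

```python
# Sketch: firewall exceptions with a default lifetime, renewal, and
# automatic pruning. Fields and lifetime are hypothetical.

import datetime
from dataclasses import dataclass, field

DEFAULT_LIFETIME = datetime.timedelta(days=90)

def _default_expiry():
    return datetime.datetime.now() + DEFAULT_LIFETIME

@dataclass
class FirewallException:
    host: str
    port: int
    description: str
    expires: datetime.datetime = field(default_factory=_default_expiry)

    def expired(self, now=None):
        """True if the rule's lifetime has lapsed."""
        return (now or datetime.datetime.now()) >= self.expires

    def renew(self):
        """Extend the rule by another default lifetime from now."""
        self.expires = _default_expiry()

def prune(rules, now=None):
    """Return only the rules that are still within their lifetime."""
    return [r for r in rules if not r.expired(now)]
```

Running `prune` periodically would remove stale exceptions automatically, addressing the security risk of rules that outlive their purpose.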
These two examples give some idea of the scope that exists for further work in these fields.