While there is nothing inherently wrong with using static IPs, there is also rarely an advantage that outweighs the extra steps of configuring them.
You would want to be confident that:
- you'll never want to connect the device to another network - that rules out laptops and any wireless interface (unless it's part of the infrastructure)
- you'll never need to renumber the network (for example, to make a site-to-site VPN link)
In cases where you want something to have the same IP each time (e.g. for port forwarding), DHCP reservations provide that while still letting you see and set all your IPs in one place.
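As a concrete sketch of a reservation, assuming dnsmasq as the DHCP server (the MAC address and IPs here are made up for illustration):

```conf
# dnsmasq.conf - the printer gets the same address on every lease,
# while still booting as an ordinary DHCP client
dhcp-range=192.168.1.100,192.168.1.200,12h
dhcp-host=aa:bb:cc:dd:ee:ff,192.168.1.50,office-printer
```

Other DHCP servers (ISC dhcpd, a router's web UI) express the same MAC-to-IP pinning in their own syntax.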
Where you mix static IPs with a DHCP server, you have to keep notes of the static IPs separately to avoid address conflicts. I've seen a few workplaces where no-one was tracking them, and they would have seemingly random connection problems depending just on who was in the office that day.
For example, I manage multiple networks; in the office, only the routers and servers (DHCP / DNS / file / print) have manual static IPs, while the rest are DHCP, with reservations for switches, access points and printers.
It's worth remembering that DHCP is about more than just giving each thing an IP address. It can also supply parameters such as the DNS servers to use, the default MTU for the network and, less commonly on purely home networks, WINS for computer name lookups and NTP for time syncing.
Say you want to change all devices from the ISP's DNS to Google or OpenDNS, or to use the router's DNS forwarder: you can change it in one place and it will be picked up on lease renewal.
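Again in dnsmasq terms (a sketch; the server addresses are illustrative), those parameters are each one line, pushed to every client as leases renew:

```conf
# dnsmasq.conf - parameters handed out alongside the lease
dhcp-option=option:dns-server,8.8.8.8,8.8.4.4   # e.g. switch everyone to Google DNS
dhcp-option=option:ntp-server,192.168.1.1       # NTP time source
dhcp-option=option:netbios-ns,192.168.1.5       # WINS, if anything still needs it
dhcp-option=option:mtu,1400                     # non-default MTU for this segment
```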
Nevertheless I can see why people might use them for a very small network (say up to 5 devices).
Except when you need to be able to connect to a device without worrying whether DHCP is working and has handed it the correct IP address.
Set a static IP, plug the thing into a switch, and you will always know what its IP is. Use DHCP reservations and plug the device into a switch, and you only know its IP if the DHCP server is working as expected, and every network segment between the device and the DHCP server is working as expected. This makes the DHCP server a single point of failure (and trust me, you do not want more than one DHCP server on a network), and that can bring a network down.
Anything on a network that acts as a server of any description (web servers, DHCP servers, print servers, file servers, LDAP servers, AD servers, etc.) should always have a static IP that is independent of any other networking hardware.
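On such a server, that means the address is pinned in the OS itself, with no dependence on DHCP at all. A Debian-style /etc/network/interfaces sketch (interface name and addresses are illustrative):

```conf
# /etc/network/interfaces - static address, works even if DHCP is down
auto eth0
iface eth0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
```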
The separation of static and dynamic IPs should be a simple matter of setting a DHCP range and assigning the static IPs outside this range.
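That split is one line on most DHCP servers; in dnsmasq terms (pick whatever boundary suits your subnet):

```conf
# Dynamic pool is .100-.199; everything below .100 is reserved for
# manually configured static hosts and will never be leased out.
dhcp-range=192.168.1.100,192.168.1.199,255.255.255.0,12h
```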
The changing of network parameters such as DNS should be a sufficiently rare event that it does not need the benefit of being changed all in one place. MTU within the network should be handled by path MTU discovery (ICMP), with the MTU for any external connections set on the routers that handle them. That way the default MTU of 1500 is fine: if any connection to the outside world needs a lower MTU, it is discovered dynamically when talking to the outside world.
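On a Linux router that idea looks roughly like this (the `ppp0` interface name and 1492 MTU are assumptions for a PPPoE uplink; the MSS clamp is a common companion in case upstream filters the ICMP "fragmentation needed" messages PMTUD relies on):

```shell
# Lower the MTU only on the external link; the LAN keeps the default 1500
ip link set dev ppp0 mtu 1492

# Clamp TCP MSS to the path MTU so hosts behind the router adapt
# automatically, even when ICMP is filtered somewhere upstream
iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN \
         -j TCPMSS --clamp-mss-to-pmtu
```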
As for client devices (laptops etc.), these do not need a static address (if a device needs a static address, it should be tied to the network - how many file and print servers do you really need to take to another office, anyway?), and so they can use DHCP as far as it is available.
If you need to renumber your network for a site-to-site VPN, and your network is large enough that this creates a headache, you have clearly planned your network poorly. You could instead use a static NAT mapping to transparently present your network to the off-site network, and vice versa for the off-site network to the local network. No renumbering required. This can be handled by a Cisco PIX 506E, which, although not consumer hardware, is hardly the most expensive router in the world.
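If you don't have a PIX to hand, the same 1:1 subnet-mapping trick can be sketched on a Linux router with the iptables NETMAP target (subnet choices and the `tun0` VPN interface name are illustrative):

```shell
# Present the local 192.168.1.0/24 to the VPN as 10.99.1.0/24, and
# translate traffic addressed to 10.99.1.x back to the real 192.168.1.x
iptables -t nat -A PREROUTING  -d 10.99.1.0/24               -j NETMAP --to 192.168.1.0/24
iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o tun0     -j NETMAP --to 10.99.1.0/24
```

Each side maps its own conflicting range to a distinct alias range, so neither network has to renumber a single host.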
I have been involved in the management of a very large network (a sufficient number of devices to exhaust the available private IP ranges) that was almost entirely static IP assignments out of necessity. No DHCP server would have been able to manage it, and once you start putting a DHCP server into every network segment, you get into silly money and lose the advantage of managing IPs in one place.
It is about how well the network is managed. If it is managed well it hardly matters how it is managed, it will work. Similarly if it is managed badly then it hardly matters how it is managed, it will break, a lot.
Also, I use different DNS servers depending on what I need. Most of my network uses OpenDNS for speed and reliability; some of it uses my ISP's DNS servers, as it needs proper NXDOMAIN responses and the free OpenDNS service does not return them. I have yet to see a DHCP server that would handle that situation without being more hassle to set up than just managing it manually. It's really not a huge overhead.