WHS and a Macintosh
I was making a copy of some WHS server folders so that I could take a small section of the filesystem with me on holiday (for reference). However, halfway through the copy I got an error dialog informing me that the copy could not access the source disk. WTF?
Investigating, I found that some of the files and directories had been created on the WHS server using Samba from a Macintosh. These had trailing spaces in their file names. Win32 will not allow this via the UI; Samba, however, subverts this API.
See cause 6:
The solution was to SMB mount this filesystem onto any old Linux box (or use the original Mac) and rename the offending directories and files.
mkdir /cheese
smbmount //cheese/Users /cheese
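A sketch of the rename step from the Linux side (assuming the share is mounted at /cheese as above; the find/sed pipeline here is my illustration, not the exact commands used at the time):

```shell
cd /cheese
# Rename every file and directory whose name ends in one or more spaces.
# -depth walks children before parents so nested renames don't break paths.
find . -depth -name "* " | while IFS= read -r f; do
  t="$(dirname "$f")/$(basename "$f" | sed 's/ *$//')"
  echo "renaming '$f' -> '$t'"
  mv "$f" "$t"
done
```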
Unmanaged code invocation
When making an unmanaged call, if the imported DLL does not exist your program will crash. It's much better to first check that the DLL is there. Here is a snippet in C#.
public const string AVR309DLL = "AVR309.DLL";

[DllImport(AVR309DLL)]
public static extern int DoGetInfraCode(ref byte[] TimeCodeDiagram, ref int DiagramLength);
How do we make sure the AVR309.DLL library exists?
[DllImport("kernel32.dll", SetLastError = true)]
private static extern IntPtr LoadLibrary(string fileName);

[DllImport("kernel32.dll", SetLastError = true)]
private static extern bool FreeLibrary(IntPtr hModule);

public static bool isDLLAvailable()
{
    // LoadLibrary returns IntPtr.Zero (NULL) on failure under Win32;
    // the old "> 32" check dates from 16-bit Windows and is not reliable here.
    IntPtr hModule = LoadLibrary(AVR309DLL);
    bool result = hModule != IntPtr.Zero;
    if (result)
        FreeLibrary(hModule);
    return result;
}
Scala distributed ping pong
This small Scala example demonstrates the simplicity with which distributed programming is possible using the actor model. We set up an ECHOSERVER that simply responds with a PONG to any PING request it receives, and we run this on an arbitrary server.
/*
 * EchoServer.scala
 */
package echoapp

import scala.actors.Actor._
import scala.actors.remote.{Locator, Node, RemoteActor}
import scala.actors.remote.RemoteActor.{alive, register}

class Server(me: Locator) {
  RemoteActor.classLoader = getClass().getClassLoader()
  actor {
    alive(me.node.port)
    register(me.name, self)
    loop {
      react {
        case 'PING => reply("PONG") // equiv: sender ! "PONG"
        case msg   => println(msg)
      }
    }
  }
}

object EchoServer {
  def main(args: Array[String]) {
    val me = new Locator(Node("127.0.0.1", 9010), 'echoServer)
    new Server(me)
    println("Echo server started")
  }
}
For the sake of brevity the IP address of where the ECHOSERVER is running has been hardcoded. The client sends a PING message to the remote echo server and gathers the response(s).
/*
 * EchoClient.scala
 */
package echoapp

import scala.actors.Actor._
import scala.actors.remote.{Locator, Node, RemoteActor}
import scala.actors.remote.RemoteActor._

class Client(servloc: Locator) {
  RemoteActor.classLoader = getClass().getClassLoader()
  val server = select(servloc.node, servloc.name)
  actor {
    // The response is sent back to the actor that did the send.
    // If we moved this to main() then that is where the response would go,
    // and as we have no react/receive there it would be lost.
    server ! 'PING
    loop {
      react {
        case msg => println(msg)
      }
    }
  }
}

object EchoClient {
  def main(args: Array[String]) {
    val echoServer = Locator(Node("192.168.1.13", 9010), 'echoServer)
    new Client(echoServer)
  }
}
Scala distributed Mandelbrot
My first foray into distributed programming with Scala, and it worked out pretty well. The whole system automatically load-balances, and it recovers elegantly if a worker is killed halfway through a render, or indeed if another joins whilst a render is in progress.
It has no bells and whistles with regard to zooming and panning, so as to keep the code as simple as possible.
Running a worker is as easy as this. The two arguments given to the worker are the IP address of the GUI and the LOCAL port which the worker will use for listening. If you want to run many workers on the same server, give each a unique port number.
# export CLASSPATH=.:/usr/local/scala/lib/scala-library.jar
# cd MandelbrotDistributed/build/classes
# java mandelbrotdistributed/MandelWorker 192.168.1.137 4000

Another worker on the same server:

# java mandelbrotdistributed/MandelWorker 192.168.1.137 4001
This is the entire code:
/*
 * Raster.scala
 */
package mandelbrotdistributed

@serializable
class Raster(xlen: Int) {
  var line = new Array[Int](xlen)
  def width = line.length
}
/*
 * Complex.scala
 */
package mandelbrotdistributed

class Complex(val a: Double, val b: Double) {
  def abs() = Math.sqrt(a * a + b * b)

  // (a,b)(c,d) = (ac - bd, bc + ad)
  def *(that: Complex) = new Complex(a * that.a - b * that.b, b * that.a + a * that.b)
  def +(that: Complex) = new Complex(a + that.a, b + that.b)

  override def toString = a + " + " + b + "i"
}
/*
 * MandelWorker.scala
 */
package mandelbrotdistributed

import scala.actors.Actor._
import scala.actors.remote._
import scala.actors.remote.RemoteActor.select
import java.lang.Math
import java.net.InetAddress

case class Register(me: Locator)
case class RenderedLine(m: Locator, row: Int, raster: Raster)
case class RenderAction(row: Int, width: Int, height: Int, level: Int)
case object Tick

class MandelActor(me: Locator, clientLoc: Locator) {
  RemoteActor.classLoader = getClass().getClassLoader()
  actor {
    println("Worker Ready")
    RemoteActor.alive(me.node.port)
    RemoteActor.register(me.name, self)
    loop {
      react {
        case RenderAction(row: Int, width: Int, height: Int, level: Int) =>
          println("Raster row " + row)
          sender ! RenderedLine(me, row, generate(width, height, row, level))
        case msg =>
          println("Unhandled message: " + msg)
      }
    }
  }

  // Register with the GUI every 5 secs (heartbeat)
  val client = select(clientLoc.node, clientLoc.name)
  ActorPing.scheduleAtFixedRate(client, Register(me), 0L, 5000L)

  def iterate(z: Complex, c: Complex, level: Int, i: Int): (Complex, Int) =
    if (z.abs > 2 || i > level) (z, i) else iterate(z * z + c, c, level, i + 1)

  def generate(width: Int, height: Int, row: Int, level: Int): Raster = {
    val raster = new Raster(width)
    val y = -1.5 + row * 3.0 / height
    for { x0 <- 0 until width } {
      val x = -2.0 + x0 * 3.0 / width
      val (z, i) = iterate(new Complex(0, 0), new Complex(x, y), level, 0)
      raster.line(x0) = if (z.abs < 2) 0 else i
    }
    raster
  }
}

object MandelWorker {
  def main(args: Array[String]): Unit = {
    // arg0: remote IP of where the MandelGUI program is running
    // arg1: a local port for the mandel worker
    val host = if (args.length >= 1) args(0) else "127.0.0.1"
    val port = if (args.length >= 2) args(1).toInt else 9010
    val gui = Locator(Node(host, 9999), 'MandelGUI)
    val me = new Locator(Node(InetAddress.getLocalHost.getHostAddress, port), 'MandelWorker)
    new MandelActor(me, gui)
  }
}
/*
 * MandelGui.scala
 */
package mandelbrotdistributed

import scala.swing._
import scala.swing.event.ButtonClicked
import scala.actors.{Actor, OutputChannel}
import scala.actors.Actor._
import scala.actors.remote.{RemoteActor, Locator, Node}
import scala.actors.remote.RemoteActor._
import scala.collection.mutable.Stack
import java.awt.image.BufferedImage
import java.awt.{Graphics, Graphics2D}
import java.awt.Color

object MandelGui extends SimpleGUIApplication {
  val img = new BufferedImage(480, 480, BufferedImage.TYPE_INT_RGB)
  val mandel = Mandel

  val drawing = new Panel {
    background = Color.black
    preferredSize = (img.getWidth, img.getHeight)
    override def paintComponent(g: Graphics): Unit = {
      g.asInstanceOf[Graphics2D].drawImage(img, null, 0, 0)
    }
  }

  def clearDrawing() {
    var g = img.getGraphics
    g.setColor(Color.BLACK)
    g.fillRect(0, 0, img.getWidth, img.getHeight)
  }

  def top = new MainFrame {
    title = "Mandelbrot"
    contents = new BorderPanel {
      val control = new BoxPanel(Orientation.Horizontal) {
        val start = new Button { text = "Start" }
        val stop = new Button { text = "Stop" }
        val continue = new Button { text = "Continue" }
        contents.append(start, stop, continue)
        listenTo(start, stop, continue)
        reactions += {
          case ButtonClicked(`start`) =>
            clearDrawing
            Mandel.startup
          case ButtonClicked(`stop`) =>
            Mandel.shutdown
          case ButtonClicked(`continue`) =>
            Mandel.process
        }
      }
      drawing
      import BorderPanel.Position._
      layout(control) = North
      layout(drawing) = Center
    }
  }

  object WorkerMgmt {
    private var allWorkers: List[Worker] = List()
    val defaultTTL = 6 // sweeps a worker can survive without a register

    def foreach(op: Worker => Unit) = allWorkers.foreach(op)

    def findWorkerForRow(row: Int): Worker = {
      allWorkers.filter(w => w.row == row)(0)
    }

    def find(m: Locator): Worker = {
      if (allWorkers.isEmpty) null
      else {
        val list = allWorkers.filter(w => w.loc == m)
        if (list.isEmpty) null else list(0)
      }
    }

    def register(m: Locator): Worker = {
      var worker = find(m)
      if (worker == null) {
        worker = new Worker(m, defaultTTL)
        allWorkers = worker :: allWorkers
      }
      worker.keepAlive
    }

    def sweep() = {
      allWorkers.foreach(_.decTTL)
      // partition (not span): live workers must be kept wherever they sit in the list
      val (ok, expired) = allWorkers partition (_.ttl >= 0)
      allWorkers = ok
      expired
    }
  }

  class Worker(val loc: Locator, val defaultTTL: Int) {
    var row: Int = 0
    var ttl: Int = 0
    val actor = select(loc.node, loc.name)
    val iterationDepth = 2048

    def decTTL = ttl -= 1

    def keepAlive() = {
      ttl = defaultTTL
      this
    }

    def render(row: Int) {
      this.row = row
      actor ! RenderAction(row, img.getWidth, img.getHeight, iterationDepth)
    }

    override def toString = loc.toString
  }

  object Mandel {
    object State extends Enumeration {
      val Running, Stopped = Value
    }
    private var state = State.Stopped
    private var workQueue: Stack[Int] = new Stack()

    val draw = actor {
      Actor.loop {
        react {
          case (row: Int, raster: Raster) =>
            for (x <- 0 until raster.width) {
              val shade = raster.line(x) % 256
              val rgb = new Color(shade, shade, shade).getRGB
              img.setRGB(x, row, rgb)
            }
            drawing.repaint
        }
      }
    }

    val a = actor {
      RemoteActor.alive(9999) // Port
      RemoteActor.register('MandelGUI, Actor.self)
      // Sweep non-responsive workers every 2 secs
      ActorPing.scheduleAtFixedRate(Actor.self, Tick, 0L, 2000L)
      Actor.loop {
        react {
          case "StartWork" =>
            WorkerMgmt.foreach(farmWork)
          case RenderedLine(m: Locator, row: Int, raster: Raster) =>
            draw ! (row, raster) // Get it on the screen
            if (state == State.Running) farmWork(WorkerMgmt.find(m))
          case Register(m: Locator) =>
            println("Register " + m)
            // Register and assign it work; immediate load balance
            farmWork(WorkerMgmt.register(m))
          case Tick =>
            for (w <- WorkerMgmt.sweep) {
              println("Unregister " + w)
              workQueue.push(w.row) // push their row back for reassignment
            }
          case msg =>
            println("Unhandled message: " + msg)
        }
      }
    }

    def farmWork(worker: Worker) {
      if (workQueue.isEmpty) shutdown
      else if (worker != null) worker.render(workQueue.pop)
    }

    def shutdown() {
      state = State.Stopped
    }

    def startup() {
      if (state == State.Stopped) {
        workQueue.clear
        for (row <- 0 to img.getHeight - 1) workQueue.push(row)
        process
      }
    }

    def process() {
      state = State.Running
      a ! "StartWork"
    }
  }
}
/*
 * ActorPing.scala
 */
package mandelbrotdistributed

import java.util.concurrent._
import scala.actors._

/**
 * Pings an actor every X seconds.
 *
 * Borrowed from the Scala TIM sample, which in turn borrows from the
 * ActorPing class in the lift repository (http://liftweb.net).
 *
 * (c) 2007 WorldWide Conferencing, LLC
 * Distributed under an Apache License
 * http://www.apache.org/licenses/LICENSE-2.0
 */
object ActorPing {
  def scheduleAtFixedRate(to: AbstractActor, msg: Any, initialDelay: Long, period: Long): ScheduledFuture[T] forSome { type T } = {
    val cmd = new Runnable {
      def run {
        try { to ! msg }
        catch { case t: Throwable => t.printStackTrace }
      }
    }
    service.scheduleAtFixedRate(cmd, initialDelay, period, TimeUnit.MILLISECONDS)
  }

  private val service = Executors.newSingleThreadScheduledExecutor(threadFactory)

  private object threadFactory extends ThreadFactory {
    val threadFactory = Executors.defaultThreadFactory()
    def newThread(r: Runnable): Thread = {
      val d: Thread = threadFactory.newThread(r)
      d setName "ActorPing"
      d setDaemon true
      d
    }
  }
}
Entire work as NetBeans project : mandelbrotdistributed.zip
Remapping a network 1:1 via OpenVPN
I had a situation whereby a friend and I wanted to share our networks with each other via a VPN. However, we had both used the same IP address space for our internal LANs (192.168.1.0/24), and bridging wasn't possible as we had used the same numbers in this range.
To get around this problem, the VPN was set up so that it remaps the entire address space of each of our networks into another range, leaving us with no overlapping IP space.
In this case the network running the openvpn server will be remapped from 192.168.1.x to 192.168.10.x.
The server is going to perform the NAT so that we can also use a Windows OpenVPN client without issue. This is also useful for a road warrior, as lots of cyber-cafes and companies also use this 192.168.1.x range, so it allows easy access to a home network without IP overlap. Note: in the example outputs, however, I use a Linux client.
Inspired by http://www.nimlabs.org/~nim/dirtynat.html, although there had to be an easier way.
Configuration file for the OpenVPN server. Although I have used tcp:443, I recommend udp:1194 if you can use it.
openvpn.conf
#Begin server.conf
#port 1194
#proto udp
port 443
proto tcp
dev tun
ca keys/ca.crt
cert keys/server.crt
key keys/server.key  # This file should be kept secret
dh keys/dh.pem
#Make sure this is your tunnel address pool
server 10.0.1.0 255.255.255.0
ifconfig-pool-persist ipp.txt
#This is the route to push to the client, add more if necessary
push "route 192.168.10.0 255.255.255.0"
keepalive 10 120
cipher BF-CBC  #Blowfish encryption
comp-lzo
user nobody
group nobody
persist-key
persist-tun
status openvpn-status.log
verb 6
mute 20
up ./openvpn.up
Now the magic, where we NETMAP our network through the OpenVPN tunnel from 192.168.1.0/24 to 192.168.10.0/24. We must also MASQUERADE all packets out of eth0, otherwise they will be presented to this interface with the IP of the tun0 NIC (10.0.1.x).
$1 will be the name of the virtual network interface passed in by the OpenVPN server; in our example this is tun0.
openvpn.up
#!/bin/sh
echo 1 > /proc/sys/net/ipv4/ip_forward
WLAN=$1

# Clear all chains
iptables -F
iptables -F -t nat

# Accept from the OPENVPN tunnel
iptables -I INPUT -i $WLAN -j ACCEPT
iptables -I OUTPUT -o $WLAN -j ACCEPT

# Remap our network 1:1 to a different IP space. Update openvpn.conf file too
#   push "route 192.168.10.0 255.255.255.0"
iptables -t nat -A PREROUTING -i $WLAN -d 192.168.10.0/24 -j NETMAP --to 192.168.1.0/24
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
If you have any other iptables configuration on your box you might want to remove the two lines that clear the tables when OpenVPN starts up.
Or perhaps use “down ./openvpn.down” to only remove the openvpn rules added by this script.
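A possible openvpn.down along those lines (a sketch, not part of the original setup; it deletes exactly the rules that openvpn.up added rather than flushing every chain):

```shell
#!/bin/sh
WLAN=$1
# Remove only the rules added by openvpn.up, leaving other iptables state alone
iptables -D INPUT -i $WLAN -j ACCEPT
iptables -D OUTPUT -o $WLAN -j ACCEPT
iptables -t nat -D PREROUTING -i $WLAN -d 192.168.10.0/24 -j NETMAP --to 192.168.1.0/24
iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE
```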
Server side
On the OpenVPN server side your routing table will look like this. The routes via the tun0 interface were automatically added by OpenVPN.
[root@bingo openvpn]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
10.0.1.2        0.0.0.0         255.255.255.255 UH    0      0        0 tun0
10.0.1.0        10.0.1.2        255.255.255.0   UG    0      0        0 tun0
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0
0.0.0.0         192.168.1.1     0.0.0.0         UG    0      0        0 eth0
[root@bingo openvpn]#
A quick listing of the interfaces when the tunnel is up shows us what IP address has been used for the VPN tunnel.
[root@bingo openvpn]# ifconfig -a
eth0      Link encap:Ethernet  HWaddr 00:B0:C2:02:1E:1E
          inet addr:192.168.1.13  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::2b0:c2ff:fe02:1e1e/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:55623840 errors:0 dropped:137 overruns:0 frame:0
          TX packets:57635730 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:4247993585 (3.9 GiB)  TX bytes:1775265718 (1.6 GiB)
          Interrupt:10 Base address:0x2000

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:476213 errors:0 dropped:0 overruns:0 frame:0
          TX packets:476213 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:43755155 (41.7 MiB)  TX bytes:43755155 (41.7 MiB)

tun0      Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
          inet addr:10.0.1.1  P-t-P:10.0.1.2  Mask:255.255.255.255
          UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1500  Metric:1
          RX packets:487388 errors:0 dropped:0 overruns:0 frame:0
          TX packets:319625 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:100
          RX bytes:184935466 (176.3 MiB)  TX bytes:19783112 (18.8 MiB)
Client Side
The client side configuration is simple enough. I like to have a single PKCS12 key on the client.
Combine client keys into a pkcs12 file
openssl pkcs12 -export -in client.crt -inkey client.key -certfile ca.crt -out dbzoo-cert.p12
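Before copying dbzoo-cert.p12 to the client, it's worth sanity-checking the bundle (a hedged extra step, not part of the original procedure; openssl will prompt for the export password you chose above):

```shell
# List the certificates and key in the PKCS12 bundle without extracting them
openssl pkcs12 -info -in dbzoo-cert.p12 -noout
```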
openvpn.conf
########################################
# OpenVPN Client Configuration
client
dev tun
proto tcp
remote remote.site.com 443
nobind
persist-tun
persist-key
keepalive 60 360
pkcs12 dbzoo-cert.p12
comp-lzo
verb 4
mute 5
Once the tunnel has been established your client-side routing table will look like this. The 192.168.10.0 network route is a result of the push from the OpenVPN server.
[root@vpnout root]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
10.0.1.5        0.0.0.0         255.255.255.255 UH    0      0        0 tun0
10.0.1.1        10.0.1.5        255.255.255.255 UGH   0      0        0 tun0
192.168.10.0    10.0.1.5        255.255.255.0   UG    0      0        0 tun0
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0
0.0.0.0         192.168.1.1     0.0.0.0         UG    0      0        0 eth0
Client side network interface configuration
# ifconfig -a
eth0      Link encap:Ethernet  HWaddr 00:0C:29:FE:6C:8D
          inet addr:192.168.1.2  Bcast:192.168.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:899708 errors:0 dropped:0 overruns:0 frame:0
          TX packets:870730 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:243943585 (232.6 Mb)  TX bytes:254323062 (242.5 Mb)
          Interrupt:11 Base address:0x1424

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:62 errors:0 dropped:0 overruns:0 frame:0
          TX packets:62 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:4564 (4.4 Kb)  TX bytes:4564 (4.4 Kb)

tun0      Link encap:Point-to-Point Protocol
          inet addr:10.0.1.6  P-t-P:10.0.1.5  Mask:255.255.255.255
          UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1500  Metric:1
          RX packets:278295 errors:0 dropped:0 overruns:0 frame:0
          TX packets:423061 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:100
          RX bytes:17739929 (16.9 Mb)  TX bytes:176600254 (168.4 Mb)
Traceroute
Performing a traceroute from the CLIENT side out to a box on the SERVER side: the box has an IP of 192.168.1.20, but via the NAT/VPN tunnel it is reached as 192.168.10.20.
[root@vpnout root]# traceroute 192.168.10.20
traceroute to 192.168.10.20 (192.168.10.20), 30 hops max, 38 byte packets
 1  10.0.1.1 (10.0.1.1)  56.439 ms  69.490 ms  59.346 ms
 2  192.168.10.20 (192.168.10.20)  31.985 ms  29.301 ms  63.890 ms
[root@vpnout root]#
We can see that 10.0.1.1 is the IP address of the point-to-point VPN server endpoint (tun0); when the VPN server routes the traffic out of eth0, the 192.168.10.x destination is NETMAPped back into the real 192.168.1.x address space.