Friday, December 28, 2012

OSSEC error 'remote_commands'...

While upgrading one of the agents from OSSEC version 2.6 to 2.7, I was testing the agent configuration and I got the following error message:
ossec-logcollector(2301): ERROR: Definition not found for: 'logcollector.remote_commands'.
It didn't appear before and, more importantly, I didn't have the slightest idea what the problem was! So, I decided to dig a bit further to find out. BTW, I removed the timestamp column from the log entry as it is not important here.

What I found is that this is a new configuration variable introduced in version 2.7 of OSSEC. It is expected to be defined in the internal_options.conf file. The reason I got the error is that my internal_options.conf was left over from 2.6.
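To get rid of the error it is enough to add the missing definition to internal_options.conf; the entry shipped with 2.7 looks something like this, but double-check the exact default on your installation:
logcollector.remote_commands=0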

This variable is a boolean flag (accepted values are 0 and 1) and its purpose is to allow the administrator to control whether the agent will accept commands from the manager or not. This value is used when the configuration is loaded, here. If it is set to 0 then any command configurations will be ignored, e.g. ones like the following:
<command>
    <name>host-deny</name>
    <executable>host-deny.sh</executable>
    <expect>srcip</expect>
    <timeout_allowed>yes</timeout_allowed>
</command>
For each ignored configuration entry there will be an appropriate notification message in the log file, something like the following:
Remote commands are not accepted from the manager. Ignoring it on the agent.conf

Thursday, December 27, 2012

UDP Lite...

Many people know about TCP and UDP, at least those who work in the field of networking or are taking a computer networks course. But the truth is that there are others too, e.g. SCTP, DCCP and UDP Lite, and all of them are actually implemented in the Linux kernel. What I'm going to do is describe each of them in the following few posts and give examples of their use. In this post I'm going to talk about UDP Lite. I'll assume that you know UDP and also that you know how to use the socket API to write a UDP application.

UDP Lite is specified in RFC 3828: The Lightweight User Datagram Protocol (UDP-Lite). The basic motivation for introducing a new variant of UDP is that certain applications (primarily multimedia ones) want to receive packets even if they are damaged, because the codecs they use can recover from and mask errors. UDP itself has a checksum field that covers the whole packet, and if there is an error the packet is silently dropped. It should be noted that this checksum is actually quite weak and doesn't catch many errors, but it is nevertheless problematic for such applications. So, UDP Lite changes the standard UDP behavior in that it allows only a part of the packet to be covered by the checksum. And because it is now a different protocol, a new protocol ID is assigned to it, namely 136.

So, how do you use UDP Lite in your applications? It is actually very easy. First, when creating the socket you have to specify that you want UDP Lite, and not (the default) UDP:
s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDPLITE);
Next, you need to define what part of the packet will be protected by the checksum. This is achieved with socket options, i.e. the setsockopt(2) system call. Here is a function that sets how many octets of the packet have to be protected:
void setoption(int sockfd, int option, int value)
{
    if (setsockopt(sockfd, IPPROTO_UDPLITE, option,
            (void *)&value, sizeof(value)) == -1) {
        perror("setsockopt");
        exit(1);
    }
}
It receives the socket descriptor (sockfd) created with the socket function, the option that should be set (option) and the option's value (value). There are two options, UDPLITE_SEND_CSCOV and UDPLITE_RECV_CSCOV. UDPLITE_SEND_CSCOV sets the number of protected octets for outgoing packets, while UDPLITE_RECV_CSCOV sets the minimum number of octets that have to be protected in inbound packets for them to be passed to the application.
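For example, to request that the checksum cover only the first 20 octets of each outgoing datagram (an arbitrary value used here purely for illustration; note that per RFC 3828 the coverage includes the 8-octet UDP-Lite header itself), you would call:
setoption(s, UDPLITE_SEND_CSCOV, 20);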

You can also obtain values using the following function:
int getoption(int sockfd, int option)
{
    int cov;
    socklen_t len = sizeof(int);

    if (getsockopt(sockfd, IPPROTO_UDPLITE, option,
            (void *)&cov, &len) == -1) {
        perror("getsockopt");
        exit(1);
    }
    return cov;
}
This function accepts the socket (sockfd) and the option it should retrieve (i.e. UDPLITE_SEND_CSCOV or UDPLITE_RECV_CSCOV) and returns the option's value. Note that the two constants, UDPLITE_SEND_CSCOV and UDPLITE_RECV_CSCOV, may need to be explicitly defined in your source because it is possible that glibc doesn't (yet) define them.
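If they are missing on your system, a guarded definition like the following can be used; the values below are the ones from the Linux kernel headers (linux/udp.h), but verify them on your system:
#ifndef IPPROTO_UDPLITE
#define IPPROTO_UDPLITE    136   /* UDP-Lite protocol number */
#endif
#ifndef UDPLITE_SEND_CSCOV
#define UDPLITE_SEND_CSCOV 10    /* sender checksum coverage */
#endif
#ifndef UDPLITE_RECV_CSCOV
#define UDPLITE_RECV_CSCOV 11    /* minimum receiver coverage */
#endif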

I wrote fully functional client and server applications that you can download and test. To compile them you don't need any special options, so that should be easy. The only change you'll probably need to make is the IP address the client sends packets to. This is the constant SERVER_IPADDR, which contains the server's IP address hex encoded. For example, the IP address 127.0.0.1 is 0x7f000001.

Finally, I have to say that UDP Lite will probably have problems traversing NATs. For example, I tried it on my ADSL connection and it didn't pass through the NAT. What I did is that I just started the client with the IP address of one of my servers on the Internet, and on that server I sniffed packets. Nothing arrived at the server. This will probably be a big problem for the adoption of UDP Lite, but time will tell...

You can read more about this subject on the Wikipedia page and in the Linux manual page udplite(7).

RSA parameters in PEM file...

If you've ever read anything about how RSA works, you most probably know that RSA is based on arithmetic modulo some large integer. First, let N be a product of two very large primes p and q. N is an n-bit number, these days minimally 1024 bits, and thus p and q are each half that size, so that after multiplication we get the required number of bits. Next, there are two numbers e and d that satisfy the relation e*d = 1 (mod phi(N)), where phi(N) = (p-1)(q-1). The pair (e, N) is the encryption key - or what's frequently referred to as the public key - while (d, N) is the decryption key, or the private key. The question now is: given a PEM file with a private key, how do we find out those parameters for a specific public/private key pair? In this post I'll show three ways to do it. The first two use tools that are a standard part of the OpenSSL/GnuTLS libraries. The third one uses Python.

OpenSSL/GnuTLS

Those two, especially OpenSSL, are very popular and complex cryptographic libraries. Both of them come with very capable command line tools. In the case of OpenSSL there is a single binary, openssl, that performs different functions depending on the subcommand it is given. In this post I'll use the subcommands rsa and genrsa, which manipulate and generate RSA keys, respectively. GnuTLS, on the other hand, has several tools, of which we'll use certtool.

Now, the first thing is to create an RSA public/private key pair. You can do this using the following commands:
openssl genrsa -out key.pem 512
or, if you use GnuTLS, then:
certtool --generate-privkey --bits 512 --outfile key.pem
In both cases the generated RSA modulus (N) will have 512 bits, and the keys will be written into the output file key.pem. You should note a few things here:
  1. The output file contains both private and public keys!
  2. We didn't encrypt the output file, which is recommended (see the example right after this list).
  3. 512 bits is way too insecure these days. Use at least 1024 bits.
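If you do want the private key encrypted, openssl can protect it with a passphrase already at generation time; for example (using 3DES here, other ciphers such as -aes128 are also available, and a 2048-bit modulus in line with point 3):
openssl genrsa -des3 -out key.pem 2048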
OK, now we can request information about these RSA keys. Again, you can do that using openssl in the following way:
openssl rsa -in key.pem -noout -text
or, using GnuTLS:
certtool -k --infile key.pem
In both cases you'll receive the following output (GnuTLS is a bit more verbose):
Private-Key: (512 bit)
modulus:
    00:ba:b6:78:3b:1c:15:f1:d9:e3:48:16:5e:e7:8e:
    fd:a0:9d:2f:ee:1b:b8:9b:3d:d3:ea:f4:ad:fb:1b:
    6e:ef:b2:b5:cd:ee:38:e9:f8:6d:64:c9:ea:95:ae:
    87:13:5a:23:8b:2f:0b:e8:bb:c6:f8:c6:c4:ee:64:
    3c:d4:97:bd:a3
publicExponent: 65537 (0x10001)
privateExponent:
    39:6b:07:ca:55:b6:c1:eb:59:a3:bf:8d:6b:f4:63:
    36:d3:5f:fb:ff:76:63:f7:3d:86:51:bc:77:2e:56:
    8d:4b:87:73:e0:53:bd:17:e8:4a:e8:df:f5:86:14:
    65:60:f2:4f:03:02:3b:e9:23:c6:d3:ce:b3:1d:e9:
    13:1a:0f:b1
prime1:
    00:d6:b1:f9:53:8c:56:96:79:c0:bd:68:6c:b9:07:
    e7:9c:70:de:f5:61:ed:bb:51:12:1d:24:37:0f:cc:
    bf:8a:95
prime2:
    00:de:a2:52:be:a1:a4:eb:d7:48:24:95:c5:2c:05:
    bd:5f:7f:74:d5:12:bd:7c:5f:f1:8e:45:a2:50:26:
    ec:d1:57
exponent1:
    66:61:30:58:1b:10:1f:69:a7:f3:aa:9c:4e:0f:ea:
    ee:bb:14:57:47:7f:aa:57:9a:9f:b2:e9:5e:eb:70:
    5b:91
exponent2:
    22:2d:8f:40:5e:b6:5f:d2:5b:eb:e9:e6:2c:1c:f1:
    76:90:ad:91:ec:5f:94:91:72:16:e2:4f:c9:b8:40:
    10:df
coefficient:
    22:9c:f3:1f:85:68:a3:36:ab:07:87:ed:a4:c0:e5:
    ef:13:a8:28:02:55:35:c1:76:96:86:97:58:08:90:
    6e:70
In this output we see the parameters N (modulus), e (publicExponent), d (privateExponent), p (prime1), and q (prime2). Also written there are the values d mod (p-1) (exponent1), d mod (q-1) (exponent2) and q^-1 mod p, i.e. the inverse of q modulo p (coefficient).

The interesting thing to note is the value of publicExponent: it is 65537. Namely, the size of the public exponent isn't security critical, and having such a low value makes it relatively easy to encrypt messages (raising to this exponent requires only 17 multiplications, since 65537 = 2^16 + 1). This is a very frequent value for the encryption key, i.e. the public exponent. The privateExponent, on the other hand, has to be large and random, so that it isn't easy to guess.

Python

The recommended library for manipulating RSA keys is M2Crypto, which is a wrapper around the OpenSSL library. There are many other libraries, of course, and you can find an older comparison here. Unfortunately, I was unable to find a downloadable version on the Internet.

Anyway, to be able to manipulate RSA keys you have to import the RSA module from M2Crypto, i.e.:
from M2Crypto import RSA
Then, to load an RSA private/public key from a file, use the following line:
rsa=RSA.load_key('key.pem')
You can also generate a new RSA private key as follows:
rsa=RSA.gen_key(512, 65537)
In this case you are generating a private key with the public exponent 65537 (we saw that this is a very frequently used value) and a 512-bit modulus. You can find out the modulus length using Python's len() function, i.e.:
>>> len(rsa)
512
To obtain N, you can query the attribute n of the object holding the RSA key, i.e.:
>>> rsa.n
To obtain the pair (e, N), i.e. the public key, you can use the following method:
>>> rsa.pub()
Alternatively, you can find out only e in the following way:
>>> rsa.e
As far as I could see, there is no way to find other parameters besides N, e and bit length.

Tuesday, December 25, 2012

Controlling which congestion control algorithm is used in Linux

The Linux kernel has quite an advanced networking stack, and that's also true for congestion control. It is a very advanced implementation whose primary characteristics are modular structure and flexibility: all the specific congestion control algorithms are separated into loadable modules, and a number of them (e.g. Reno, CUBIC, Vegas, Westwood) are available in the mainline kernel tree.
The default, system-wide congestion control algorithm is CUBIC. You can check that by inspecting the content of the file /proc/sys/net/ipv4/tcp_congestion_control:
$ cat /proc/sys/net/ipv4/tcp_congestion_control 
cubic
So, to change the system-wide default you only have to write the name of a congestion control algorithm to the same file. For example, to change it to reno you would do it this way:
# echo reno > /proc/sys/net/ipv4/tcp_congestion_control
# cat /proc/sys/net/ipv4/tcp_congestion_control
reno
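The same change can also be made with the sysctl tool (assuming it is installed), since it manipulates the same kernel parameter:
# sysctl -w net.ipv4.tcp_congestion_control=reno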
Note that, to change the value, you have to be the root user. As root you can specify any available congestion control algorithm you wish. In case the algorithm you specified isn't loaded into the kernel, it will be automatically loaded via the standard kernel module mechanism. To see which congestion control algorithms are currently loaded, take a look at the content of the file /proc/sys/net/ipv4/tcp_available_congestion_control:
$ cat /proc/sys/net/ipv4/tcp_available_congestion_control
vegas lp reno cubic
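You can also load an additional algorithm's module manually ahead of time; for example (assuming the tcp_htcp module was built for your kernel), after which htcp shows up in the list above:
# modprobe tcp_htcp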
It is also possible to change the congestion control algorithm on a per-socket basis using the setsockopt(2) system call. Here is the essential part of the code to do that:
...
int s, ns;
socklen_t optlen;
char optval[TCP_CA_NAME_MAX];
...
s = socket(AF_INET, SOCK_STREAM, 0);
...
ns = accept(s, ...);
...
strcpy(optval, "reno");
optlen = strlen(optval);
if (setsockopt(ns, IPPROTO_TCP, TCP_CONGESTION, optval, optlen) < 0) {
    perror("setsockopt");
    return 1;
}
In this fragment we are setting the congestion control algorithm to reno. Note that the constant TCP_CA_NAME_MAX (value 16) isn't defined in the system include files, so it has to be explicitly defined in your sources.

When you are using this way of setting the congestion control algorithm, you should be aware of a few things:
  1. You can change the congestion control algorithm as an ordinary user.
  2. If you are not the root user, then you are only allowed to use the congestion control algorithms listed in the file /proc/sys/net/ipv4/tcp_allowed_congestion_control. For all the others you'll receive an error.
  3. No congestion control algorithm is bound to a socket until it is in the connected state.
You can also obtain the current congestion control algorithm using the following snippet of code:
optlen = TCP_CA_NAME_MAX;
if (getsockopt(ns, IPPROTO_TCP, TCP_CONGESTION, optval, &optlen) < 0) {
    perror("getsockopt");
    return 1;
}
Here you can download code that you can compile and run. To compile it, just run gcc on it without any special options. This code will start a server (it will listen on port 10000). Connect to it using telnet (telnet localhost 10000) in another terminal and, the moment you do, you'll see that the example code printed the default congestion control algorithm and then changed it to reno. It will then close the connection.
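If you want to experiment right away, here is a minimal, self-contained sketch along those lines; it is only an approximation of the downloadable example (port 10000 and reno follow the description above), not the example itself:
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <arpa/inet.h>

#ifndef TCP_CA_NAME_MAX
#define TCP_CA_NAME_MAX 16      /* not exposed by the system include files */
#endif

int main(void)
{
    int s, ns, on = 1;
    struct sockaddr_in addr;
    char optval[TCP_CA_NAME_MAX];
    socklen_t optlen;

    /* create a listening TCP socket on port 10000 */
    s = socket(AF_INET, SOCK_STREAM, 0);
    if (s < 0) { perror("socket"); return 1; }
    setsockopt(s, SOL_SOCKET, SO_REUSEADDR, &on, sizeof(on));

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(10000);
    if (bind(s, (struct sockaddr *)&addr, sizeof(addr)) < 0) { perror("bind"); return 1; }
    if (listen(s, 1) < 0) { perror("listen"); return 1; }

    /* wait for a single connection, e.g. from telnet localhost 10000 */
    ns = accept(s, NULL, NULL);
    if (ns < 0) { perror("accept"); return 1; }

    /* print the congestion control algorithm currently bound to the socket */
    optlen = sizeof(optval);
    if (getsockopt(ns, IPPROTO_TCP, TCP_CONGESTION, optval, &optlen) < 0) {
        perror("getsockopt"); return 1;
    }
    printf("default congestion control algorithm: %.*s\n", (int)optlen, optval);

    /* switch this particular connection to reno */
    strcpy(optval, "reno");
    if (setsockopt(ns, IPPROTO_TCP, TCP_CONGESTION, optval, strlen(optval)) < 0) {
        perror("setsockopt"); return 1;
    }
    printf("congestion control algorithm changed to: %s\n", optval);

    close(ns);
    close(s);
    return 0;
}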

Instead of a conclusion, I'll warn you that this congestion control algorithm manipulation isn't portable to other systems, and if you use it in your code you are bound to the Linux kernel.

Monday, December 24, 2012

Exaggeration on the subject of geniuses, giftedness and other nonsense...


We are all familiar with the practice of proclaiming geniuses in primary and secondary schools; I remember it as a child, and I believe it is no different today. This cult of geniuses is taken to such an extent that everyone else is, indirectly, told they are stupid and that it is better not to try because they won't succeed - they are not geniuses! At the same time, the "geniuses" are told that they don't have to do anything and that everything will come easily to them. This way of thinking has started to creep into universities as well (I won't mention which ones), and it has begun to upset me a great deal, which is the reason for this post.

But first, an interesting observation. To begin with, genius, or giftedness, is by conventional wisdom a genetic characteristic, something you get at birth and that cannot be changed afterwards. Furthermore, genius and giftedness are institutionalized in various ways, for example through special programs and the like. Has it occurred to anyone that in this way a system similar to the Indian castes is being introduced into society through the back door? And even more interestingly, as far as I remember, the Constitution guarantees equality to everyone! Is it equality if we say that something is for someone who is gifted, and not for the others, who because of their genetics supposedly cannot do it?!

But OK, let's get to the point, because all this proclaiming of geniuses and the like really irritates me. I think it is very harmful for society, and for individuals as well.

Relatively many people understand that a top athlete is not made overnight, and many also suspect that great sacrifices are involved, although I think few are aware of just how great they are - that could be a separate post. However, I believe the same holds for the intellect: one does not become an expert overnight; it takes work, persistence and sacrifice. Many don't understand that! And then they think that if someone tried something for a week, two weeks, a month, maybe even a year, and didn't succeed, then they are stupid. They don't know that years of work are needed. How many years? Well, it depends on what you did before, and how!

One does not become a top expert overnight; it takes years of dedicated work! Don't believe it? Then I recommend reading the article The Expert Mind from Scientific American (there is a free version here). That article describes research into how chess grandmasters are made. The reason chess grandmasters were chosen is that it is relatively easy to track (measure!) progress. The results of that research are fascinating. I remember from childhood how those who play top-level chess were proclaimed geniuses! And playing chess has always been identified with intelligence, that is, with giftedness and genius. Well, the conclusion of that research is that chess grandmasters, like experts in other fields, are made by learning, not by birth! Besides, today there are chess programs at grandmaster level, and some have even managed to beat the world champion. Are those programs geniuses? I think we will all agree they are not!

Now you're wondering why I even mention this? Well, because one of the things that upsets me indescribably is the talk of geniuses, the gifted and similar nonsense (I deliberately say nonsense!). It starts somewhere in primary school and then runs through the entire education. By itself it wouldn't be so bad if it weren't used as a basis for discrimination, and if it didn't in some way destroy people - both those who are proclaimed geniuses and those who are not, i.e. who are considered below average or average! I won't even mention the tendency of parents to proclaim their children the new geniuses! So, do you want me to repeat once more what I claim? This time much more clearly? Sure, here it is:
Proclaiming people geniuses has nothing to do with reality, and at the same time it destroys those people, just as it destroys and belittles the people who are not proclaimed geniuses. About those who are considered incapable, and especially those who consider themselves incapable, I won't even talk! The same approach is harmful for society as a whole, because society does not use all the potential it has at its disposal, which is very precious.
I can already hear you shouting: But children differ from one another, some are better, some worse! Yes, that's true, they really are different. And I'm not claiming at all that they are the same - on that we agree! However, what we don't agree on is the cause! Namely, the popular opinion is that geniuses are at work, and I claim that isn't true! From those words - genius, gifted and the like - it follows that this is something acquired at birth (by genetics) and that it cannot be influenced, or can only be corrected marginally.

I claim that the significant differences lie in the following:
  • motivation and interest - not everyone is equally motivated for something, nor is everyone interested in everything. I think these two are very intertwined, i.e. they can influence each other.
  • work and persistence - being motivated and interested is not enough; it takes work and persistence, over a longer period of time, to achieve something.
  • progress through work - working in itself is not enough. For example, if learning a foreign language consists of repeating one and the same sentence a thousand times a day, every day, then whoever works like that, no matter how motivated and persistent, will not achieve much.
  • availability of the necessary materials - what good is all the will in the world if there are no materials to learn from and learn with. Here I would also include the question of what type of person we are and whether the materials suit that type; for example, some people are more auditory types while others are visual.
  • help from an expert - if you have an expert at hand, that can help enormously, but unfortunately it can hinder just as much if it is not approached in the right way.
  • reaction to mistakes and results - most people try to solve a problem as quickly as possible and in doing so don't try to learn from their mistakes at all, but only to remove them as quickly as possible; and once they get a result they are satisfied, even though they can see it could be better!
Hidden in all these points there is, of course, also the encouragement of our environment. In fact, when we as children acquire our first habits from our parents, those habits are reflected in all these differences and ultimately shape us as persons in the initial - and very important - phase of our lives.

When I just remember my primary and secondary school, I almost feel sick! Incompetent and uninterested teachers and professors, who themselves nurtured the cult of geniuses and didn't try to explain to us, to me, as children, the misconceptions we had. And how could they, when they themselves believed in those misconceptions. I must say, though, that the system was built in a way that didn't encourage them to be any different, and besides, sometimes they didn't even know what we were thinking - although that's a tricky argument, because one could say it's their job to know. But regardless of everything, in the end, it somehow seems to me that my current situation is the result of my parents' influence (you have to study, that's your bread!) and a great deal of luck, something many others cannot say for themselves.

I will conclude by saying that the truth is probably, as usual, somewhere in the middle. However, I think that society exaggerates in glorifying IQ tests and geniuses, and in that sense it destroys many people and harms itself.

Finally, here are some more texts to read if you are interested:
The following links were added quite a bit later than when this post was originally written:

Sunday, December 9, 2012

Few remarks on CS144...

I have been teaching computer networks for the past 10 or so years, and during that time I got used to a certain approach to teaching this subject. But as I already noted in the post about the e2e design principle and middleboxes, I'm watching the course Introduction to Computer Networks (CS144), given at Stanford University, the main reason being that I wanted to see how others are doing it. While I was watching part of the first lecture, What is Internet - 4 layers, I had some comments, so I decided to write a post about them. But then I decided to comment on the whole course, not just a single lecture. At least that is my intention at this moment.

One very important thing before I start. Note that every course has to simplify things and remove as many details as possible in order to make things "learnable". So sometimes lecturers don't tell the complete truth, or they even say something that isn't true. This is acceptable as long as they correct themselves eventually. But because of this, there are many potentially very different approaches to teaching something, and I'm looking from the viewpoint of one specific approach, namely the one I'm using. This, in turn, might mean that some, or even the majority, will not agree with my comments on CS144 in this post. That's perfectly OK, but anyone reading this post should bear that in mind and not take things for granted!

What is Internet - 4 Layers

The purpose of this lecture is to teach you about layering in networks. This is a very important concept and mandatory knowledge for anyone doing anything that touches networking.

But there are a few things that I don't like in the approach taken by CS144. The first, more a correctness problem than something major, is when the lecturer notes where particular layers are implemented. He doesn't give complete information in this case because he says that everything below the application layer is implemented in the operating system. But the truth is that parts of the "link layer" are implemented in hardware, which definitely isn't the operating system, and in firmware, which also isn't part of the operating system. Also, where the line between hardware/firmware and the operating system lies varies greatly. There are, for example, hardware accelerators for TCP, and in that case hardware/firmware reaches almost up to the application layer.

Next, the ISO/OSI RM is mentioned only briefly, and the comment was "... that was widely used." It was introduced because the network layer is frequently called Layer 3, while in the model used in this course it is layer 2. Before continuing, let me just note that Layer 2 is also frequently used, and Layer 7 (L7) isn't used so rarely, either. Anyway, first, ISO/OSI has never been widely used for the purpose it was created for, unless you count the bureaucratic work done within OSI, which is a lot of (bureaucratic) work! On the other hand, it is widely used as a reference model, i.e. it is used to compare different networks. It is also widely used when we try to be general, not tied to a specific network. After all, the Internet is only one instance among many other networks, past and future. Now, I agree that it is good policy these days to stick to the Internet when teaching the basics of networking. But it should be clear that the Internet isn't the only network around. If OSI did something right (well, truth be told, they did several things right), then it is the stuff around the network model (or architecture). Note that there are some things that are not right (e.g. the number of layers), but in general it is a very well thought out subject. By the way, the physical layer has much more to do than only wires and connectors, if nothing else because there are three main modes of communication (wireless, wired and optical) and then there are countless variations within each of those.

Now, when the lecturer compares the 4-layer model he uses with ISO's, he says that TCP covers the transport and session layers of the ISO/OSI RM. This is the first time ever that I have heard that TCP covers the session layer. This is based on his premise that the purpose of the session layer is connection establishment. But that's simply not true. The purpose of the session layer is the management of multiple connections, which can degenerate into a single connection, and in that case the session layer is very thin - in terms of functionality. On the other hand, connection establishment for a single connection is part of the specific protocol within the transport layer (there are, of course, protocols that don't have connection establishment). Take, for example, the OSI transport protocol TP4, which has connection establishment, transfer and disconnect phases, just like TCP, and OSI definitely places it in the transport layer, not the session layer!

Finally, the lecture implies that layers are the same thing as protocols, i.e. that the transport layer is TCP. But a layer is just a concept, while TCP is an entity, an implementation, that logically belongs to a certain layer.

What is Internet - IP Service Model

This lecture is about the IP service which, as the lecturer says at the beginning, is what IP offers to the layer above and what it expects from the layer below. But I think that this lecture actually mixes the services expected from the layers below and above with the inner workings of IP that are invisible to the higher/lower layers:
  • Packet fragmentation isn't visible outside of the IP protocol because it is the task of IP itself to reassemble fragmented packets before handing data to the protocol in the layer above. Also, when IP fragments packets, the protocols in the lower layers don't know, nor do they care, whether those are fragments or not. They are treated as opaque data by the lower layer protocols.
  • The feature that prevents packets from looping forever is also a mechanism internal to the IP protocol, and not something that higher or lower layers should know or care about. True, there is an ICMP message that informs the sender that this happened, but as I said, it is not intended for other layers - if nothing else, because those layers don't determine the value of the TTL field. That is at the sole discretion of the IP protocol itself.
  • The checksum in the IP packet isn't used to prevent an IP packet from being delivered to the wrong destination. Let me cite RFC 791, which says:
    This checksum at the internet level is intended to protect the internet header fields from transmission errors.
    So it is intended to protect the header from errors, not to prevent delivery of an IP packet to a wrong destination. True, it might happen that the error occurs in the destination address and that, in that case, delivery is prevented, but this is only a special case, a consequence, not something specifically targeted.
    Furthermore, while I'm at the checksum, it uses simple addition and thus it is a very weak protection mechanism. Actually, it was so useless, and it was also slowing down routers, that it was removed in IPv6. By the way, TCP uses the same kind of checksum, which is equally weak.
  • Options within IPv4 are, again, specific to the IPv4 protocol and not something offered as a service to higher layers.
I have to admit that the bullet "Allows for a new versions of IP" totally confused me.

Next, the definition of a connectionless service given is that no state is established in the network. That is true, but the point is that this is not a feature of the service but of the protocol's operation, and thus the protocols above (i.e. in higher layers) simply don't care about it. It is possible for some protocol to offer a connection-oriented service while operating over a connectionless "subnetwork" (e.g. TCP over IP), just as it is possible to offer a connectionless service over a connection-oriented "subnetwork" (e.g. IP over ATM). You can read more about connectionless vs. connection-oriented in my other post.

Note that the term IP layer is somewhat wrong, or at least debatable. Namely, there is no IP layer but a network layer, in which one of the protocols is the IP protocol. Now, I'm aware that many say IP layer, so if we assume that the majority is right, then I'm wrong. :)

Also, at the end of this part it was interesting to see the mixed use of the terms datagram and packet. I almost always use the term packet, rarely datagram, but I'll have to take a closer look at this.

Anyway, it could be that the lecturers of this course and I have different views on what a "service model" is, but I didn't notice them defining what they mean by it; they just started to explain the service models of different protocols.

Now, while solving the quizzes, the following questions surprised me:
  • An Internet router is allowed to drop packets when it has insufficient resources -- this is the idea of "best effort" service. There can also be cases when resources are available (e.g., link capacity) but the router drops the packet anyways. Which of the following are examples of scenarios where a router drops a packet even when it has sufficient resources?

    I thought that the answers were a, c and d (corrupted packet). But d was rejected.
  • In an alternative to the Internet Protocol called "ATM" proposed in the 1990s, the source and destination address is replaced by a unique "flow" identifier that is dynamically created for each new end-to-end communication. Before the communication starts, the flow identifier is added to each router along the path, to indicate the next hop, and then removed when the communication is over. What are the consequences of this design choice?

    Here, I thought that the answers were a and c. But apparently, a and d were accepted. Now, c says that there is a need for a control entity to manage flow labels. It might be that I misunderstood "control entity", i.e. that it actually means something centralized. In that case I'm probably wrong. And d says there is no more need for a transport layer. I would like to hear some arguments for that. Anyway, I'll have to read a bit more about ATM, after all.

What is internet - TCP UDP

This video starts with an introduction in which the following sentence is stated: ... two different transport layer services, one of them is TCP and the other is UDP. The problem is that TCP and UDP are not services but protocols that offer a service.

"TCP is an example of transport layer". As I said, TCP is protocol, not a layer!

I wouldn't say that the property "stream of bytes" means that the bytes will be delivered in order. That's more a property of reliability. What "stream of bytes" means, in the case of TCP, is that there is no concept of a message and message boundaries. So, if the application sends 500 octets twice, they can be delivered on the other end in one go of 1000 octets, in three rounds, etc.

The source port isn't only used so that TCP knows where to send back data, but also so that the receiving entity knows how to demultiplex an incoming TCP segment. Namely, every connection is uniquely identified by the four-tuple (src IP address, src port, dst IP address, dst port), and so the source port is used for demultiplexing.

The checksum in TCP is quite weak, as I already argued, so it is not a particularly good mechanism for detecting errors.

It is possible for a TCP connection to be closed in three exchanges, but it could be that this will be explained later.

What is the Internet - ICMP

I have to admit that placing ICMP in the transport layer is quite a novel approach to layering Internet protocols. The lecturer says that, strictly speaking, it uses IP and thus it belongs to the transport layer. The truth is that it is far from clear where this protocol belongs, but the point is that when you place protocols into layers, it matters not only what the protocol uses, but also what it offers and what it is used for - with respect to layer functionality. So, when we talk about ICMP, it doesn't offer services to the layer above, which would be the application layer, but it doesn't offer services to the transport layer, either. Also, the transport layer offers end-to-end communication services to the application layer. Note that ICMP, on the other hand, allows communication of network layer entities (IP protocols) between any two nodes within the network. It is produced and consumed by IP protocol implementations.

Two additional things have to be clarified that someone might now bring up as counterarguments. First, there are applications that use ICMP: ping and traceroute. The truth is that ICMP was actually never designed to be used by applications, neither ping nor traceroute (especially not traceroute - search for the word "jealous" on this page, it's an interesting story). It just turned out that something can be used for a purpose not initially intended, and so we now have those applications. But I think that ping and traceroute directly access the network layer, that is, ICMP.

The second thing that someone might use to argue that ICMP isn't in the network layer is OSPF. Namely, OSPF uses IP directly as a transfer service, not UDP or TCP. So someone might say that by placing ICMP into the network layer I'm placing OSPF in the network layer too. There are those who think that OSPF is there. But I think that OSPF is in the application layer, along with other routing protocols. And that is for two reasons:
  1. Routing protocols communicate end-to-end. It doesn't matter that an "end" in this case might be, and is, a router somewhere within the network; the point is that the OSPF application treats it as the intended destination, the end. With ICMP, any node might - for example - drop a packet and generate a Time Exceeded message. Note that the node generating the error message isn't an end point of the communication!
  2. The functionality of the protocols is vastly different. And not only that, but also who consumes the packets. ICMP is consumed, and generated, by the IP protocol (minus ping/traceroute, which I already said are special cases). OSPF, on the other hand, is quite a complex protocol, and the IP protocol directly hands its data to the OSPF application process. IP doesn't consume those messages, nor does it produce them.
So, I think OSPF is in the application layer, while ICMP is in the top part of the network layer.

Additionally, let me return to the lecture slides. Slide number 3 shows the data for ICMP coming from the application. That's not true: the data comes from the network layer itself, and ping and traceroute are misusing layering.

On slide 5 ICMP is treated as a network protocol in the same sense as IP. But I think that's misleading. This actually leads me to one more argument for why ICMP belongs to the network layer. Namely, ICMP doesn't have any separate implementation; there is no ICMP module within an operating system. There is an IP module (protocol implementation) that produces and consumes ICMP messages.

OK, so much about that lecture. Finally, when I was trying to solve the quizzes, I had a problem with the first question: Which of the following statements are true about the ICMP service model? The offered answers were:
  1. ICMP messages are typically used to diagnose network problems. This is true, but it's not a service model.
  2. Some routers would prioritize ICMP messages over other packets. This one isn't true. Routers treat ICMP messages like any other message (unless specifically configured otherwise).
  3. ICMP messages are useless, since they do not transport actual data. ICMP is definitely not useless.
  4. ICMP messages can be maliciously used to scan a network and identify network devices. Yes, they can, but that's not the service model this question asks about.
  5. ICMP messages are reliably transmitted over the Internet. They are transferred over IP, which is unreliable.
After trial and error it turned out that b is also true!? But then again, I can say that I made a mistake because I didn't notice that "some would" prioritize, which could be true, and "would" doesn't mean it is necessarily so. Huh, I hate it when someone plays with words.

OK, I'll stop here because this post has been brewing for too long, and as I have a lot of other work to do, it will take time until I watch all the lectures. Not to mention that the post is becoming quite large. So, I decided to publish it; expect new posts eventually...

Thursday, December 6, 2012

A gem from our ignorant journalists, part 8...

So, here is a new gem in which our ignorant journalists relay (I won't say write!) without understanding things they know nothing about. The occasion this time is the ITU's attempt to introduce control over the Internet, which I have already written about and, the way things are going, will write about again. But let's get back to the topic of this post, which is the way our journalists wrote about that event.

The first thing I saw was the news about it on Monitor, and as always, Monitor put up links to other newspapers writing about the topic in more detail. But let's first look at what Monitor wrote. Here is a copy/paste for reference:
Members of the UN's International Telecommunication Union have agreed to adopt a new Internet standard which will allow telecommunication companies easier surveillance of data on the Internet. A special DPI inspectorate will be established which, according to the UN, will protect copyrights. But experts warn that by digging through the data the inspectorate will violate users' privacy.
So, the first piece of nonsense they wrote is that the ITU is adopting a new Internet standard! Now, everyone together: the ITU does not adopt Internet standards! And if that isn't clear, I suggest repeating it a few more times, and I suggest the journalist write it down a few times as well! Internet standards can only be proposed by the IETF and approved by the RFC Editor, and no one else!

The second gaffe is even better: "A special DPI inspectorate will be established...". Here I burst out laughing! Well, OK, first I couldn't believe they wrote that, and then I burst out laughing! Namely, DPI stands for Deep Packet Inspection and denotes examining everything that packets carry. A Croatian translation would be something like "deep packet analysis", "detailed packet analysis", or along those lines. The procedure itself doesn't sound like anything special, but you need to know some basics of computer networks to understand how much it deviates from the standard handling of packets during their transfer through the network. Finally, and most importantly for this post, it is a technique or a method, whichever you prefer, and not some body that is being established. So, the journalist understood the English word "inspection" in the sense of a State Inspectorate, and not as the general word "examination/checking". Finally, he confirmed that with the last sentence: "But experts warn that by digging through the data the inspectorate will violate users' privacy!" Hahahaha... I'd just love to see the inspectorate that would be able to monitor all the traffic on the Internet!

OK, I then decided to click over to Večernji list's page to check what they wrote. Well, they too babble about a "standard for the Internet!". I mean, that expression is so meaningless that even with the best will I can't come up with a sensible interpretation of it! Furthermore, they too, like Monitor (or the other way around), treat this as some kind of inspectorate - how else to interpret the following sentence: "The UN claims that the introduction of the inspectorate called DPI...".

Oh yes, then there is a statement along the lines of: British computer expert Tim Berners-Lee, also called the "father of the Internet"... Let's establish one thing: the Internet has existed since roughly the 1970s, and the gentleman in question came up with the Web in 1992. So how can he be the "father of the Internet" if the Internet existed some 20 years before his invention?

OK, then the journalist continues with:
That is a serious violation of privacy. Someone carries out a broad inspection on your connection and reads all the data and all the web pages, stores them under your name, and can give the address and telephone number to the Government when asked, in connection with selling to the highest bidder - said Berners-Lee.
It took me a while to even parse this, and I still didn't succeed! First, what does "broad inspection" mean? And if there is a "broad inspection", is there a "narrow inspection"? And what would the difference be? It further says that someone reads "all the data" and "all the web pages", and I cannot understand how "all the web pages" differ from data? But fine, let's say they wanted to point out, for the sake of the general public, that web pages are included too. But then follows the real nonsense "...stores them under your name...", which I also cannot understand at all, and the totally meaningless icing on the cake "...and can give the address and telephone number to the Government when asked, in connection with selling to the highest bidder".

Then a bit of dramatics in the form of the headline "USA is losing control over the Internet". And a continuation in tabloid tone:
The Secretary-General of the ITU rejected the criticism and added that such a proposal does not represent "a threat to freedom of speech". - This is our chance to draw the world map and connect what is currently not connected, while ensuring that it is an investment in creating the infrastructure needed for exponential growth in voice, video and data traffic - said Hamadoun I. Toure, adding that it is "a golden opportunity to enable affordable Internet access for everyone, including the billions of people around the world who cannot get online today".
The first part I marked in bold I just cannot understand. But there is probably some truth in this paragraph, in the sense that Toure really said it, because I believe the last part printed in bold could indeed have been uttered by some bureaucrat from the ITU.

You can find the other posts in this "series" here.

Wednesday, December 5, 2012

Keychains, gifts, GPS, Romanians and other nonsense...

So, for the second time now I've heard someone mention that they received an e-mail message with the following content:
These days, at many places - gas stations or parking lots - free car keychains or small trinkets for your car are being given away.

Do not take these things! They have a chip built in - criminals track you from the gas station all the way to your home and that way they know when you are at home. When you are away, they use the opportunity to break into your flat/house.

These are Romanian criminals who have come up with a new way of breaking in.

Inform your friends!
If you want my opinion, in short: DO NOT FALL FOR THIS NONSENSE! Simply delete the message and carry on as if nothing happened.

OK, and now in a bit more detail. This is a pointless chain letter. In other words, the message spreads uncontrollably and confuses people in the process (which is why it spreads), while at the same time wasting network resources and the time of those who receive it. So the damage is indirect, but not negligible.

But here are the arguments that call this particular message into question:
  1. First and foremost, the Ministry of the Interior or some other competent institution should issue a statement, and not in just any way, such as sending an e-mail to acquaintances, but in some place that could be treated as "official". For example, a press release, the police website, or something similar.
  2. As soon as I see in a message something like "... inform all your friends because", followed by some threat that you will suffer for the rest of your life, I'm even more convinced it is a false alert.
  3. How would the criminals even know what you will do with the keychain? Or alternatively, how do they know you are carrying it and/or haven't given it to someone else and/or misplaced it somewhere or simply thrown it away?
  4. Then, why would they track you with GPS when they can simply follow you anyway?
  5. How do they know you are on your way home? How do they know, when you stop, whether that is your home? And yes, if someone says they know you are home because, for example, the coordinates where you stopped are inside some building, I can reply with a counter-question: how do they even know, based on a coordinate, that it is your home? And there is an additional question: if the coordinate is at a large, densely populated building, how do they know which flat it is?
  6. If someone intends to break in and steal, wouldn't it be better to pick a neighborhood/house/flat based on some other parameters (say, how busy it is, how hidden from passers-by, and so on), rather than grabbing some random passer-by?
  7. And yes, what if you live 100 km or more from the gas station where they give you the keychain, and you just happened to be passing through? I mean, on what basis would they know roughly where you live, so they'd know whether you are nearby? A ZG (Zagreb) license plate doesn't say much, and Zagreb is big! And how would foreigners, in this case Romanians, know that?
  8. And how do they know you don't have a bunch of kids, a grandfather, a grandmother, an unemployed wife (or husband) or who knows who else at home all the time?
  9. And how much would the equipment they give you even cost? And actually, over what does that "tracker" send its signal, and how far? It certainly can't "reach" more than a couple of kilometers, so how would they pick it up? By following you? Why would they give it to you then, if they are following you anyway? You think it transmits over UMTS? Well, that means they would have to buy a Tele2/Simpa package for every such transmitter. In short, this line of thinking leads nowhere.
  10. What does "a new way of breaking in" even mean? Do they perhaps carry your things away by teleportation? Now that would be something new...
  11. A quick Google search of the first sentence shows that this e-mail - in identical form - has spread all over the former Yugoslavia, but nowhere does it say that anyone has actually seen such a keychain, and I won't even mention a picture! And not one of those "hits" is anything that could even remotely be called "official".
In short, I don't think this deserves any more thought, let alone discussion...

Friday, November 30, 2012

Internet Freedom - Well done EU!

If you think that the Internet brought a revolution only to individuals (and maybe to the various businesses that market themselves over the Internet), then you miss one important link: telecommunication companies. Before the Internet they were in charge of everything related to communication and they did whatever they wanted, supposedly in the name of the customers. If they thought that something wasn't good, then no matter what people wanted, they weren't getting it. And we shall not forget pricing, which generated huge revenues. But after the tremendous success of the Internet, things drastically changed. You can read about some of the underlying reasons in my other post, but the key is that control was given to the users, not the network (i.e. the telecoms). Now, telecoms are what they should be: data carriers only.

All good, but the problem is that there are no huge profits in data transfer, at least not like there used to be, and the telecoms don't just sit and wait. And so, every now and then we hear of some brilliant idea coming from the telecommunications industry by which they either try to bring back the good old days, or they try to offer something that doesn't make sense. Just in case you didn't know, ATM was one such idea that, fortunately, was a big failure! Even more interesting is a comment on this blog post from a guy (or guys) trying to reimplement some protocols from mobile telephony. They criticize the specifications produced by the telecoms (and the related industry) for introducing new things, not because they are necessary, but because they are patented and thus allow manipulation!

But these days there is one other "very interesting" idea. Probably not many people know that the ITU is trying to introduce mechanisms to regulate the Internet. Fortunately, the EU isn't approving that, and neither is the US. I approve of that wholeheartedly, and I cannot describe how outraged I am when I think about the telecoms and the ITU!

But it is probably enough to point out who is proposing the regulation for it to be clear what the real motives are. Also interesting are the demands by some countries that Google and other Internet providers should have to pay them in order to be allowed to distribute content to their citizens. This is absurd, because who forces users to access Google?

And the ITU is also something I really dislike, a lot! It is a bureaucratic institution that produces standards for telecommunications. It's a dinosaur of the past. If you, as a single person, want to propose something, or just take part in some activity, you first have to be a member of some member state's standardization body, which isn't free. Then you have to be delegated as a representative to the ITU, and only then can you take part in some activity. And now we come to the best part: the specifications that were produced for a common purpose were quite pricey. Truth be told, they are now distributing specifications free of charge, but if it weren't for the Internet, we would still have to pay for them. Contrast that with the IETF, where membership and participation are open to everyone who wants to participate. Also, all the specifications produced by the IETF are available to anyone for free. Now, I'm not claiming that the IETF is perfect, but I certainly do claim that the IETF is much better than the ITU.

And while I'm on the subject of the ITU/IETF: it happened to me several years ago that I called our Ministry to ask for funding to attend an IETF meeting. Apparently this particular Ministry was willing to fund such things, or so it was written on their Web pages. The only caveat was that it didn't cover the IETF, for the simple reason that the IETF isn't as bureaucratic as the ITU. To cut a long story short, the bureaucrat I talked to didn't understand what I was talking about, nor was he interested in finding out. And it ended without a grant...

Thursday, November 29, 2012

Few notes about sslstrip tool...

I decided to test the sslstrip tool. The idea was that I would use it to demonstrate to users that they should check whether https is present when they are accessing a site where they have to type a password or some other sensitive data. To create the test network I used Windows 7 running within VMware Workstation, and using iptables I redirected traffic from the virtual machine to local port 80 where I started the sslstrip tool. But no matter what I did, it didn't work. It seems that when VMware is used, iptables redirection doesn't work as expected. In other words, it seems that the netfilter hooks aren't placed within the VMware network stack.
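For reference, the kind of redirection rule I tried was along these lines (just a sketch; the interface and port are assumptions - vmnet8 is VMware's NAT interface on the host and 10000 is sslstrip's default listening port):
iptables -t nat -A PREROUTING -i vmnet8 -p tcp --dport 80 -j REDIRECT --to-ports 10000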

I managed to get around that issue by modifying the hosts file within Windows. Namely, you should open the file C:\Windows\System32\drivers\etc\hosts and add the following line there:
192.168.x.1     www.facebook.com facebook.com
The exact IP address is the one assigned to the vmnet8 interface on the host operating system. Now start Firefox as usual and type into the URL bar:
http://www.facebook.com
Note that I'm explicitly telling Firefox to use http, not https. Anyway, after I did it this way everything worked as expected.

The next "problem" you migh have is that no matter what you do, the site you access automatically switches to https. The reason is HSTS. It is used by server to inform Web browser that it should be accessed only through SSL connections. For this reason sslstrip doesn't work with sites that use HSTS, like Google  and Twitter. But, it doesn't mean that those sites are completely protected. If the client is accessing those sites for the first time or the client never used https to access them, then HSTS can be prevented. The point is that HSTS information is transferred only via https connection. Anyway, to get around this clear history (i.e. go Tools then Clear Recent History... and select to clear everything).

And finally, I don't think it is necessary to enable forwarding in the Linux kernel in order for sslstrip to work, i.e. the following command is unnecessary:
echo 1 > /proc/sys/net/ipv4/ip_forward
Namely, the kernel isn't forwarding IP packets for this to work; sslstrip acts as a proxy and thus the kernel isn't doing any relaying. But in case you are diverting only a part of the traffic, e.g. only HTTP, while the kernel handles the rest, e.g. DNS, then forwarding in the kernel is necessary.

Friday, November 23, 2012

Zimlets for managing posix & samba attributes...

Well, this isn't actually news, but nevertheless I have managed to avoid it for some time now. Namely, Zimbra, with the upgrade to 7.2, removed the plugins that are used to manage Samba and Posix accounts in its LDAP directory. Now, whenever someone asked about these Zimlets, the answer was "This was never supported by Zimbra and thus someone from the community has to step in." If you Google a bit, you'll easily find it, e.g. here or here. Now, this is probably a perfectly reasonable answer from Zimbra's standpoint, but I believe Zimbra should know that this plugin was one of the more frequently used ones (let me guess: because it's useful?) and that they have to listen to their users.

But whatever is or was going on with those plugins, I had to have them because one of my setups keeps all account databases in Zimbra. When I disabled the Samba and Posix Zimlets, everything worked as usual, apart from the fact that I was unable to add new users via the Web interface. After managing to get by like that for some time, the moment came when I had to add another user account, and I had to see what I was going to do with the non-working Web interface.

After some googling I discovered that someone managed to fix those two plugins; also, at the time this post was written, there was no news on whether those plugins work with version 8. So, in short, don't upgrade yet if you are using these plugins. To see what was changed in the plugins to make them work, take a look at this post. In any case, go to Zimbra's gallery and download the Posix and Samba Zimlets. The versions I used are 28.5.12 - v6.1 for both Zimlets. Now, before installing, open the archives, and in each one you'll find a config_template.xml file. Open those files in a text editor and fill in the correct values. The most important one is the LDAP suffix, which is by default set to dc=domain,dc=tld and which you should change to reflect your domain. For example, if your domain is example.com then the suffix will be dc=example,dc=com. After you've made the changes, save the files and put them back into the archives. If you don't do that, you'll receive error reports when logging into the admin console, and there will also be no existing Samba and Posix groups, not to mention that you'll be unable to create new accounts.

OK, the last step is to undeploy the old versions - in case you didn't already - and deploy the new ones. After deploying, you should log out and back in, and you should see their options under the Configuration section (in the left pane). If you click on, e.g., Manage Samba Groups, you'll see your existing Samba groups. Similarly for the Posix Groups option. If there are no groups (and you know there should be), then you probably messed up the LDAP suffix I was talking about.

And that's it. To finish, if someone from Zimbra is reading this post, I have a message for you: don't keep answering so many people that you don't support something because it was always unsupported. I don't think that's relevant. If people are using it, and they find it useful, then you should support it, or at least devote one engineer-day to fixing the problem. It isn't that expensive, and people will have a much better opinion of Zimbra.

Friday, November 16, 2012

End2end design principle, middleboxes and a bit about TCP...

I was just watching a guest lecture given by Jana Iyengar to the students of the CS144 course at Stanford. In his lecture he talks about the e2e design principle and the rise of middleboxes. He then goes on to conclude that middleboxes (especially NATs) are a problem of today's Internet. And I couldn't agree more! It's a known fact that the Internet was built in such a way that the network is dumb, while the end nodes are smart. When I say smart, it means that functionality is placed within the smart part of the whole system, while when I say dumb, it means it performs only one simple function. In the case of the Internet, the network only moves data from one end to the other, and almost nothing else. This design principle was a key feature that allowed the Internet to evolve into today's form and become a ubiquitous network. To better illustrate this point, contrast the Internet with the telephone network. The telephone network was built so that the network itself is smart, while the end nodes (telephones!) are dumb. There is a nice illustration of this difference that I once saw and liked very much. Here is a reconstruction of that illustration:

[Illustration: the telephone network as a smart network with dumb endpoints, versus the Internet as a dumb network with smart endpoints]
Now, when you wanted something new from your telephone that involved the telephone network, you had to wait for your telephone company to introduce the service in the network, and only after that could you use it. Contrast that with the Internet: you just have to install a server/service on your computer, and whoever wants to access it installs a client on their machine. And that's it, no changes necessary in the network. Actually, the network neither knows nor cares what you are doing, and everything works. There is a great example of this: the Web. The Web wouldn't have succeeded if Tim Berners-Lee had had to wait for telephone companies to do something. And since the Internet is popular thanks to the Web, the Internet itself wouldn't have succeeded either. Note that the telephone network functions that way for historical reasons. But the telephone providers have no incentive to change it. As long as they control the network, they have the revenues. The moment they only transfer data, the revenues are with someone else. And that's the case today with content providers (Google, Yahoo, Microsoft, Youtube, ...) and ISPs.

There is one additional reason to design a network by pushing functionality to the edge, and that's scalability. I think it's quite obvious that the simpler something is, the bigger it can be and the easier it is to grow it, so I won't argue this point any further.

What has been happening now, and for some time, is that NATs are proliferating throughout the network. And since NATs heavily inspect the packets that pass through them, they have to have built-in knowledge of higher-layer protocols. What that means in turn is that if you are introducing a new service, you have to have support for it built into NATs. And there are two problems there: first, the abundance of installed NATs that cannot be changed, and second, bugs within those devices. So, in essence, we are approaching the way telephone networks work. Of course, there are other problems with NATs, but this one is huge!

Jana Iyengar then talks about SCTP, and the fact that this protocol has existed since 2000 and still hasn't managed to gain much ground. Middleboxes, more specifically NATs, are to blame: they pass TCP, fiddling with it along the way, but nothing else. So, one of the things he has been doing is using TCP as a communication substrate. In other words, he relied on TCP being passed through middleboxes and then built a protocol on top of it. That protocol can then be used to build other protocols that will work across middleboxes. The modification they made to TCP allows it to deliver out-of-order data; they termed the result uTCP, unreliable TCP. And, it seems, no one thought of that before.

But, I have to say that in 2011 I worked with a student on trying to introduce certain QoS parameters into TCP. The motivation was streaming services. As a part of this we allowed TCP to be unreliable, i.e. it could drop data in order to meet other QoS parameters. The work is described in a diploma thesis available online, unfortunately only in Croatian. But I now intend to rewrite it into a paper, as it was an interesting experiment...

Tuesday, November 13, 2012

Pearls of our ignorant journalists 7...

So, it all started with this article in Poslovni. Already there I was thrown off balance by the stigmatization of the academic community with phrases like "freeloading academic slackers". After that I didn't even want to read the article any further, because the moment someone starts with such bombastic statements I no longer consider him or her relevant. However, I then saw on Facebook that the story was also picked up by Večernji. But Večernji raised it to a whole new level, which earned that article, and its author, a mention in my column of journalistic pearls!

Namely, the headline states that someone "invented a navigation application"?! What kind of statement is that!?!?! Is the journalist aware of the meaning of the word invent? Maybe he is, but in that case he has probably been living in a hole somewhere, completely unaware of what is going on around him, because navigation applications have existed for as long as GPS has, and probably even before that. I'll stop here; I would just like someone to tell me what his application does that isn't already done by N other applications (where N is a very large number)...

You can find the other posts from this "series" here.

Monday, November 12, 2012

Do you want to connect to IPv6 Internet in a minute or so?

Well, I just learned a very quick way to connect to the IPv6 Internet that really works! That is, if you have Fedora 17, but for other distributions it is probably equally easy. Here are the two commands to execute that will enable IPv6 network connectivity on your personal computer:
yum -y install gogoc
systemctl start gogoc.service
The first command installs the package gogoc, while the second one starts it. Next time you'll only need the start command. After the start command apparently nothing will happen, but in a minute or so you'll have a working IPv6 connection. Check it out:
# ping6 www.google.com
PING www.google.com(muc03s02-in-x13.1e100.net) 56 data bytes
64 bytes from muc03s02-in-x13.1e100.net: icmp_seq=1 ttl=54 time=54.0 ms
64 bytes from muc03s02-in-x13.1e100.net: icmp_seq=2 ttl=54 time=55.0 ms
^C
--- www.google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 54.023/54.551/55.080/0.577 ms
As you can see, Google is reachable on IPv6 addresses. You can also try traceroute6:
# traceroute6 www.google.com
traceroute to www.google.com (2a00:1450:4016:801::1011), 30 hops max, 80 byte packets
1 2001:5c0:1400:a::722 (2001:5c0:1400:a::722) 40.097 ms 42.346 ms 45.937 ms
2 ve8.ipv6.colo-rx4.eweka.nl (2001:4de0:1000:a22::1) 47.548 ms 49.498 ms 51.760 ms
3 9-1.ipv6.r2.am.hwng.net (2001:4de0:a::1) 55.613 ms 56.808 ms 60.062 ms
4 2-1.ipv6.r3.am.hwng.net (2001:4de0:1000:34::1) 62.570 ms 65.224 ms 66.864 ms
5 1-3.ipv6.r5.am.hwng.net (2001:4de0:1000:38::2) 72.339 ms 74.596 ms 77.970 ms
6 amsix-router.google.com (2001:7f8:1::a501:5169:1) 80.598 ms 38.902 ms 39.548 ms
7 2001:4860::1:0:4b3 (2001:4860::1:0:4b3) 41.833 ms 2001:4860::1:0:8 (2001:4860::1:0:8) 46.500 ms 2001:4860::1:0:4b3 (2001:4860::1:0:4b3) 48.142 ms
8 2001:4860::8:0:2db0 (2001:4860::8:0:2db0) 51.250 ms 54.204 ms 57.569 ms
9 2001:4860::8:0:3016 (2001:4860::8:0:3016) 64.727 ms 67.339 ms 69.540 ms
10 2001:4860::1:0:336d (2001:4860::1:0:336d) 80.203 ms 82.302 ms 85.290 ms
11 2001:4860:0:1::537 (2001:4860:0:1::537) 87.769 ms 91.180 ms 92.931 ms
12 2a00:1450:8000:1f::c (2a00:1450:8000:1f::c) 61.213 ms 54.156 ms 55.931 ms
It simply cannot be easier than that. Using the ip command you can check the address you were given:
# ip -6 addr sh
1: lo: mtu 16436
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
3: wlan0: mtu 1500 qlen 1000
    inet6 fe80::f27b:cbff:fe9f:a33b/64 scope link
       valid_lft forever preferred_lft forever
5: tun: mtu 1280 qlen 500
    inet6 2001:5c0:1400:a::723/128 scope global
       valid_lft forever preferred_lft forever
And also routes:
# ip -6 ro sh
2001:5c0:1400:a::722 via 2001:5c0:1400:a::722 dev tun  metric 0
    cache
2001:5c0:1400:a::723 dev tun  proto kernel  metric 256  mtu 1280
2a00:1450:4008:c01::bf via 2a00:1450:4008:c01::bf dev tun  metric 0
    cache
2a00:1450:400d:803::1005 via 2a00:1450:400d:803::1005 dev tun  metric 0
    cache
2a00:1450:4013:c00::78 via 2a00:1450:4013:c00::78 dev tun  metric 0
    cache
2a03:2880:2110:cf01:face:b00c:: via 2a03:2880:2110:cf01:face:b00c:: dev tun  metric 0
    cache
2000::/3 dev tun  metric 1
unreachable fe80::/64 dev lo  proto kernel  metric 256  error -101
fe80::/64 dev vmnet1  proto kernel  metric 256
fe80::/64 dev vmnet8  proto kernel  metric 256
fe80::/64 dev wlan0  proto kernel  metric 256
fe80::/64 dev tun  proto kernel  metric 256
default dev tun  metric 1
I probably don't have to mention that if you now open Google in a Web browser you'll be using IPv6. :) In case you don't believe me, try using tcpdump (or wireshark) on the tun interface.
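For example, something along these lines should show IPv6 packets flowing over the tunnel while you browse (the interface name tun is taken from the ip output above):
# tcpdump -n -i tun ip6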
You can stop the IPv6 connectivity by issuing the following command:
systemctl stop gogoc.service
If you try the ping6 and traceroute6 commands after that, you'll receive Network unreachable messages, meaning Google's servers can no longer be reached via their IPv6 addresses.
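One more note: the start command has to be repeated after every reboot. If you want the tunnel to come up automatically at boot, systemd can take care of that (this is standard systemd behavior, nothing gogoc-specific):
systemctl enable gogoc.service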

Sunday, November 11, 2012

Using the IEEEtran LaTeX style for a language other than English

I had to write a paper in Croatian and I decided to use the IEEEtran LaTeX style. The problem is that it uses, and outputs, English words by default. That means, for example, that it outputs the words Abstract, Keywords, etc., which don't fit my local language. I managed to solve that problem fairly easily, but I wasn't able to Google anything about it. So, I decided to write down how I did it in case someone else needs this too. There are two things you have to translate: the first one is IEEEtran itself, while the other is the bibliographic style.

To translate the words that appear in your text after the article is generated, you have to insert the following lines somewhere at the beginning of the document, but certainly after you've included the IEEEtran style:
\def\abstractname{Sažetak}
\def\IEEEkeywordsname{Ključne riječi}
Note that in this case I'm redefining what LaTeX will output as the abstract title and the keywords title. Basically, whatever appears in your document in English you can translate by first searching for those words in IEEEtran.cls and looking up the associated name macro (e.g. abstractname, IEEEkeywordsname). Finally, you add \def commands similar to those I showed above. Note that you first have to find where IEEEtran.cls is. On Fedora, if you used the package tetex-IEEEtran, it is in the directory /usr/share/texmf/tex/latex/IEEEtran/.

The bibliography style, on the other hand, has to be localized separately and in a special way. First, you have to enter a special reference into one of your bibliography databases or, better yet, create a separate database just for that purpose. This bibliography entry has to have the following format:
@IEEEtranBSTCTL{IEEEexample:BSTcontrol,
  CTLname_url_prefix = "[Online]. Raspoloživ:"
}
In this case I'm changing the default prefix for URLs from "[Online]. Available:" to something more Croatian-like (though not entirely, as there is no equivalent word for Online in Croatian). Then, somewhere within the document you are writing, you have to cite this reference with a special cite command:
\bstctlcite{IEEEexample:BSTcontrol}
Now, if you make a dvi/ps/pdf version of your document, you'll see that the text really changed (in case you cite an entry that has the url field defined). You can find more details in IEEEtran's bibliography manual. To find out what exactly I had to change, I searched through the IEEEtran.bst file, which is in the directory /usr/share/texmf/bibtex/bst/IEEEtran/ (again, if you use Fedora's package).
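To put all the pieces together, here is a minimal, hypothetical skeleton showing where everything goes. The file names refs.bib and bstctl.bib, as well as the citation key someref, are just examples:
\documentclass[conference]{IEEEtran}

% translate the strings IEEEtran outputs
\def\abstractname{Sažetak}
\def\IEEEkeywordsname{Ključne riječi}

\begin{document}

\title{Naslov rada}
\author{Autor}
\maketitle

\begin{abstract}
Tekst sažetka.
\end{abstract}

\begin{IEEEkeywords}
prva ključna riječ, druga
\end{IEEEkeywords}

% activate the bibliography control entry stored in bstctl.bib
\bstctlcite{IEEEexample:BSTcontrol}

Neki tekst koji citira referencu~\cite{someref}.

\bibliographystyle{IEEEtran}
\bibliography{bstctl,refs}

\end{document}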

Thursday, November 8, 2012

Installing certificate for Alfresco...

This post is a continuation of the post about installing Alfresco using the native Tomcat6 installation (on CentOS6). If you followed the steps given in that post, you have a running Alfresco installation, but Tomcat uses a self-signed certificate.

To install your own certificate, first obtain it (you can use your own, self-managed CA or you can buy a commercial one), then install it on your Tomcat instance. You'll find a lot of information about this in the SSL Howto on Tomcat's Web pages, but that page assumes that everything you do, you do using keytool.

Here is a quick Howto, with the assumption that you have the files newcert.pem (containing the certificate), newkey.pem (containing the private key) and cacert.pem (your CA certificate). By default, Tomcat's keystore is in its home directory (/usr/share/tomcat6) and is named .keystore. The keystore file is password protected and the default password is changeit. Note that the period isn't part of the password! I suggest that you copy this file to root's home under the name keystore (note: no leading dot!), or wherever else you wish, so that you can restore the old copy in case something goes wrong with the following steps.
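For example, the backup copy could be made like this (the destination path is just a suggestion):
cp -p /usr/share/tomcat6/.keystore /root/keystore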

The installation is basically a three-step process. First, you create a keystore containing your certificate, the private key and the CA's certificate. In the second step, you import that information into Tomcat's keystore. Finally, you align the private key's password with the keystore's password.

The first step is to pack the certificate for Alfresco, its private key and the CA's certificate into a PKCS12 store using the openssl tool, as follows:
$ openssl pkcs12 -export \
        -in newcert.pem -inkey newkey.pem \
        -out mycert.p12 -name tomcat \
        -CAfile cacert.pem -caname root -chain
Enter Export Password:
Verifying - Enter Export Password:
This command assumes that all the necessary files (newcert.pem, newkey.pem and cacert.pem) are in your current directory. The output of the command is also stored in the current directory. Note that you are asked for a password that will protect all the data. Enter something, otherwise later you'll see the following warning:
*****************  WARNING WARNING WARNING  *****************
* The integrity of the information stored in the srckeystore*
* has NOT been verified!  In order to verify its integrity, *
* you must provide the srckeystore password.                *
*****************  WARNING WARNING WARNING  *****************
And then you'll receive the following error:
keytool error: java.security.UnrecoverableKeyException: Get Key failed: / by zero
The second step is to import this PKCS12 file into Tomcat's keystore using keytool, as follows:
$ keytool -importkeystore -srckeystore mycert.p12 \
        -srcstoretype pkcs12 -destkeystore /usr/share/tomcat6/.keystore
Enter destination keystore password:
Enter source keystore password:
Existing entry alias tomcat exists, overwrite? [no]:  yes
Entry for alias tomcat successfully imported.
Import command completed:  1 entries successfully imported, 0 entries failed or cancelled
Again, the input file is in the current directory and you are importing directly into Tomcat's keystore. Note that the existing certificate with the alias tomcat will be replaced and you are asked to confirm that! The default alias Tomcat looks for when it starts is called tomcat.

The third step is to change the private key's password, which has to be the same as the keystore's password. Do that using the following command:
keytool -keypasswd -alias tomcat -new <keypassword> -keystore /usr/share/tomcat6/.keystore
You'll be asked for the keystore's password, and the password for the key will be set to <keypassword>.

And that's it. Restart Tomcat and check whether it is using the new certificate.
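On CentOS 6 the restart and a quick check could look something like this; the host name and the HTTPS port 8443 are assumptions, so adjust them to your setup:
# service tomcat6 restart
# openssl s_client -connect alfresco.example.com:8443 < /dev/null | openssl x509 -noout -subject -issuer -dates
The second command extracts the certificate Tomcat actually serves and prints its subject, issuer and validity dates, so you can easily see whether it is the new one.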

Tuesday, November 6, 2012

Network troubleshooting...

Yesterday, I was giving a lecture to third-year students of computer science within the course Communication Networks. Of course, that is not the only computing module at the Faculty; there are also the computer engineering and software engineering modules, but other lecturers teach those. Anyway, the topic of the lecture was the Internet's network layer and, among other things, I talked about autonomous systems, the BGP routing protocol, and the forwarding process. As a part of the ICMP protocol, the ping and traceroute commands were mentioned as an important addition to the troubleshooting toolbox. I also mentioned several times to the students that ping and traceroute are the main troubleshooting tools of any network technician. I also told them that after they finish this course they should be able to do basic troubleshooting and never ever again say something like "the Internet isn't working"! Finally, I mentioned that there is the Routeviews project on the Internet, which provides (read-only) access to BGP routers that can be used to see the routes exchanged in certain parts of the Internet.
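Just to illustrate what I mean by basic troubleshooting, a typical first-pass sequence could look something like this (192.168.1.1 is only an example of a default gateway address, and 8.8.8.8 is Google's public DNS server, used here simply as a well-known reachable address):
$ ping -c 3 192.168.1.1       # is my local gateway reachable?
$ ping -c 3 8.8.8.8           # can I reach the Internet by IP address?
$ ping -c 3 www.google.com    # does name resolution work too?
$ traceroute www.google.com   # and if something fails, where along the path does it break?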

So, the reason I'm writing this post is that I stumbled upon the post Why Google Went Offline Today and a Bit about How the Internet Works, which is a highly recommended read for them, but also for anyone else learning about networking, with an emphasis on the Internet. They (the students) should be able to follow and understand that post now that they have learned the basic terminology and mechanisms of the Internet layer. To fully understand it, they'll have to wait until we explain how DNS works.

Routeviews

Let me (mis)use this post to say a bit more about Routeviews. Actually, there are also Looking Glass servers, which are very similar, i.e. they allow you to peek at certain points on the Internet, but they don't offer direct access to BGP routers, so they are a bit less interesting, at least to me.

Before I continue with the description of Routeviews, let me state that the Internet is a highly irregular network connecting autonomous systems, with the main irregularity coming from the peering relations between autonomous systems. This peering is largely kept confidential and, besides, BGP routing is mainly driven by politics, not by technology. What this means is that the real topology of the Internet is not known to anybody. And besides, it's a dynamic, constantly moving target. There are attempts at mapping the Internet, but they are nevertheless only approximations.

So, how the Internet looks depends on where you are looking from. Routeviews allows one to look at the Internet from different points, and this is used for troubleshooting as well as for research purposes. There is also historical data, and that's very valuable information.

Anyway, if you go to the Routeviews project you'll see a table with a list of DNS names. For each name there is additional information: how to access it (mainly telnet), what type of software/hardware it is running, and where it is located. So, you can telnet to one of those routers and, using the command show ip bgp, examine the BGP routing table of that router. Note that Cisco IOS (as well as Zebra/Quagga, which are modelled after IOS) offers help in the form of a question mark. At any point in a command you can type ? and the OS will show you what you can type at that point.

Here is a copy/paste from a session I made. It's obviously edited to be short and to the point. First, I logged in to one of the routers:
$ telnet route-views.routeviews.org
Trying 128.223.51.103...
Connected to route-views.routeviews.org.
Escape character is '^]'.
 **********************************************************************
                    Oregon Exchange BGP Route Viewer
          route-views.oregon-ix.net / route-views.routeviews.org
 route views data is archived on http://archive.routeviews.org
 This hardware is part of a grant from Cisco Systems.
 Please contact help@routeviews.org if you have questions or
 comments about this service, its use, or if you might be able to
 contribute your view.
 This router has views of the full routing tables from several ASes.
 The list of ASes is documented under "Current Participants" on
 http://www.routeviews.org/.
                          **************
 route-views.routeviews.org is now using AAA for logins.  Login with
 username "rviews".  See http://routeviews.org/aaa.html
 **********************************************************************

User Access Verification
Username: rviews
route-views>
What I typed is shown in bold, and what I received is in ordinary text. Note that I used the username rviews, as specified in the greeting message. Here is a bit of using the built-in help:
route-views>?
Exec commands:
  <1-99>           Session number to resume
  access-enable    Create a temporary Access-List entry
  access-profile   Apply user-profile to interface
  clear            Reset functions
  connect          Open a terminal connection
  crypto           Encryption related commands.
  disable          Turn off privileged commands
...
route-views>show ?
  aaa                   Show AAA values
  aal2                  Show commands for AAL2
  adjacency             Adjacent nodes
  alps                  Alps information
  appfw                 Application Firewall information
  aps                   APS information
  arp                   ARP table
  auto                  Show Automation Template
  backup                Backup status
  bfd                   BFD protocol info
  bgp                   BGP information
...
route-views>show i?
if-mgr  ima           inventory  ip
ipc     iphc-profile  ipv6    
Again, what I typed is in bold, and with three dots I marked output that was cut out so as not to clutter this post. Finally, here is part of the show bgp command output (note that show ip bgp is an equivalent form):
route-views>show bgp
BGP table version is 3000708321, local router ID is 128.223.51.103
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,
              r RIB-failure, S Stale
Origin codes: i - IGP, e - EGP, ? - incomplete

   Network     Next Hop         Metric LocPrf Weight Path
r> 0.0.0.0  208.74.64.40                        0 19214 12989 2828 i
*  1.0.0.0/24  217.75.96.60          0             0 16150 15169 i
*              154.11.98.225         0             0 852 15169 i
*              129.250.0.11          6             0 2914 3356 15169 i
*              4.69.184.193          0             0 3356 15169 i
*              194.85.102.33                       0 3277 15169 i
*              194.85.40.15                        0 3267 15169 i
*              193.0.0.56                          0 3333 3356 15169 i
*              209.124.176.223                     0 101 101 15169 i
*              216.218.252.164                     0 6939 15169 i
*              114.31.199.1          0             0 4826 15169 i
*              207.172.6.20          0             0 6079 15169 i
*              208.51.134.254        1             0 3549 15169 i
...
Let me give you a short description of what you see here. The first line is some general data (table version and router's ID). Then there are the status codes and origin codes that are used later. Finally, the BGP table is dumped (and piped through more; use q to quit).

What you see in the BGP table is:
  1. The destination network, i.e. some network on the Internet that is reachable from the particular BGP router whose tables you are examining. Note that multiple lines can belong to a single destination network, and in that case the network isn't repeated. This is the case for the network 1.0.0.0/24 in the previous output.
  2. Each line in the output is one possible path to reach the given network. Each path consists of a next hop (second column) and the exact path (the last column with the numbers). The path that was found to be the best is marked with a greater-than sign (>) in the first column (also, note the legend at the beginning of the output).
  3. The exact path is a sequence of autonomous systems through which the destination network is reachable. Note that all the paths for a given network end with the same number, i.e. the same autonomous system number. That's because the network belongs to that autonomous system.
  4. At the end of each path there is a letter, the origin code, which tells us how the route was originally injected into BGP. In this case i means the route was originated from an IGP (see the origin codes in the legend at the beginning of the output).
Finally, to exit, use the exit command. :) And that's it. I suppose you can play and explore for yourself from this point on...
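One more thing worth trying: you can give show ip bgp a specific address or prefix, and the router will show only the entry covering that destination, with each available path printed in detail. A hypothetical example (any public address will do):
route-views>show ip bgp 8.8.8.8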
