Personal blog, mainly talking about evil wares and other interesting topics
[23/02/20]
So the main dll that needs to be checked here is BalsamiqWireframesCommon.dll. It is located in C:\Users\{username}\AppData\Local\Programs\Balsamiq\Balsamiq Wireframes 4\
After snooping around a bit with the Linux "strings" utility and poking at the binaries with dotPeek, I started writing a Python server that could respond to the requests from CheckDesktopKey and RegisterDesktopKey. This ended up not working because my JSON responses were not in the shape the program expected, so I decided to try other ways.
Instead of trying to change the server's response, I could just modify how the program interprets it. So I kept looking, both in the common dll and in the exe, and ended up at a function called getLicenseStatus inside LocalFilesRegistration (in the dll). This function has 3 return statements that can easily be patched to always return "ok". That is all we need to get the program to behave as if we had a key.
To apply the crack you just need to drag and drop the patched dll into the Balsamiq Wireframes 4 folder (mentioned before).
Download links:
[21/10/19]
Unlike my other posts on the blog, this is going to be more of a tutorial than a rant or whatever I usually write. I’ve been working on a high-performance MySQL cluster that needed to be as fast and reliable as possible, and this setup is what I’m going to talk about.
For the setup of a MySQL cluster, the easiest solution is to use the GUI they provide on their website. You can use the Windows version or the Linux one. Both can be found on the downloads page inside a zip or tar archive, depending on your system. Inside you will find the Python tool, which requires a couple of modules to be installed before using it.
When running the automatic installation you will get to a point where the config has already been generated and you are able to start the cluster. Before doing that, you have to install the following binaries:
All of these are inside the tar archive I mentioned before; you just have to move them to the location you specified in the GUI. The default is /usr/local/bin. When you are done you should be able to click start and everything should go smoothly.
My setup consisted of four nodes. Two of them are management nodes and the other two are data nodes. This was automatically decided by the gui and if it’s good enough for the auto-installer it’s good enough for me. What we are looking for now is a high availability load balancer.
A load balancer is a piece of software (or sometimes hardware) that acts as a proxy between the user and your servers. It redirects the user’s requests to one of the servers and makes sure the load is well balanced between them. This can easily be done with tools like HAProxy or NGINX. I decided to install NGINX because it’s very easy to configure. You just have to write something like this inside /etc/nginx/nginx.conf:
stream {
    upstream stream_backend {
        server 192.168.1.50:8081;
        server 192.168.1.51:8081;
    }
    server {
        listen 80;
        proxy_pass stream_backend;
    }
}
I installed NGINX on both management servers; that way I don’t need any extra VMs to make the load balancing work. For that to work I needed to change the MySQL servers’ port so that it won’t conflict with the one the load balancer listens on. This covers the load balancing part, but what about the high availability?
So inside the two management nodes I have the following:
Now we need to tackle the high availability. This just means we are going to make the management nodes redundant. We want to create a shared (virtual) IP that will send you to either management node. There are multiple tools available for this, like Heartbeat, Pacemaker or Keepalived. I’ll just use Keepalived because reasons. After installing Keepalived on both management nodes, you’ll want to configure it with something similar to the following for the master node:
vrrp_instance VI_1 {
    interface enp0s3
    state MASTER
    virtual_router_id 50
    unicast_src_ip 192.168.1.50
    unicast_peer {
        192.168.1.51
    }
    priority 101
    virtual_ipaddress {
        192.168.1.99/24 dev enp0s3
    }
}
And this for the backup node:
vrrp_instance VI_1 {
    interface enp0s3
    state BACKUP
    virtual_router_id 50
    unicast_src_ip 192.168.1.51
    unicast_peer {
        192.168.1.50
    }
    priority 100
    virtual_ipaddress {
        192.168.1.99/24 dev enp0s3
    }
}
In the example, 192.168.1.50 is the master node and 192.168.1.51 is the backup. Once this is set up and you’ve restarted NGINX and Keepalived with systemctl restart, everything should be working. You should probably add check scripts to Keepalived so that it knows when a node is down. In my example configuration the virtual IP is 192.168.1.99. From there you can access your services as if they were one, even if one of your management nodes is down.
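As a sketch of such a check script (the pgrep path and thresholds are placeholders, not from my actual setup), Keepalived can watch NGINX like this:

```
vrrp_script chk_nginx {
    script "/usr/bin/pgrep nginx"   # non-zero exit code = NGINX is down
    interval 2                      # run the check every 2 seconds
    fall 2                          # 2 failures before marking the node down
    rise 2                          # 2 successes before marking it up again
}
```

You then add track_script { chk_nginx } inside each vrrp_instance block, so a failed check puts the instance into a fault state and the backup takes over the virtual IP.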
[25/07/19]
Back at it again, now with some more basic albeit also more boring stuff. I want to talk about data structures because that’s the problem I have faced this week.
Interesting links:
[17/07/19]
Writing software and writing about writing software.
For the remote administration program I'm working on right now, I have to learn about a lot of things: p2p networks and cryptography for communication, X11 and Wayland for interaction with the remote computer and Linux in general.
Last post I rambled about my ideas on how to build Liu's p2p network. Now I want to talk a bit about the crypto I'm going to use to avoid being MITMed.
My first idea was pretty simple. I would create an RSA keypair, keep the private key on the server and hardcode the public one into every client. RSA can only encrypt a payload a bit smaller than the key length (padding eats a few bytes), so with a 2048-bit key I would have a little less than 256 bytes per packet. I decided I would be better off with a 4096-bit key so that I had a bit more space to send data to the server. With the public key the clients have, they could send multiple encrypted packets and only the server would be able to decrypt them with the private key.
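To put numbers on that: with OAEP padding (the standard way to encrypt with RSA; sizes here assume SHA-256), the usable plaintext per RSA block is the key size in bytes minus a fixed overhead. A quick Python sketch:

```python
def max_oaep_plaintext(key_bits: int, hash_bytes: int = 32) -> int:
    # RSA-OAEP overhead is 2 * hash length + 2 bytes (66 bytes with SHA-256).
    return key_bits // 8 - 2 * hash_bytes - 2

print(max_oaep_plaintext(2048))  # 190
print(max_oaep_plaintext(4096))  # 446
```

So even a 4096-bit key tops out well under half a kilobyte per message.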
Even though this is possible and sounded OK for small messages, I decided I would need more space to send logs and other files. After reading more about RSA I also discovered it's a pretty bad idea to use raw (unpadded) RSA for encryption: it's deterministic and malleable, among other problems. So I decided to mix RSA (asymmetric encryption) with AES (symmetric encryption). My idea was to use it the following way:
This is much better than the first option, because we can now encrypt as much data as we want without worrying about splitting files, and big files encrypt much faster because RSA is pretty slow. This technique is also what we use in Liu's encryption plugin to securely encrypt all files on a given computer. The plugin encrypts the files with AES, sends the AES key encrypted with RSA to the server, and then removes the AES key from the client's computer. This way we can remotely defend our data in case of an attack.
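A minimal sketch of that hybrid scheme, using the third-party cryptography package (my choice for the example; the post doesn't name a library), with AES-GCM for the data and RSA-OAEP to wrap the key:

```python
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Server side: generated once; clients only ever see the public key.
# (The post uses a 4096-bit key; 2048 here just to keep the example fast.)
server_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
server_pub = server_key.public_key()

def client_encrypt(plaintext: bytes):
    """Encrypt arbitrary-length data with a fresh AES key, then wrap the
    AES key with the server's RSA public key (OAEP, never raw RSA)."""
    aes_key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    ciphertext = AESGCM(aes_key).encrypt(nonce, plaintext, None)
    wrapped_key = server_pub.encrypt(aes_key, OAEP)
    return wrapped_key, nonce, ciphertext

def server_decrypt(wrapped_key: bytes, nonce: bytes, ciphertext: bytes) -> bytes:
    """Unwrap the AES key with the RSA private key, then decrypt the data."""
    aes_key = server_key.decrypt(wrapped_key, OAEP)
    return AESGCM(aes_key).decrypt(nonce, ciphertext, None)
```

Only the RSA step is size-limited; the AES ciphertext can be as large as the file being sent.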
Even though the protocol seems good, it still has a hole. The messages sent from the client to the server are encrypted, but the ones the server sends to the client are only signed. When you sign a message, you don't encrypt it: you can verify the message was not tampered with in transit, but anyone can read what you send to clients. So for the protocol to take care of this I need the client to generate another RSA keypair. This time, the client keeps the private key and sends the public key to the server. This way, both sides of the connection can encrypt messages with the other's public key and keep the connection confidential in both directions.
So we can now add, as step zero, a key exchange, where the client sends its public key to the server and the server saves it together with the other info needed to reach the client, like the IP and the port where the client listens for instructions.
I want to end the post by saying all this is a very bad idea: you should never roll your own protocol or your own crypto unless you really know what you are doing (and I do not). If you want to use UDP like me and need secure communication, you should use DTLS. It's like TLS but over UDP instead of TCP.
Interesting links:
[15/07/19]
While writing a p2p communication protocol for my current project, I was faced with the problem of discovery of peers and their authenticity.
The first security measure is to sign every client or peer that is deployed to the network. The server has an RSA keypair. It uses the private key to sign the IP of the client being installed. The public key of the server is also given to the client for verification of the server's instructions. This seems like a good idea because it prevents a third party from infiltrating the network unless the server infects the attacker's computer. We also make sure that instructions claiming to come from the server really are from the server.
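A sketch of that certificate idea in Python, again with the cryptography package (an assumption; nothing here is from the actual implementation). The server signs the client's IP, and any peer holding the server's public key can verify it:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

PSS = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

server_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
server_pub = server_key.public_key()  # distributed to every client

def issue_certificate(client_ip: str) -> bytes:
    """Server side: the signature over the IP acts as the peer's certificate."""
    return server_key.sign(client_ip.encode(), PSS, hashes.SHA256())

def verify_certificate(client_ip: str, cert: bytes) -> bool:
    """Any peer can check that a claimed member was really signed in."""
    try:
        server_pub.verify(cert, client_ip.encode(), PSS, hashes.SHA256())
        return True
    except InvalidSignature:
        return False
```

A forged or copied certificate fails verification unless the IP matches what the server originally signed.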
To allow clients to replicate and install themselves on other computers, we need a way of asking the server for a certificate for a given IP. For this to work, the server has to verify that the client asking for a new certificate is already part of the network. This is problematic because if an attacker gets their own computer infected, they can now create as many peers as they want.
Instead of trying to keep attackers out of the network, I propose making every peer pay a price for getting into the network. Peers should start with an "untrusted" status and become "trusted" only after they give the server something we want. This could be a certain amount of time mining Monero, for example. With this system, "untrusted" peers can only get a very limited amount of information from other peers, but they are still able to relay instructions to other peers. This way, you allow the attacker to get information only if they help the network function, and they potentially make you some money.
How do we know the state of a peer? This could be solved in a centralized manner, just by asking the server about the peer. This is good if you want to verify peers by making them mine crypto: peers send their proof of work to the server, the server verifies that it is valid and lists the peer as trusted in a database.
The decentralized solution is to let every peer decide whether a new peer is trusted. This way, peers will only give information to peers they already trust. This only works if some peers are trusted by default, so the only peers that start as untrusted would be the ones not directly created by the server. If a peer installs the program on another computer, the new peer is not trusted by anyone. As an example, imagine you are a peer in the network and you have an untrusted peer in your known-peer list. After this untrusted peer has sent you 10 instructions from the server and you have verified they have not been modified, you can set its status to trusted.
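The local rule from that example can be sketched in a few lines of Python (the class and method names are hypothetical; the threshold of 10 comes straight from the example above):

```python
class PeerTrust:
    TRUST_THRESHOLD = 10  # verified instructions before a peer becomes trusted

    def __init__(self):
        self.verified_count = {}  # peer_id -> count of verified instructions
        self.trusted = set()      # peers created directly by the server start here

    def record_verified_instruction(self, peer_id):
        """Call after a relayed instruction's server signature checks out."""
        self.verified_count[peer_id] = self.verified_count.get(peer_id, 0) + 1
        if self.verified_count[peer_id] >= self.TRUST_THRESHOLD:
            self.trusted.add(peer_id)

    def is_trusted(self, peer_id):
        return peer_id in self.trusted
```

Each peer keeps its own instance, which is what makes the scheme decentralized.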
A hybrid model would first ask the server if the peer is trusted, and update the local database accordingly. When a peer changes the status of another peer to trusted, this would be communicated to the server, and when the server receives enough of these, it would update the global status of the peer.
So how many peers is enough? We should set a minimum number, but an absolute value alone is not good: we also need to check the number of peers that do not trust it. For that we need the list of peers connected to it. This should be fairly easy using a DHT and XOR to calculate the distance between peers; the hash of each peer would be the certificate given by the server. Then you calculate how many of your peer's neighbours trust it and judge it that way.
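A sketch of that distance calculation in Python: hash the server-issued certificate into an ID and use XOR as the Kademlia-style metric to find a peer's neighbours (all names here are hypothetical):

```python
import hashlib

def peer_id(certificate: bytes) -> int:
    # A peer's DHT ID is the hash of the certificate the server gave it.
    return int.from_bytes(hashlib.sha256(certificate).digest(), "big")

def xor_distance(a: int, b: int) -> int:
    # Kademlia-style metric: a smaller XOR means a closer peer.
    return a ^ b

def neighbours(target: int, peers: list, k: int = 3) -> list:
    # The k peers closest to `target`; these are the ones whose trust
    # judgement about `target` you would poll.
    return sorted(peers, key=lambda p: xor_distance(target, p))[:k]
```

Polling only the k closest peers keeps the trust check cheap even as the network grows.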
Interesting links: