<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>jpcarzolio</title>
	<atom:link href="http://jpcarzolio.com/feed/" rel="self" type="application/rss+xml" />
	<link>http://jpcarzolio.com</link>
	<description>Juan&#039;s personal website and blog</description>
	<lastBuildDate>Wed, 25 Mar 2020 18:57:30 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=4.2.38</generator>
	<item>
		<title>Image &#8220;encryption&#8221;</title>
		<link>http://jpcarzolio.com/2015/image-encryption/</link>
		<comments>http://jpcarzolio.com/2015/image-encryption/#comments</comments>
		<pubDate>Fri, 14 Aug 2015 16:18:43 +0000</pubDate>
		<dc:creator><![CDATA[Juan]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[algorithms]]></category>
		<category><![CDATA[encryption]]></category>
		<category><![CDATA[graphics]]></category>
		<category><![CDATA[LCG]]></category>

		<guid isPermaLink="false">https://jpcarzolio.com/?p=158</guid>
		<description><![CDATA[Can images be encrypted? Well, of course &#8212; what kind of question is that? Any file &#8211;any bitstream&#8211; can be encrypted. But encrypting a file (image or otherwise) turns it into random-looking garbage. Therefore, in a sense, an encrypted image file would cease to be an image file at all (e.g. you wouldn&#8217;t be able...]]></description>
				<content:encoded><![CDATA[<p>Can images be encrypted? Well, of course &#8212; what kind of question is that? <em>Any</em> file &#8211;any <em>bitstream</em>&#8211; can be encrypted. But encrypting a file (image or otherwise) turns it into random-looking garbage. Therefore, in a sense, an encrypted image file would cease to be an image file at all (e.g. you wouldn&#8217;t be able to open it in Photoshop or Gimp, transcode it to a different format, or display it directly in a web browser). There may be image format extensions that allow encryption, but I&#8217;ve never heard of them, and many applications wouldn&#8217;t support them.</p>
<p>But what about encrypting the image data itself, the pixels, rather than the file as a whole? That can certainly be done, but using a standard encryption algorithm would only make sense with an uncompressed format that stores raw data (like BMP), because otherwise the compression algorithm would likely perform poorly on the encrypted data, and lossy compression would outright corrupt it. Also, depending on the cipher, the encrypted data (ciphertext) might be longer than the original (due to padding), which would break the image or require a change in dimensions.</p>
<p>Since standard encryption algorithms cannot be used, alternative, <em>graphical</em> methods are needed instead. A quick Google search for &#8220;image encryption&#8221; summons lots of results with (apparently) advanced stuff, but here I will discuss a very simple method I invented to &#8220;encrypt&#8221; images that has a few advantages. I write &#8220;encrypt&#8221;, in quotes, because this simple method is not cryptographically secure at all, but it may be used to prevent image hotlinking, or to make it hard for someone to steal (especially mass-steal) images from your server or app, etc. In fact, I came up with this technique while working on a <a href="http://apps.facebook.com/playmegaslots">game</a>, to protect my artwork, because a few images from other apps of mine had been stolen, and I was a little paranoid back then.</p>
<p>Perhaps the best term to describe this method is <em>scrambling</em> rather than encryption. The idea is very simple: slice the image into small squares and shuffle them, making it look like an unsolved picture puzzle.</p>
<p>The shuffling has to look random, but it must be deterministic and repeatable in order to be easily reversible. This can be achieved with any PRNG (pseudo-random number generator), like the stock ones provided by most platforms (e.g. <code>Math.random()</code> in JavaScript). The seed can be thought of as the &#8220;encryption&#8221; key: each seed number will yield a different block arrangement, and &#8220;decryption&#8221; is performed by reversing the shuffle process using the same seed.</p>
<p>There&#8217;s a problem, though: using stock <code>random()</code> functions will not work across platforms! Each language/platform may use a different algorithm &#8211;or different algorithm settings&#8211; and this may even happen within a single &#8220;platform&#8221;, as is the case with JavaScript in browsers (each one has a different <code>Math.random()</code> implementation).</p>
<p>The solution for this is simple: we just write our own PRNG! Since this method is not meant to be cryptographically secure, we can use a simple PRNG like an LCG (linear congruential generator), which is really easy to code (it just involves a state variable, three carefully chosen constants, one multiplication, one addition, one modulus and one division). My JavaScript version is:</p>
<pre class="crayon-plain-tag">var LCG = function(seed) {
  this.state = seed;
};
(function() {
  // constants from Numerical Recipes; m = 2^32
  var m = 4294967296,
      a = 1664525,
      c = 1013904223;

  LCG.prototype.rand = function() {
    // advance the internal state, then map it to [0, 1)
    this.state = (this.state * a + c) % m;
    return this.state / m;
  };
})();</pre>
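<p>To make the reversibility concrete, here is a minimal sketch (not my actual scrambler code; the function names are illustrative) of a deterministic shuffle and its inverse, driven by the LCG, which is repeated so the snippet is self-contained:</p>
<pre class="crayon-plain-tag">function LCG(seed) { this.state = seed; }
LCG.prototype.rand = function() {
  this.state = (this.state * 1664525 + 1013904223) % 4294967296;
  return this.state / 4294967296;
};

// Deterministic permutation of 0..n-1 via a seeded Fisher-Yates shuffle
function shuffledIndices(n, seed) {
  var rng = new LCG(seed), order = [], i, j, tmp;
  for (i = 0; i < n; i++) order[i] = i;
  for (i = n - 1; i > 0; i--) {
    j = Math.floor(rng.rand() * (i + 1));
    tmp = order[i]; order[i] = order[j]; order[j] = tmp;
  }
  return order; // order[dst] = source block index for destination slot dst
}

function scramble(blocks, seed) {
  return shuffledIndices(blocks.length, seed).map(function(src) {
    return blocks[src];
  });
}

function unscramble(blocks, seed) {
  var order = shuffledIndices(blocks.length, seed), out = [];
  order.forEach(function(src, dst) { out[src] = blocks[dst]; });
  return out;
}</pre>
<p>Because <code>shuffledIndices()</code> depends only on the seed, &#8220;decryption&#8221; just recomputes the same permutation and inverts it.</p>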
<p>One last thing to consider is block size: how small should the squares be? It is clear that the smaller they are, the harder it will be to reconstruct the image without the key, and that is visually intuitive: if we just slice an image into four quadrants, it&#8217;s trivial to figure out their order in one glance, as the image is still perfectly clear. If we split each quadrant into another four quadrants, and so on, it quickly becomes much harder to guess what it is you are looking at. The extreme case would be to use 1&#215;1 blocks, effectively shuffling individual pixels into a big block of colourful noise.</p>
<p>As we shrink the blocks, however, two problems may arise. One is that the computational cost increases, but this may not be much of an issue, since it&#8217;ll probably be fast enough anyway. The real problem is image compression: the more the image looks like noise, the worse all algorithms will perform. This is because large solid color areas, repeating patterns or smooth transitions (which are features that different algorithms target or expect) will be destroyed.</p>
<p>I planned to use JPEG, and in that specific case, scrambling not only causes poor compression but may also introduce undesirable artifacts or noise in the image, particularly when low quality settings are used. Luckily, knowing a little about the internals of the JPEG algorithm allowed me to solve that problem: since JPEG slices the image in 8&#215;8 squares (or 16&#215;16, or 16&#215;8, in some cases) and processes those independently, minimal interference is achieved by using 8 or 16 as the scrambler&#8217;s block size.</p>
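<p>For illustration, grid-aligned block copying on an HTML5 canvas might look like this (a hypothetical sketch, not my actual code; <code>blockRect()</code> and <code>copyBlock()</code> are made-up names). With <code>blockSize</code> set to 8, every copy stays aligned to JPEG&#8217;s grid:</p>
<pre class="crayon-plain-tag">// Top-left corner of block `index` in a grid of `cols` blocks per row
function blockRect(index, blockSize, cols) {
  return {
    x: (index % cols) * blockSize,
    y: Math.floor(index / cols) * blockSize
  };
}

// Copy one source block onto a destination slot using a canvas 2D context
function copyBlock(img, ctx, srcIndex, dstIndex, blockSize, cols) {
  var s = blockRect(srcIndex, blockSize, cols);
  var d = blockRect(dstIndex, blockSize, cols);
  ctx.drawImage(img, s.x, s.y, blockSize, blockSize,
                     d.x, d.y, blockSize, blockSize);
}</pre>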
<p>The following images illustrate all this. Consider this source image:</p>
<p><a href="http://jpcarzolio.com/extra/demos/imgScrambler/landscape.jpg"><img src="http://jpcarzolio.com/extra/demos/imgScrambler/landscape.jpg"
alt="Our source image"
/></a></p>
<p>Its file size is 285.9 KB. Scrambling it with a block size of 4, we get this one:</p>
<p><a href="http://jpcarzolio.com/wp-content/uploads/2015/08/canvas4.jpg"><img src="http://jpcarzolio.com/wp-content/uploads/2015/08/canvas4.jpg" width="1024" height="640" class="alignnone size-full wp-image-159"
alt="After scrambling with block size 4" /></a></p>
<p>which is pretty scrambled indeed (you can&#8217;t tell what the original image looked like). But this image has a file size of 566.8 KB. Twice as big! Here we observe the scrambling interfering with compression. On the other hand, if we use a block size of 8 to minimize interference, we get:</p>
<p><a href="http://jpcarzolio.com/wp-content/uploads/2015/08/canvas8.jpg"><img src="http://jpcarzolio.com/wp-content/uploads/2015/08/canvas8.jpg" width="1024" height="640" class="alignnone size-full wp-image-160"
alt="After scrambling with block size 8" /></a></p>
<p>which is still pretty unintelligible, but has a file size of 286.2 KB. The size difference is now only about 300 bytes!</p>
<p>You can try a <a href="http://jpcarzolio.com/extra/demos/imgScrambler/image_scrambler.html" target="_blank">demo</a>, or check my JavaScript <a href="https://github.com/jpcarzolio/img-scrambler" target="_blank">source code</a> on GitHub. I adapted it from a previous ActionScript version I had written for my game.</p>
]]></content:encoded>
			<wfw:commentRss>http://jpcarzolio.com/2015/image-encryption/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Beware table locking!</title>
		<link>http://jpcarzolio.com/2015/beware-table-locking/</link>
		<comments>http://jpcarzolio.com/2015/beware-table-locking/#comments</comments>
		<pubDate>Thu, 13 Aug 2015 14:24:04 +0000</pubDate>
		<dc:creator><![CDATA[Juan]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[locking]]></category>
		<category><![CDATA[MySQL]]></category>

		<guid isPermaLink="false">https://jpcarzolio.com/?p=155</guid>
		<description><![CDATA[MySQL users have a few different storage engines to choose from for each of their tables, the most popular/well-known being MyISAM and InnoDB. Each has its own pros and cons, but there are several features that make InnoDB the best choice for most cases. One of these is row level locking, as opposed to MyISAM&#8217;s...]]></description>
				<content:encoded><![CDATA[<p>MySQL users have a few different storage engines to choose from for each of their tables, the most popular/well-known being MyISAM and InnoDB. Each has its own pros and cons, but there are several features that make InnoDB the best choice for most cases. One of these is <em>row level locking</em>, as opposed to MyISAM&#8217;s <em>table level locking</em>.</p>
<p>What&#8217;s the difference? Simply put, when writing to an InnoDB table &#8211;as part of an insert or an update&#8211; the row being written is locked, and all other operations on that row must wait until the write operation finishes. That&#8217;s pretty reasonable, and won&#8217;t normally cause any problems. On the other hand, when writing to a MyISAM table, <em>the whole table</em> is locked, and all other operations on <em>the whole table</em> must wait until the write operation finishes.</p>
<p>You won&#8217;t normally want to perform lots of simultaneous writes to a single row, but a web application under heavy load may easily be performing many concurrent writes (and reads) on a given table. And if that table&#8217;s engine is MyISAM… things can get pretty slow.</p>
<p>As an example, a few years ago, while working on a high traffic Facebook app, we started getting complaints about some parts of the app being slow. It turned out that some update queries were taking 10+ seconds to complete! A little investigation revealed the cause: MyISAM&#8217;s table locking. By switching to InnoDB, everything started running smoothly (way below 1 second) again.</p>
]]></content:encoded>
			<wfw:commentRss>http://jpcarzolio.com/2015/beware-table-locking/feed/</wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
		<item>
		<title>A promising new PRNG (Pseudo Random Number Generator)</title>
		<link>http://jpcarzolio.com/2015/a-promising-new-prng-pseudo-random-number-generator/</link>
		<comments>http://jpcarzolio.com/2015/a-promising-new-prng-pseudo-random-number-generator/#comments</comments>
		<pubDate>Wed, 05 Aug 2015 18:36:49 +0000</pubDate>
		<dc:creator><![CDATA[Juan]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[LCG]]></category>
		<category><![CDATA[PCG]]></category>
		<category><![CDATA[PRNGs]]></category>
		<category><![CDATA[randomness]]></category>

		<guid isPermaLink="false">https://jpcarzolio.com/?p=152</guid>
		<description><![CDATA[I&#8217;m no expert in the subject of PRNGs, but I find it quite interesting. Developers usually don&#8217;t give much thought to this and just use whatever rand(), random(), Math.random() function is available in their language of choice. And that&#8217;s fine for most purposes, but there are situations where (pseudo) randomness matters, and different algorithms may...]]></description>
				<content:encoded><![CDATA[<p>I&#8217;m no expert in the subject of PRNGs, but I find it quite interesting. Developers usually don&#8217;t give much thought to this and just use whatever <code>rand()</code>, <code>random()</code>, <code>Math.random()</code> function is available in their language of choice. And that&#8217;s fine for most purposes, but there are situations where (pseudo) randomness matters, and different algorithms may have very different quality and properties. For instance, PHP users may have noticed that there are two functions to get a random number: <code>rand()</code> and <code>mt_rand()</code>. In this case, the latter, based on the Mersenne Twister algorithm, was introduced as a better alternative to the former (which uses the underlying system implementation).</p>
<p>While googling for different generators and their properties, I recently stumbled upon a website that describes a new algorithm and shows some comparisons between that novel approach and many other popular ones (including LCGs, Mersenne Twister, etc) on different quality metrics (statistical quality, prediction difficulty, time and space efficiency, and more). It&#8217;s called <a href="http://www.pcg-random.org/">PCG</a> (Permuted Congruential Generator), and, judging by the comparisons shown there, it beats the crap out of most other generators!</p>
<p>What makes it so good? Well, as the site states, the PCG family combines properties not previously seen together in a single algorithm, including: small code size, low memory and CPU usage, powerful features (jump ahead, distance), low predictability, excellent performance in statistical tests, and a permissive open source license (the Apache license).</p>
<p>How does it achieve all that? Its power apparently comes from the combination of an LCG (Linear Congruential Generator) as its state-transition function (the function that dictates how the internal state changes every time a new number is returned) and a new technique called <em>permutation functions on tuples</em> as its output function (the one that turns the internal state into the actual random number returned).</p>
<p>LCG is one of the simplest and fastest algorithms known, and has some desirable features, but used on its own it has horrible statistical performance (among other drawbacks). It seems that by combining it with a clever output function, though, an excellent generation scheme is achieved.</p>
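<p>To give a flavour of the scheme, here is a rough JavaScript sketch of the 32-bit PCG variant (XSH-RR), based on my reading of the reference C code, using BigInt for the 64-bit state. Don&#8217;t take it as authoritative; use the reference implementation for anything serious:</p>
<pre class="crayon-plain-tag">var MASK64 = (1n << 64n) - 1n;
var MULT = 6364136223846793005n; // the 64-bit LCG multiplier PCG32 uses

function PCG32(seed, seq) {
  this.inc = ((seq << 1n) | 1n) & MASK64; // stream selector; must be odd
  this.state = 0n;
  this.next();
  this.state = (this.state + seed) & MASK64;
  this.next();
}

PCG32.prototype.next = function() {
  var old = this.state;
  // state transition: a plain 64-bit LCG step
  this.state = (old * MULT + this.inc) & MASK64;
  // output function: xorshift the high bits, then rotate by the top 5 bits
  var xorshifted = Number((((old >> 18n) ^ old) >> 27n) & 0xFFFFFFFFn);
  var rot = Number(old >> 59n);
  return ((xorshifted >>> rot) | (xorshifted << ((32 - rot) & 31))) >>> 0;
};</pre>
<p>Note that the state advances exactly like a plain LCG; all the statistical improvement comes from the output permutation.</p>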
<p>The <a href="http://www.pcg-random.org/">website</a> is full of interesting information and explanations, and you can download the full technical paper and the reference C/C++ implementations.</p>
]]></content:encoded>
			<wfw:commentRss>http://jpcarzolio.com/2015/a-promising-new-prng-pseudo-random-number-generator/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Memcached extensions for PHP: some caveats</title>
		<link>http://jpcarzolio.com/2015/memcached-extensions-for-php-some-caveats/</link>
		<comments>http://jpcarzolio.com/2015/memcached-extensions-for-php-some-caveats/#comments</comments>
		<pubDate>Mon, 03 Aug 2015 20:19:32 +0000</pubDate>
		<dc:creator><![CDATA[Juan]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[debugging]]></category>
		<category><![CDATA[memcached]]></category>
		<category><![CDATA[PHP]]></category>

		<guid isPermaLink="false">https://jpcarzolio.com/?p=143</guid>
		<description><![CDATA[There are two PHP extensions to work with Memcached, which go by the somewhat unfortunate names of Memcache and Memcached (note the missing ending &#8216;d&#8217; in the first one). In this post I&#8217;m going to share my experience of using them together, either to migrate from one to the other or use them simultaneously, and...]]></description>
				<content:encoded><![CDATA[<p>There are two PHP extensions to work with Memcached, which go by the somewhat unfortunate names of <a href="http://php.net/manual/en/book.memcache.php">Memcache</a> and <a href="http://php.net/manual/en/book.memcached.php">Memcached</a> (note the missing ending &#8216;d&#8217; in the first one). In this post I&#8217;m going to share my experience of using them together, either to migrate from one to the other or use them simultaneously, and I&#8217;ll also describe a really strange issue I once ran into, and how to avoid it. As in my <a href="http://jpcarzolio.com/2015/memcached-and-careless-preloading/">previous post</a>, I ran into the problems described below while working on a high traffic Facebook app a few years ago.</p>
<p>One day, after some code changes, our memcached servers apparently started behaving strangely. Intermittently, in spans of several minutes, nothing would work. All caching operations &#8212; set, get, add, anything &#8212; would fail. The problem had clearly started after that last code push, but the pushed code looked innocent enough, and the strangest part was that caching operations were failing all around, not just within the new code.</p>
<p>Upon further inspection, I found that the problem was not on the servers&#8217; side: all operations worked fine when talking directly to a server via telnet, and some network monitoring revealed that the webservers were not actually talking to the memcached servers at all! So it was a client problem, but a weird one: some innocent caching code in one part of the app &#8212; which, by the way, didn&#8217;t involve any configuration or setting changes &#8212; was somehow breaking cache access in the whole app.</p>
<p>At that point, we had always been using the Memcache extension only, and I suspected there might be a bug in it that was somehow triggered by our latest code. So I decided to try the other extension, Memcached, to avoid this hypothetical bug.</p>
<p>The extensions&#8217; APIs are pretty similar, since they expose the same underlying functionality, but not identical. That means you can&#8217;t just swap <code>new Memcache()</code> with <code>new Memcached()</code> and leave the rest of the code intact. Among the differences are:</p>
<ul>
<li>Each has different <code>addServer()</code> parameters</li>
<li>Memcached has <code>get()</code> and <code>getMulti()</code>, whereas Memcache only has <code>get()</code>, to which you can pass either a single key or an array of keys</li>
<li>Each has different parameters for <code>set()</code>, <code>add()</code>, etc. because Memcache allows setting flags and Memcached doesn&#8217;t</li>
</ul>
<p>Since Memcache had worked perfectly fine up to that point, and I had no proof of an actual bug in it &#8212; nor any guarantees that its quasi-homonymous replacement would be any better &#8212; I wasn&#8217;t willing to ditch it altogether and rewrite all the code to use the other extension. So we wrote an adapter class to wrap the new extension and use it with our existing code. The idea was to be able to use both interchangeably, switching from one to the other to see whether the bug disappeared when switching to Memcached.</p>
<p>But they didn&#8217;t get on well with each other using that simple approach, and new problems came up. Sometimes Memcache would read back a chunk of binary garbage when an object was stored using Memcached, or they would fail to deserialize arrays or numbers when reading objects stored by each other, returning strings instead. I was trying to solve a problem, but was creating new problems instead…</p>
<p>After some struggle, I finally figured things out. The binary garbage was due to compression: by default, Memcached gzips values bigger than 100 bytes. So some objects were compressed and others were not, and that caused Memcache to return some objects as binary garbage. That was easily solved by disabling compression. The serialization issues were due to the fact that both extensions use the flags field (or part of it) to indicate data types, but they use different conventions (storing the type is needed because strings and ints are stored unmodified, but arrays and objects are serialized). The solution was to handle serialization in the adapter (serializing everything except integers) and only pass strings to the extensions&#8217; methods, taking advantage of the fact that both extensions store strings unserialized and with a flags value of zero.</p>
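<p>The adapter&#8217;s convention can be sketched like this (a simplified, hypothetical JavaScript version; the real thing was PHP, with <code>serialize()</code> where I use JSON here). Integers pass through as plain decimal strings, and everything else is serialized before being handed to either extension:</p>
<pre class="crayon-plain-tag">// Store every value as a string with a single convention both
// extensions can read back (flags stay at zero for plain strings).
function encodeValue(v) {
  if (typeof v === 'number' && Number.isInteger(v)) return String(v);
  return JSON.stringify(v); // stands in for PHP's serialize()
}

function decodeValue(raw) {
  // bare decimal strings are integers; anything else is serialized data
  return /^-?\d+$/.test(raw) ? parseInt(raw, 10) : JSON.parse(raw);
}</pre>
<p>This works because serialized output never looks like a bare decimal number, so reads can tell the two cases apart without extra flags.</p>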
<p>So, with all those fixes in place, we were finally able to use both extensions interchangeably… only to find that the original bug was also present when using Memcached! The same mysterious behaviour! Back to square one…</p>
<p>I had to look somewhere else. What about the latest code? I scrutinized that &#8220;innocent&#8221; code, the one triggering the bug in the first place, one more time. It performed <code>increment</code> and <code>decrement</code> operations, which we had never used before. That got me thinking, and led me to the <a href="https://raw.githubusercontent.com/memcached/memcached/master/doc/protocol.txt">memcached protocol documentation</a>, where it said the <code>incr</code> and <code>decr</code> commands only worked on decimal representations of <em>unsigned</em> integers. It turned out we were using them on some values initialized with -1. Bingo! There was clearly a problem there, since those operations were guaranteed to fail on -1. But what did those specific failed operations have to do with system-wide malfunction?</p>
<p>Finally, I found the last piece of the puzzle: it appears that both Memcache and Memcached have checks in place to temporarily block a server that is malfunctioning (around 3 and 15 minutes, respectively). And when the server returns an error message on <code>incr</code> /  <code>decr</code> failure, they both seem to misinterpret that as server malfunction, blocking the server for several minutes and causing all operations to fail.</p>
<p>All that trouble caused by a negative number! Well, no. Actually, the real culprit is the unfortunate error handling in Memcached clients, of course.</p>
]]></content:encoded>
			<wfw:commentRss>http://jpcarzolio.com/2015/memcached-extensions-for-php-some-caveats/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Memcached and careless preloading</title>
		<link>http://jpcarzolio.com/2015/memcached-and-careless-preloading/</link>
		<comments>http://jpcarzolio.com/2015/memcached-and-careless-preloading/#comments</comments>
		<pubDate>Thu, 30 Jul 2015 14:32:51 +0000</pubDate>
		<dc:creator><![CDATA[Juan]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[debugging]]></category>
		<category><![CDATA[memcached]]></category>
		<category><![CDATA[optimization]]></category>

		<guid isPermaLink="false">https://jpcarzolio.com/?p=139</guid>
		<description><![CDATA[This is a short story about how I became aware of the dangers of &#8220;careless preloading&#8221;, while learning a bit about memcached internals along the way. A few years ago, while working on a high traffic app on the Facebook platform, I ran across a caching bug. All of a sudden, our memcached servers had...]]></description>
				<content:encoded><![CDATA[<p>This is a short story about how I became aware of the dangers of &#8220;careless preloading&#8221;, while learning a bit about memcached internals along the way.</p>
<p>A few years ago, while working on a high traffic app on the Facebook platform, I ran across a caching bug. All of a sudden, our memcached servers had stopped working &#8212; kind of. Stuff could be read, but nothing could be added to the cache: all <code>set</code> operations failed. Actually, it was stranger than that: <em>a few</em> sets did go through.</p>
<p>I had just fixed a broken preload script (which had never actually worked but nobody had ever noticed!) shortly before caching hell broke loose, so that script was my primary suspect. I decided to try the app on a test environment with and without that script, and bingo: without that preload, memcached worked normally, whereas running that preload caused the erratic cache behaviour we were experiencing in production.</p>
<p>So, what&#8217;s a <em>preload</em>, anyway? It&#8217;s a script that fills the cache with data beforehand, in order to avoid having low performance while the cache is gradually fed objects after each miss. I hadn&#8217;t written any preloads, but back then I didn&#8217;t stop to think if they were a good or bad idea, either. And it turned out that in many cases they were a bad idea, because they are always a bad idea if done without proper consideration. Ultimately, it boils down to something that has long been known to be&#8230; <em>the root of all evil</em>. Yep, I&#8217;m talking about <em>premature optimization</em>.</p>
<p>So, when I fixed that preload script (a trivial edit), it…  uhm, started working. And it turned out that its job was to select a whole freakin&#8217; table &#8212; about 1.5M rows &#8212; and load it into memcached. But, hey, that would be, at most, inefficient, right? In the worst case, it <em>might</em> displace useful data replacing it with useless data, but things would fix themselves with usage, right? Wrong! Enter memcached <em>slabs</em>.</p>
<p>For speed and efficiency reasons, Memcached has a custom memory manager, which consists of &#8220;slabs&#8221;, each of which can be assigned any number of 1MB &#8220;pages&#8221;, which are in turn split into a number of equally sized &#8220;chunks&#8221;, each of which may hold an individual object. Slabs hold objects within a specific size range, starting at 88 bytes (I think) and growing exponentially in steps of 1.25x. So for instance there may be a 1280 byte slab, which contains any number of 1MB pages split into many chunks of 1280 bytes, each holding (unless empty) an object whose size will be under 1280 bytes (including key and flags) and above 1024 bytes (the maximum allowed for the previous slab). When you perform a <code>set</code>, Memcached looks at the object size and determines which slab it belongs in, and looks for a free chunk in one of the pages, assigning the slab a new page if needed (i.e. if all its pages are full, or it has no pages at all because no objects of this size were stored before). And once assigned to a slab, a page stays assigned forever and it can&#8217;t be reassigned.</p>
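<p>The size classes can be sketched like this (illustrative numbers only; real chunk sizes depend on the daemon&#8217;s version and its <code>-n</code>/<code>-f</code> settings):</p>
<pre class="crayon-plain-tag">// Chunk sizes start at a minimum and grow by a fixed factor;
// an object lands in the smallest class whose chunks can hold it.
function slabClasses(minChunk, factor, maxChunk) {
  var sizes = [], size = minChunk;
  while (size <= maxChunk) {
    sizes.push(size);
    size = Math.ceil(size * factor);
  }
  return sizes;
}

function classFor(sizes, objectSize) {
  for (var i = 0; i < sizes.length; i++) {
    if (objectSize <= sizes[i]) return i;
  }
  return -1; // too large to store at all
}</pre>
<p>With a minimum of 88 bytes and a 1.25 factor, the first few classes come out around 88, 110, 138, 173&#8230; bytes.</p>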
<p>That explains the problem. Our preload scripts were executed after we had to resize our cache pool, or restart servers after security updates, or restart the daemon after a mysterious crash, or migrate to a different EC2 instance type, so they acted on an empty cache (no pages assigned). And this script was storing 1.5M objects with sizes in a rather specific range, causing all or most of the pages to be assigned to a few specific slabs, leaving none or too few available for the rest of the slabs. After a short while, the result was that, unless an incoming object&#8217;s size happened to match one of the few existing slabs, it was discarded. Regardless of the amount of empty space or stale data, those objects didn&#8217;t make it.</p>
<p>So the fix consisted in just removing that preload. The fact that we had never noticed that this particular preload was broken hinted that it wasn&#8217;t really necessary after all. And after some investigation and testing, it turned out that in normal operation &#8212; that is, caching objects on demand &#8212; only around 35k of the 1.5M objects were stored in the cache during the first hour, and there was little to no performance impact during this ramp-up period.</p>
<p>My point is not that preloading itself is inherently bad. Storing those 1.5M objects upfront could have been a good idea in some situation, but it wasn&#8217;t our case. My point is: before preloading data &#8212; before optimizing <em>anything</em> &#8212; make sure it&#8217;s necessary. If it is, make sure you&#8217;re being selective enough to preload useful data. And keep in mind that careless preloading may not only be useless or inefficient, but possibly harmful, as there may be unforeseen side effects.</p>
]]></content:encoded>
			<wfw:commentRss>http://jpcarzolio.com/2015/memcached-and-careless-preloading/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>First post: on why and what</title>
		<link>http://jpcarzolio.com/2015/first-post-on-why-and-what/</link>
		<comments>http://jpcarzolio.com/2015/first-post-on-why-and-what/#comments</comments>
		<pubDate>Tue, 28 Jul 2015 19:09:26 +0000</pubDate>
		<dc:creator><![CDATA[Juan]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>

		<guid isPermaLink="false">https://jpcarzolio.com/?p=130</guid>
		<description><![CDATA[I&#8217;ve always liked the idea of writing, almost since I was a kid. With the advent of blogs I felt it might be a good idea to start one, but I didn&#8217;t give much thought to the matter and never took the time to make it happen. Besides, I didn&#8217;t have a single, specific subject...]]></description>
				<content:encoded><![CDATA[<p>I&#8217;ve always liked the idea of writing, almost since I was a kid. With the advent of blogs I felt it might be a good idea to start one, but I didn&#8217;t give much thought to the matter and never took the time to make it happen. Besides, I didn&#8217;t have a single, specific subject to write about; I&#8217;m a guy of many interests, including maths, computer science, music, natural sciences, philosophy and literature. Leaving computer science aside, my knowledge in most of those other subjects is rather limited, and although I often find them fascinating, it would be difficult for me to write much about a single one.</p>
<p>But now I finally got down to it. I decided to build a personal website and blog, a place to host my CV, showcase my work, and to share experiences, ideas, and projects (and possibly some non-technical stuff too, to include my other interests).</p>
<p>On the technical side, I&#8217;m planning to build a sort of portfolio, but since most of my previous professional work is not displayable (closed source and/or no public demo/live version to point to) I&#8217;ll probably take some personal projects, maybe tidy them up a bit, and open source them, hosting them on Github or some similar service. That may take some time, though.</p>
<p>Among the old personal projects I have lying around there is a raycasting game engine with a map editor (old-school pseudo-3D, like Doom or Duke3D), a &#8220;modern&#8221; 2D game engine which uses OpenGL, and a game audio engine, all written in Java.</p>
<p>Among newer ones there&#8217;s a small cloud management system written in PHP, which aims to be like a stripped-down version of Rightscale (allowing to launch servers based on templates and monitor them, etc.), and the latest one is an &#8220;interactive data plotter&#8221; written in JavaScript, which I developed to help me study bitcoin trading data (as part of an ongoing project of mine).</p>
<p>I also have several ideas for new personal online projects, including trivial ones (band name generator), intermediate ones (drum pattern editor/player), and involved ones (a JavaScript remake of the Mac version of the original Prince of Persia game).</p>
<p>On the non-technical side, I may write about maths (stuff I &#8220;discovered&#8221;, some proofs, alternative ways of understanding certain things), music (maths behind it, my favourite artists), my favourite works of literature, and philosophical topics (implications of AI, mind-body problem).</p>
<p>I haven&#8217;t yet decided if it&#8217;ll be best to keep the non-technical stuff in a separate blog, or have everything together. I&#8217;m also considering writing some of the non-technical stuff in Spanish. I&#8217;ll see when I get to it. If you&#8217;re reading this, welcome to my site and I hope you enjoy my posts!</p>
]]></content:encoded>
			<wfw:commentRss>http://jpcarzolio.com/2015/first-post-on-why-and-what/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>
