
Wednesday, 18 August 2010

Tropical Plant Fats: Coconut Oil, Part I

Traditional Uses for Coconut

Coconut palms are used for a variety of purposes throughout the tropics. Here are a few quotes from the book Polynesia in Early Historic Times:
Most palms begin to produce nuts about five years after germination and continue to yield them for forty to sixty years at a continuous (i.e., nonseasonal) rate, producing about fifty nuts a year. The immature nut contains a tangy liquid that in time transforms into a layer of hard, white flesh on the inner surface of the shell and, somewhat later, a spongy mass of embryo in the nut's cavity. The liquid of the immature nut was often drunk, and the spongy embryo of the mature nut often eaten, raw or cooked, but most nuts used for food were harvested after the meat had been deposited and before the embryo had begun to form...

After the nut had been split, the most common method of extracting its hardened flesh was by scraping it out of the shell with a saw-toothed tool of wood, shell, or stone, usually lashed to a three-footed stand. The shredded meat was then eaten either raw or mixed with some starchy food and then cooked, or had its oily cream extracted, by some form of squeezing, for cooking with other foods or for cosmetic or medical uses...

Those Polynesians fortunate enough to have coconut palms utilized their components not only for drink and food-- in some places the most important, indeed life-supporting food-- but also for building-frames, thatch, screens, caulking material, containers, matting, cordage, weapons, armor, cosmetics, medicine, etc.
Mainstream Ire

Coconut fat is roughly 90 percent saturated, making it one of the most highly saturated fats on the planet. For this reason, it has been the subject of grave pronouncements by health authorities over the course of the last half century, resulting in its near elimination from the industrial food system. If the hypothesis that saturated fat causes heart disease and other health problems is correct, eating coconut oil regularly should tuck us in for a very long nap.

Coconut Eaters

As the Polynesians spread throughout the Eastern Pacific islands, they encountered shallow coral atolls that were not able to sustain their traditional starchy staples, taro, yams and breadfruit. Due to its extreme tolerance for poor, salty soils, the coconut palm was nearly the only food crop that would grow on these islands*. Therefore, their inhabitants lived almost exclusively on coconut and seafood for hundreds of years.

One group of islands that falls into this category is Tokelau, which fortunately for us was the subject of a major epidemiological study that spanned the years 1968 to 1982: the Tokelau Island Migrant Study (1). By this time, Tokelauans had managed to grow some starchy foods such as taro and breadfruit (introduced in the 20th century by Europeans) and to obtain some white flour and sugar, but their calories still came predominantly from coconut.

Over the time period in question, Tokelauans obtained roughly half their calories from coconut, placing them among the most extreme consumers of saturated fat in the world. Not only was their blood cholesterol lower than the average Westerner, but their hypertension rate was low, and physicians found no trace of previous heart attacks by ECG (age-adjusted rates: 0.0% in Tokelau vs 3.5% in Tecumseh USA). Migrating to New Zealand and cutting saturated fat intake in half was associated with a rise in ECG signs of heart attack (1.0% age-adjusted) (2, 3).

Diabetes was low in men and average in women by modern Western standards, but increased significantly upon migration to New Zealand and reduction of coconut intake (4). Non-migrant Tokelauans gained body fat at a slower rate than migrants, despite higher physical activity in the latter (5). Together, this evidence seriously challenges the idea that coconut is unhealthy.

The Kitavans also eat an amount of coconut fat that would make Dr. Ancel Keys blush. Dr. Staffan Lindeberg found that they got 21% of their 2,200 calories per day from fat, nearly all of which came from coconut. They were getting 17% of their calories from saturated fat, 55% more than the average American. Dr. Lindeberg's detailed series of studies found no trace of coronary heart disease or stroke, nor any obesity, diabetes or senile dementia even in the very old (6, 7).

Of course, the Tokelauans, Kitavans and other traditional cultures were not eating coconut in the form of refined, hydrogenated coconut oil cake icing. That distinction will be important when I discuss what the biomedical literature has to say in the next post.


* Most also had pandanus palms, which are also tolerant of poor soils and whose fruit provided a small amount of starch and sugar.

Theorem 8.10. P ≠ NP.

Nearly two weeks ago, the Indian-American mathematician Vinay Deolalikar, an employee of HP Labs, sent a letter to a group of mathematicians asking them to proofread his attempted proof of P ≠ NP. The preprint was eventually distributed over the Internet, so a lot of people became aware of it. Later, two major flaws were found in the proof, so it is now considered incorrect. Nevertheless, Vinay has removed the paper from his page and commented that he has fixed all the issues and is going to submit the paper for journal review. You can download the original paper from my web-site [Deolalikar, 2010].

We'll see whether he is bluffing or not. In any case, it is very pleasing to me that the main role in the proof is played by... the framework of probabilistic graphical models. Yeah, those graphical models we use in computer vision! It was surprising indeed, since problems in computational complexity are usually discrete and well-posed. So, this post is devoted to understanding why a graphical model emerges in the proof.

First, I am going to review some key points of the proof. This review is far from exhaustive, just some rough sketches that are critical for understanding. Then I report on the flaws found in the proof, and I end with a methodological note on the whole P vs. NP problem.

k-SAT and d1RSB Phase

In order to prove P ≠ NP, it is enough to show the absence of a polynomial algorithm for some problem in the class NP, e.g. for some NP-complete problem (these are the "hardest" problems in NP). The boolean satisfiability problem (SAT) asks whether there is at least one assignment of the boolean variables of a formula, represented in conjunctive normal form (CNF), that makes it TRUE. The k-satisfiability problem (k-SAT) is the particular case of SAT where every clause in the CNF contains exactly k literals. For example, (x˅y)&(¬x˅y)&(x˅¬y)&(¬x˅¬y) is an instance of 2-SAT whose answer is 'NO', since every assignment of boolean values to (x, y) makes the formula false. (x˅y˅z)&(¬x˅u˅v)&(¬x˅u˅¬z) is a 3-SAT instance over 5 variables (x, y, z, u, v), which is satisfiable, e.g. by the input (1,0,0,1,0). 2-SAT is in P, but k-SAT is NP-complete whenever k > 2. Deolalikar's idea is to show that k-SAT (with k > 8) is outside of P, which (if true) proves that there is a separation between P and NP.
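To make the definitions concrete, here is a minimal brute-force checker (my own sketch, not anything from the paper) that verifies the two example formulas above. The clause encoding as tuples of signed integers is an assumption of this sketch.

```python
from itertools import product

def is_satisfiable(clauses, n_vars):
    """Return a satisfying assignment (tuple of 0/1) or None.
    A clause is a tuple of signed ints: (1, -2) means (x1 OR NOT x2).
    Exhaustive search, so exponential in n_vars."""
    for assignment in product((0, 1), repeat=n_vars):
        # a clause is satisfied if at least one literal evaluates to True
        if all(any(assignment[abs(lit) - 1] == (1 if lit > 0 else 0)
                   for lit in clause)
               for clause in clauses):
            return assignment
    return None

# 2-SAT example from the text: (x|y)&(~x|y)&(x|~y)&(~x|~y) -> unsatisfiable
print(is_satisfiable([(1, 2), (-1, 2), (1, -2), (-1, -2)], 2))   # None

# 3-SAT example: (x|y|z)&(~x|u|v)&(~x|u|~z) over (x,y,z,u,v) -> satisfiable
print(is_satisfiable([(1, 2, 3), (-1, 4, 5), (-1, 4, -3)], 5))   # e.g. (0, 0, 1, 0, 0)
```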

The space of possible k-SAT solutions is analysed. Let m be the number of clauses in the CNF and n the number of variables, and consider the ratio α = m/n. In the border case m = 1 (α is small), the formula is always satisfiable, and there are usually a lot of inputs that satisfy the CNF. Now consider an ensemble of random k-SAT instances (the following is statistical reasoning). As α grows, the probability that the CNF is satisfiable decreases. When a certain threshold α_d is reached, the set of satisfying assignments breaks into clusters. This stage is known in statistical mechanics as the dynamical one-step replica symmetry breaking (d1RSB) phase. Increasing α further makes the problem completely unsatisfiable. It turns out that the subproblems in the d1RSB stage are the most challenging of all k-SAT problems. The effect can be observed only if k > 8, which is why such problems are used in the proof. [Deolalikar, 2010, Chapter 6]
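As a rough empirical illustration of the α = m/n picture (again my own sketch, not the paper's methodology), one can generate random 3-SAT instances at several ratios and estimate how often they are satisfiable by brute force. The values of n, the number of trials and the list of ratios are arbitrary choices, and the d1RSB clustering itself, which requires larger k, cannot be observed this way; only the drop in satisfiability near the threshold is visible.

```python
import random
from itertools import product

def random_ksat(n_vars, n_clauses, k=3):
    """Generate a random k-SAT instance: each clause picks k distinct
    variables and negates each with probability 1/2."""
    clauses = []
    for _ in range(n_clauses):
        variables = random.sample(range(1, n_vars + 1), k)
        clauses.append(tuple(v if random.random() < 0.5 else -v for v in variables))
    return clauses

def satisfiable(clauses, n_vars):
    """Brute-force satisfiability check (exponential, only for tiny n)."""
    return any(all(any((lit > 0) == bool(a[abs(lit) - 1]) for lit in clause)
                   for clause in clauses)
               for a in product((0, 1), repeat=n_vars))

n, trials = 12, 30
for alpha in (1.0, 2.0, 3.0, 4.0, 4.3, 5.0):
    m = int(alpha * n)
    frac = sum(satisfiable(random_ksat(n, m), n) for _ in range(trials)) / trials
    print(f"alpha = {alpha:.2f}: fraction satisfiable ~ {frac:.2f}")
```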


FO(PFP) and FO(LFP)

In finite model theory, there are recursive extensions of first-order logic. The predicate P_i(x) is evaluated as some second-order function φ(P_{i-1}, x), where x is a k-element vector and P_0(x) ≡ false. In general, either a fixed point is reached or P_i loops. For example, if φ(P, x) ≡ ¬P(x), then P_0(x) ≡ false, P_1(x) ≡ true, P_2(x) ≡ false, etc. Here x is meaningless, but that is not always the case. Consider the following definition: φ(P, x) ≡ max(x) ˅ P(x) (recall that x is a vector; in the boolean case 'max' is the same as 'or'). If x contains at least one non-zero element, then P_0(x) = false, P_1(x) = true, P_2(x) = true, etc. Otherwise, P_i(0) = false for all i. In the case of looping, let's redefine the fixed point to be constantly false. FO(PFP) is the class of problems of checking, for some x, whether the iteration loops or reaches a fixed point. These are actually very difficult problems: FO(PFP) equals the whole of PSPACE. Suppose now that φ is monotone in P, meaning that P(x) appears in the formula only under an even number of negations (or zero, as in the second example). This guarantees that once P_i(x) is true, P_j(x) is true for every j > i. This restriction reduces the class to FO(LFP), which is proven to be equal to the class P. So, the global problem now is to show that k-SAT is outside of the class FO(LFP).
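Here is a minimal sketch (mine, not the paper's) of least-fixed-point iteration for the monotone example φ(P, x) ≡ max(x) ˅ P(x) above, over boolean vectors of length 3. Monotonicity guarantees that the stages only grow and therefore stabilize; the stage at which they stabilize is the least fixed point.

```python
from itertools import product

DOMAIN = list(product((0, 1), repeat=3))           # all boolean 3-vectors x

def phi(P, x):
    # P is the current stage, represented as the set of x where P(x) is true.
    return max(x) == 1 or x in P                   # max over booleans is 'or'

def least_fixed_point(phi, domain):
    stage = set()                                  # P_0 is identically false
    while True:
        next_stage = {x for x in domain if phi(stage, x)}
        if next_stage == stage:                    # fixed point reached
            return stage
        stage = next_stage

lfp = least_fixed_point(phi, DOMAIN)
print((0, 0, 0) in lfp)   # False: the all-zero vector never enters the fixed point
print((1, 0, 1) in lfp)   # True: any vector with a non-zero element does
```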

ENSP: the graphical model for proving P ≠ NP

So how does a graphical model emerge here? A graphical model is a way to describe a multivariate probability distribution, i.e. the dependencies among the covariates of the distribution. Recall now the definition of NP: if we could somehow guess the certificate, we would be done (i.e. we would have a polynomial algorithm). If the space of certificates (possible solutions, in Deolalikar's terms) is simple, we can probably get a polynomial algorithm. What does simple mean? It means that the distribution is parametrizable with a small number of parameters (cf. intrinsic dimensionality), which allows us to traverse the solution space efficiently. This simplicity is twofold. First, the distribution is simple if it has limited support, i.e. all the probability mass is concentrated on a limited number of points of the probability space. Second, it is simple if the covariates are as conditionally independent as possible. In the ideal case, when all the groups of covariates are independent (recall that pairwise and mutual independence do not subsume each other!), we need only n parameters to describe the distribution (n is the number of covariates), while in the general case this number is exponential. See [Deolalikar, 2010, p. 6] for examples.
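A back-of-the-envelope illustration (my own, not from the paper) of this parameter counting: a joint distribution over n binary covariates needs 2^n − 1 numbers in general, but only n numbers if the covariates are fully independent.

```python
def full_joint_params(n):
    return 2 ** n - 1   # one probability per outcome, minus one for normalization

def independent_params(n):
    return n            # one Bernoulli parameter per covariate

for n in (10, 20, 50):
    print(f"n = {n}: independent -> {independent_params(n)}, "
          f"full joint -> {full_joint_params(n)}")
```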

How do we measure the degree of conditional independence? Right, by building a graphical model. The distribution is factorized over cliques according to the Hammersley-Clifford theorem: the larger the cliques, the stronger the dependencies. When the largest cliques are k-interactions, the distribution can be parametrized with n·2^k independent parameters. Finally, Vinay shows that FO(LFP) can deal only with distributions parametrizable with 2^poly(log n) parameters, which is not the case for d1RSB k-SAT (its solution space is too complicated).
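Continuing the same back-of-the-envelope counting (again my own illustration, with an arbitrary choice of poly): a clique-factorized distribution whose largest cliques involve k variables, with roughly one clique per variable, needs about n·2^k parameters, which for fixed k easily stays within a 2^poly(log n) budget as n grows.

```python
import math

def clique_factorized_params(n, k):
    return n * 2 ** k                      # ~one potential table of size 2^k per clique

def polylog_budget(n, degree=2):
    # 2**poly(log n); here poly(t) = t**degree is an arbitrary illustrative choice
    return 2 ** (math.log2(n) ** degree)

for n in (2 ** 6, 2 ** 10, 2 ** 14):
    print(f"n = {n}: n*2^9 = {clique_factorized_params(n, k=9)}, "
          f"2^(log n)^2 ~ {polylog_budget(n):.2e}")
```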

In order to show this rigorously, Deolalikar introduces a tailored graphical model describing the FO(LFP) iterative process, the Element-Neighborhood-Stage Product model (ENSP):

There are two types of sites: those corresponding to the variables at the stages of LFP (small circles) and those corresponding to the elements of a certain neighbourhood system (some closure over the Gaifman graph; big blue circles). When a variable is assigned 0 or 1, it is painted red or green, respectively. The last column corresponds to the fixed point, where all the variables are assigned. This model factorizes easily into small cliques, and so it cannot describe a hypothetical LFP process for d1RSB k-SAT. See [Deolalikar, 2010, Chapter 8] for the details.

So what's the problem?

Neil Immerman, an expert in finite model theory, noticed two major flaws. The first is that Vinay actually models only monadic LFP, which is not equal to P, as he had assumed. He thus proved, at best, that there is a gap between NP and some subclass of P, which does not necessarily equal P itself. The second major flaw concerns modelling k-SAT as a logical structure. I do not fully understand this step, but some order-invariance assumptions are wrong. Here is a discussion in Lipton's blog. According to it, the flaws are really fatal.

The third way

It seems that it is really difficult to prove P ≠ NP. Most scientists think that P = NP is improbable, and that's why the hypothesis P ≠ NP is generally adopted now. But there is one more option: that neither P = NP nor P ≠ NP is provable. Like every mathematical theory, computational complexity is a set of axioms and statements, and statements usually can be proved either correct or incorrect. But sometimes there are statements, formulated in terms of the theory, which cannot. Moreover, according to Gödel's first incompleteness theorem, in any mathematical theory based on our natural understanding of the natural numbers, either there is a statement that can be proved both true and false (the theory is inconsistent), or there is a statement that can be neither proved nor disproved (the theory is incomplete). I know of no actual reason why this could not be the case for P ≠ NP (although I have a feeling there are some, since this possibility is not usually taken into account).

Suppose it is really so. This would mean that the whole theory should be reconsidered. Some axioms or definitions would change to make this fact provable. But maybe something else would then become unprovable. Who knows...

Regarding Quotas

In my post on nurture vs. nature, thehackerfairy asked about my thoughts on some comments left after a recent Guardian article on this topic. The article helps support my claim that what we're facing in CS is really a social problem, not some difference due to biology. And the article cites my favorite developmental neuroscientist, Lise Eliot, as support.

I couldn't find the particular comment HF was alluding to, but I think the basic idea was, "Well, if men and women are biologically equal, let's get rid of quotas / affirmative action / encouragement initiatives / etc." I've seen this argument pop up in other communities of underrepresented people as well.

Here's my opinion: I am more than happy to see all of these inclusion initiatives be eliminated... once the groups of people they're targeting stop being systematically discriminated against. And for anyone who doesn't think women, people of color, people with disabilities, and LGBT people are being systematically discriminated against in STEM, I say: please come out from under that rock.
 
