How is category theory useful in programming?

Other urls found in this thread:

jobs.target.com/job/sunnyvale/haskell-data-scientist/1118/2012182
math.mcgill.ca/triples/Barr-Wells-ctcs.pdf
en.wikipedia.org/wiki/Operator_associativity
en.wikipedia.org/wiki/Lazy_evaluation
stackoverflow.com/a/14250022
axiomsofchoice.org/Hask
axiomsofchoice.org/Haskell_type_system
en.wikipedia.org/wiki/Extensionality
en.wikipedia.org/wiki/Parametric_polymorphism
en.wikipedia.org/wiki/Brouwer–Heyting–Kolmogorov_interpretation

it's not. computers deal with finite objects from countable families. they're all sets under even very strict definitions.

categories are used in haskell.

It's not. It's a meme from Haskell programmers because they wanted to feel smug.

prove it

jobs.target.com/job/sunnyvale/haskell-data-scientist/1118/2012182

Scroll to the bottom

math.mcgill.ca/triples/Barr-Wells-ctcs.pdf

functors and classes over a finite domain are just functions and sets

the only "category theory" you need to know is what functions are, and haskell's relation to category theory is nothing but a meme

>Graph Theory, Algorithms on Algebraic Datatypes, Stochastic Modeling, Control Theory, Differential Equations.

>Category Theory

>our leadership consists of Ph.D.s in Operations Research, Machine Learning, Algebra, Algorithms and Astrophysics.

Fuck, I would suck dick to get that job. I could definitely pretend to be a mathematician there.

My job is similar but it is so much less specialized.

Ditto dream job.

as far as I see the only reason you need to know category theory is for you to acquire the math lingo related to morphisms

i guess that's fair, but that doesn't mean you're going to be using it in programming, does it? saying you need to know category theory in order to use some lingo to talk about finite objects with overkill generality is ridiculous

You're probably right (I don't know enough to say one way or another), but I want to learn category theory independent of any programming applications. I'm just using haskell as an excuse to justify why I am doing it. When really, I'm just doing it for the math. But I find it cool I can apply it to something.

>I can apply it to something.

Good for you, man. There's never a downside to learning new things, and you never know when having that extra way of looking at something will come in handy. Remember that things are only useless until somebody clever finds a new way to use them. Don't let this new wave of Veeky Forums faggots discourage you with their "useless meme" bullshit; they're literal retards, and if it were up to them science would have stopped with the Greeks.

>and if it were up to them science would have stopped with the Greeks.
What are you trying to say? That the square root of 2 is real? Fuck you, you little fuck brainlet.

...

Thanks man! I'll ignore the haters

How much do you think they pay?

No idea, should be a lot though.

Category theory is used to provide semantics for programming languages. Set semantics are rather poor; it's not the right way to think about programming. A type is not just a set, and algorithms are not just functions. They have more structure than that. Category theory helps you capture this idea.

>A type is not just a set
An array is a set.

Can make those in C as easy as []

Not sure why some aspects being inherently countable would be a counter argument..

You may learn about adjoint functors (and their natural transformations, as polymorphic functions) to clean up your system - work out the algebraic content of the type class, so to speak - and then you've already "used" category theory in programming.

E.g. the set
[math] (A\times B) \to X [/math]
is isomorphic to
[math] A \to X^B [/math].
Namely, given a function [math] u [/math] with [math] u(a,b)\in X [/math], you may define
[math] v_u(a):= b\mapsto u(a,b) [/math], which is a function in [math] A \to X^B [/math],
and, in the other direction, given [math] v [/math] with [math] v(a) \in X^B [/math], you may define
[math] u_v(a,b):= (v(a))(b) [/math], which is a function in [math] (A\times B) \to X [/math].

In category theory, you'd see that each object B gives rise to an adjoint pair of functors. Define functors with object maps
[math] FA:=A\times B [/math]
[math] GX:=X^B [/math],
and the above says
[math] Hom(FA,X) [/math] is iso to [math] Hom(A,GX) [/math].

If you do this in FinSet, you have just proved a relation like
x^(a·b) = (x^b)^a.

In Haskell you have both type constructors (the object maps of functors) and term-level functions.

Let
F a := (a,b) and G x := b->x

The adjunction of hom-sets
(F a) -> x ≅ a -> (G x)
i.e.
(a,b) -> x ≅ a -> (b -> x)
is given by (Haskell code)
phi f = \a -> \b -> f (a,b)
invphi g = \pair -> g (fst pair) (snd pair)

Fix b. Then we have
G (F x) = b -> (x, b)
F (G x) = (b -> x, b)
and we can define a monad and a comonad via the unit and counit
return :: x -> G (F x)
return x = \b -> (x, b)
counit :: F (G x) -> x
counit (bToX, b) = bToX b
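The composite G∘F above, i.e. b -> (x, b), is exactly the State monad. A minimal hand-rolled sketch (not using the mtl library; tick is a made-up example action):

```haskell
-- G (F x) = b -> (x, b) is the State monad, hand-rolled here.
newtype State b x = State { runState :: b -> (x, b) }

instance Functor (State b) where
  fmap f (State g) = State $ \s -> let (x, s') = g s in (f x, s')

instance Applicative (State b) where
  pure x = State $ \s -> (x, s)          -- this is exactly "return" above
  State f <*> State g = State $ \s ->
    let (h, s')  = f s
        (x, s'') = g s'
    in (h x, s'')

instance Monad (State b) where
  State g >>= k = State $ \s -> let (x, s') = g s in runState (k x) s'

-- tick returns the current counter and increments the state
tick :: State Int Int
tick = State $ \n -> (n, n + 1)
```

e.g. runState (tick >> tick) 0 evaluates to (1,2): the first tick returns 0 and bumps the state to 1, the second returns 1 and bumps it to 2.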

(cont.)

Or, in category theory, you have the famous Yoneda lemma, which says that
Hom(Hom(A,-), F-)
is in bijection with
FA
where F is a functor, Hom(A,-) is the functor mapping an object B to the function space B^A, and the outer Hom gives a function space between functors, meaning a set of natural transformations, which in Haskell are polymorphic functions (functions that work for all types in the system, such as "the" identity function, existing for both strings and ints).

Now take e.g. F to be the functor mapping types A to lists of terms of type A,
e.g. integers -4, 6, 25,... to lists of such integers, e.g. [3,-4,6], [-4], [8,26],...

The claim now is that there is a bijection taking a term of FA, i.e. some list k of A terms, to a natural transformation with components
Hom(Hom(A,B),FB)
for every type B, i.e. to a map that takes a function u in B^A to a term of lists of B's.

Let, e.g., A be integers and B strings, so that a function in B^A would be, as an example, the map from the number 3 to the word "three".
Now indeed, if k is a fixed list of integers (say [3,-4,6]), then you can define the function that takes any -any- function u from integers to strings and maps it to the list obtained by applying u to each of the entries in k.
That is to say, the term [3,-4,6] corresponds to 'the function that takes any function u and returns the list [u(3), u(-4), u(6)]'.
But the point is that you can code this all up, and polymorphically, i.e. independent of any particular type like the integers.
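Here's a sketch of that coded up for the list functor (toYoneda and fromYoneda are names I'm making up; this needs the RankNTypes extension):

```haskell
{-# LANGUAGE RankNTypes #-}

-- The Yoneda bijection for the list functor: a list k corresponds
-- to the polymorphic function that eats any u and returns map u k.
toYoneda :: [a] -> (forall b. (a -> b) -> [b])
toYoneda k u = map u k

-- The inverse: instantiate the natural transformation at b = a
-- and feed it the identity function.
fromYoneda :: (forall b. (a -> b) -> [b]) -> [a]
fromYoneda t = t id
```

e.g. toYoneda [3,-4,6] applied to a function u returns [u 3, u (-4), u 6], and fromYoneda recovers the original list.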

The point that I want to make is that category theory cleans up your type system by pointing out all the natural maps, everything that obeys algebraic laws (I mean category theory was literally invented in the 40's to define the polymorphic functions / functions that work for a whole category / natural transformations).

Sure, you can learn haskell and understand what the Yoneda lemma isomorphism is doing without knowing that it's a concept found and used first by homologists etc. in the 40's.

that one line should have been
[math] u_v(a,b):= (v(a))(b) [/math]

Yeah, it's a funny book. I think I like it, but I didn't read far for it being so slow.

I just want to add that it all depends on your goals. Most Haskell programmers also don't know much category theory. You'd learn it because you find it pretty.

funfact: pic related is executable Haskell code (if your editor accepts those letters) defining the List functor

e.g. for Int and String, you'd get
fmap : (Int -> String) -> (ListOf Int -> ListOf String)
fmap(f)(aList) := f(first_entry(aList)) prepended to fmap(f)(rest_of(aList))
if aList is nonempty, else
fmap(anyfunction)(emptylist) := emptylist

but of course, you define the functor independent of the particular types, it works for all types in your system.
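For reference, a hand-rolled version of that list functor (List, Nil, Cons are our own names here; presumably what the lost image showed, modulo the fancy letters):

```haskell
-- A hand-rolled list type and its Functor instance.
data List a = Nil | Cons a (List a) deriving (Eq, Show)

instance Functor List where
  fmap _ Nil         = Nil                      -- empty list maps to empty list
  fmap f (Cons x xs) = Cons (f x) (fmap f xs)   -- apply f to the head, recurse
```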

OP here, thanks for this excellent response. I'm just beginning my studies in both Category theory and programming by way of Haskell.

Keep this thread alive as I'm sure I'll want to ask you some basic questions; your response is a bit more advanced than where I've gotten in my studies. But it is highly fruitful and has terms in it I've seen mentioned in later chapters of my book (e.g. functor, natural transformation, monads, Yoneda lemma, hom, etc).

But I am pushing through the next chapter in my book tonight in addition to reading a blog that is advancing through the material at a quicker pace.

I'm very excited to have someone as mathematically equipped as you are on this board.

No it's not: sets are not ordered.

Sorry, deleted my comment to add in an additional edit. I think that is hilarious this is executable code.

Yes the book is a bit slow. This is why I'm reading through the blog to get to the punchline quicker. I like how it starts from basic set theory though.

My interest lies more on the math side, less on the software engineering side (I'm not a programmer but starting with Java and Haskell. I don't 'know' either yet).

I learned basic lambda calculus and some theorems, then decided I should look into Haskell. I like Haskell because I can find 'applications' from what I am learning in CT.

When you have notation C(a,b) or Homset(a,b) does that simply mean the arrow/morphism between object a and object b?

Additionally, what does it mean that a directed acyclic graph generates a partial order as its free category?

a partial order is a binary relation "≤" over a set P which is reflexive, antisymmetric, and transitive... but how does that "look" for a graph?

What does it look like for a node in a graph to be reflexive? An arrow from a node a to itself?

What does it look like for such a node to be transitive or antisymmetric pictorially?

Is the transitive part just composition? So if C(a,b) and C(b,c) then C(a,c)?
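If I try to sketch that composition idea in code (compose is just a made-up helper; Maybe covers the case where the endpoints don't match):

```haskell
-- Arrows of a thin category written as (source, target) pairs,
-- composed in left-to-right order; composition is only defined
-- when the middle endpoints agree.
compose :: Eq b => (a, b) -> (b, c) -> Maybe (a, c)
compose (x, y) (y', z)
  | y == y'   = Just (x, z)   -- endpoints match: the composite a -> c
  | otherwise = Nothing       -- arrows don't line up
```

e.g. compose ('a','b') ('b','c') gives Just ('a','c'), which is exactly "if a ≤ b and b ≤ c then a ≤ c".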

Someone, please describe what is fundamentally different about Haskell vs procedural languages.

Also please describe applications where the language excels.

Not programming/math noob, just Haskell noob.

Retard who doesn't actually know any category theory detected.

I have a really dumb question but it's bothering me

>"A category where a morphism is a relation between objects: the relation of being less than or equal. Let’s check if it indeed is a category. Do we have identity morphisms? Every object is less than or equal to itself: check! Do we have composition? If a ≤ b and b ≤ c then a ≤ c, so composition is given by transitivity: check!"

>learning math beyond analysis

you really don't know what sets are if you think ordered tuples aren't sets

>cussing at an objectively correct post with 0 opinion
good job

>Not sure why some aspects being inherently countable would be a counter argument..
the point is they're all sets. there's no need for specialized machinery when all you're dealing with are sets and operation-preserving functions. the only thing that category theory is giving you is the lingo you're using to describe your morphisms

because

I still don't get it

We want to show (a

basically

a

what the fuck does it mean for

You tell me

I thought it'd mean this a

what do you mean by
a

Stop fucking around and prove that

how about you stop fucking around and define what the fuck you mean by

I don't know what the guy means by the quote:
>And now for something completely different! A category where a morphism is a relation between objects: the relation of being less than or equal. Let’s check if it indeed is a category. Do we have identity morphisms? Every object is less than or equal to itself: check! Do we have composition? If a

the class of morphisms is the set of pairs (a,b) that belong to the relation ≤

>Consider the expression a ~ b ~ c. If the operator ~ has left associativity, this expression would be interpreted as (a ~ b) ~ c. If the operator has right associativity, the expression would be interpreted as a ~ (b ~ c). If the operator is non-associative, the expression might be a syntax error, or it might have some special meaning.
en.wikipedia.org/wiki/Operator_associativity

...

How is that any different than what I wrote here:
-- it still doesn't answer my question if I am parsing it correctly.

ok so the operator is composition and the relation between C(a,b) is

the class of morphisms is the set of pairs (a,b) with a ≤ b

Okay that makes sense then.

Thank you both for clearing that up.

Is this the correct way to parse it:

(a,b)o((b,c)o(c,d))
(a,b)o(b,d)
(a,d)

yeah

OK cool.

Similarly,

((a,b)o(b,c))o(c,d)
(a,c)o(c,d)
(a,d)

I know that's the answer, but the only step that confuses me is (a,c)o(c,d), due to the right-to-left rule where you apply (c,d) before (a,c); I don't know how to apply that in this context. It just makes sense to connect (a,c) to (c,d), but if you let f=(a,c) and g=(c,d) then fog means g before f.

How does that apply here?

It comes up in the parts of quantum physics which are useful for computation.

It also factors into schema design for database programming.

Haskell has many differences from other languages. The main differences are, the language is lazy, the language is purely functional (lisp and erlang are not pure), and the language has a badass type system.

Lazy means that a function only gets executed once its value is needed. This lets you compose functions in interesting ways in order to create new functions that end up being more efficient than if you had just naively executed them one after the other.
en.wikipedia.org/wiki/Lazy_evaluation
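A one-liner that shows this off (standard Prelude only): filtering an infinite list terminates because only the first five elements are ever demanded.

```haskell
-- Laziness sketch: [0..] is infinite, but take 5 only forces
-- five elements of the filtered list, so this terminates.
firstFiveEvens :: [Int]
firstFiveEvens = take 5 (filter even [0 ..])
```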

Functional programming is something that often sounds very strange to people who are used to other languages but once you get used to it it's stranger to think that people program in other ways. Essentially what this means is that in a Haskell program you primarily write functions. These functions have an input and output defined by the typesystem and they don't affect program state (they are completely encapsulated and deterministic). You do not have global variables (the only values a function can access are the values you pass into it as inputs), and you don't have program state either.

This makes it so that some things that are trivial in other languages suddenly become difficult in Haskell. In particular any sort of IO suddenly becomes non-trivial (how can you write a deterministic function that returns user input if the user is not deterministic? how can you solve problems that require a notion of state?). The way to resolve these problems (and many more) is to construct a monad (a concept from category theory).

The type system in Haskell lets you create many interesting data structures. At first glance it looks like the type system actually forms a really nice category but due to some problems with computability it has some ugly corner cases that make the category bad. However, for the most part you can ignore these corner cases and reason about the type system using category theory.

Not in any you can learn a living with.

>not in any way
>earn
fuck

Wow, the audacity of these retards is astounding.

1) The objects of a category do not have to be sets, and they do not have to be proper classes either. You can use category theory to talk about objects and morphisms that don't even come close to resembling sets and functions. For instance one can talk about the categories of logical theories and models (there even exists a nice adjunction between these categories, see Lawvere).

2) To really hammer down the point of (1), category theory isn't solely useful for dealing with objects larger than sets. Only a huge retard would believe something like this.

3) Haskell and programming language theory in general uses many concepts from category theory. In particular if you ignore the computability issue I mentioned above then you can consider Haskell's type system as a category, in particular a cartesian closed category.

For more info see:
stackoverflow.com/a/14250022

More importantly (to the Haskell programmer) you very often rely on Kleisli categories when defining a monad. Monads are insanely useful for handling many things in Haskell. Commonly used monads are the list monad, the maybe monad, the state monad, the future monad (promises), the try monad (exception handling), the enumerable monad, the observable monad (asynchronous version of enumerable), etc..

So much of Haskell's power rests in the programmer being able to build and reason about monads that I have trouble believing that a user can build anything very sophisticated without having some understanding of them. Furthermore, one can extend this argument to comonads which (while less popular than monads) give us other useful data structures like trees.
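A small sketch of the most basic of these, the Maybe monad (safeDiv and calc are made-up helpers, not library functions): any failing step short-circuits the whole computation.

```haskell
-- Division that fails gracefully instead of crashing.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

-- A pipeline of steps; a Nothing anywhere aborts the whole block.
calc :: Int -> Maybe Int
calc n = do
  a <- safeDiv 100 n
  b <- safeDiv a 2
  return (b + 1)
```

e.g. calc 10 succeeds, while calc 0 propagates the failure without any explicit error-checking code.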

Haskell also includes built in notions of functors and natural transformations. Haskell also has a dependently typed cousin named Agda which can be used to formalize mathematical proofs.

I fucking hate pretentious idiots like you. Please kill yourself.

>you really don't know what sets are if you think ordered tuples aren't sets
Tuples aren't sets per se. This is not to say you can't define tuples in terms of sets.

A set doesn't have an inherent order. Whether it can be ordered at all depends on your notion of "set".

>the point is they're all sets. there's no need for specialized machinery when all you're dealing with are sets and operation-preserving functions. the only thing that category theory is giving you is the lingo you're using to describe your morphisms
That's exactly like saying defining Hilbert spaces is useless because they are all just sets, and everything is a set and so we shouldn't introduce any synthetic structures anymore, or even theories!
In category theory, you may introduce functors and natural transformations, and limits, etc. etc., and they are useful abstracts (like vectors are) whether or not all the things you talk about can be defined in terms of sets.

what is this pic?

>Leadership full of PhD:s.

No reason to bother then. They'll just look down on me anyway.

a feels meme, and the diagram used to illustrate the proof of the Yoneda lemma - a theorem mentioned in this thread before

Can someone address this?

Great response. In your link what does it mean for bottom to be in Hask but not Set? Does bottom here represent non-terminating function or computation? Is it considered a "type"?

Also. What does the Hom(a,b) notation represent? A Morphism from source object a to target object b in your category C?

>What does the Hom(a,b) notation represent?
All arrows pointing from a to b in C.

>there's no need for specialized machinery when all you're dealing with are sets and operation-preserving functions.
Like in algebraic topology, which all this machinery was invented to handle?

Thanks, any insight into my other question?

Is this supposed to be a preorder? Your compositions are all backwards, and your first post was wrong as well. You're trying to take the arrow b->c and then do the arrow a->b. It should be written

[math] (c,d) \circ \left( (b,c) \circ (a,b) \right) [/math]

Also, all of this bellyaching ITT about associativity of composition in preorders is stupid. There is at most one arrow between any two objects, so of course composition is associative.

I'm going off what this person posted: which according to you is incorrect.

>Also, all of this bellyaching ITT about associativity of composition in preorders is stupid. There is at most one arrow between any two objects, so of course composition is associative.

I want to prove it, not see it's obvious and not think about the definitions.

So does this prove right associativity?

(c,d)∘((b,c)∘(a,b))
(c,d)∘(a,c))
(a,d)

and left associativity?

((c,d)∘(b,c))∘(a,b)
(b,d)∘(a,b)
(a,d)

>There is at most one arrow between any two objects,
lel

>I want to prove it, not see it's obvious and not think about the definitions.
Look, suppose you have arrows (a,b), (b,c), (c,d). Note that both

[math] (c,d) \circ \left( (b,c) \circ (a,b) \right) \text{ and } \left( (c,d) \circ (b,c) \right) \circ (a,b) [/math]

are arrows a->d. But there is exactly one arrow a->d, so both arrows must be the same! Your proof is the same concept, but I think this cuts to the core a little better.

>I don't know what a preorder is
lel

I've written down some notes on the category of all Haskell types on the nLab and here:

axiomsofchoice.org/Hask
axiomsofchoice.org/Haskell_type_system

Okay this makes sense, thanks for the explanation!

Cool, I am looking into these links.

Few simple questions:
I am new to programming so not knowledgable yet about things..

How should I think about polymorphism? I'm trying to properly understand the context of what polymorphism is here, as I haven't come across it prior to peeking into programming concepts.

For example what does it mean for a function to be polymorphic, such as id? Is it simply saying that the function can take on different haskell types?

Also, few other questions:

>Hask, whose objects are Haskell types and whose morphisms are extensionally identified Haskell functions. Subcategories of types can be identified.

What does "extensionally identified Haskell functions." mean?


Additionally, what does Cartesian closed mean?

>What does "extensionally identified Haskell functions." mean?
Do you know
en.wikipedia.org/wiki/Extensionality

Category theory doesn't know about material equality, just isomorphism. You may import equalities from other theories, and Hask only works for extensionally identified functions:

As computer code, the functions given by
f1(x) := '(x-3)^2'
and
f2(x) := 'x^2-6x+9'
and
f3(x) := '8+x^2-6x+1'
are different. Consider for example a (meta-)property LengthOfCode.
LengthOfCode(f1) = 7
LengthOfCode(f2) = 8
LengthOfCode(f3) = 10

However, they all evaluate to the same values from the same inputs - same extensions.

Equality in predicate calculus comes with the axioms
a=b ⟹ b=a
and
a=a

When reasoning about strings, you need "5+4" and "2+7" to be different.

But to prove
5+4 = 2+7
you need a theory giving meaning to +. For arithmetic, that's e.g. done in pic related
(where 1 is taken to be S(0), and 2 is taken to be S(1), and 3 is taken to be S(2), and so on).

>How should I think about polymorphism?
Read
en.wikipedia.org/wiki/Parametric_polymorphism
?
Your background?

Consider the following function:
id_on_Int : Int -> Int
id_on_Int(n) := n
e.g. id_on_Int(3) evaluates to 3.

Now tell me an analog function of type
String -> String
Of course,
id_on_String(s) := s
does the job.

Generally, in Haskell you'd write a generic function
id : forall a. a -> a
id(x) := x
and when you pass 3 to id, the compiler/the compiled program does type inference on 3 and converts id : forall a. a -> a to a function Int->Int and keeps working with that. (google Hindley–Milner)

Consider the following function:
i2s : Int -> String
i2s(n) := ...write down the number as a word

e.g. i2s(3) evaluates to "three".

Keep this function of type
Int -> String
in mind, and tell me an analog function of type
Int -> a
where a is a generic type.
You can't, really.

Another generic function would be the one which takes a pair (x,y) and returns (y,x).

>Additionally, what does Cartesian closed mean?
how about you use google and Wikipedia before you ask question, mate?
Or nLab itself, I'd think you copied that Hask citation from there

Thanks, you are quite talented at explaining things.

I understood everything you wrote up until this point:

"Keep this function of type
Int -> String
in mind, and tell me an analog function of type
Int -> a
where a is a generic type.
You can't, really. "

Why can't you have a generic type Int -> a?

Say a is of type list then Int -> List
Say a is of type string then Int -> String (your example)
Say a is of type Int then Int -> Int

That part confuses me.

If that is the case then let * represent any type

Can you have a generic type of: * -> *

?

'a' and 'b' are the standard/generic type variables.
The type family Int->a is perfectly okay, but there is no function of this type, because you don't know anything about a.

Let me give you a list of examples to make it clearer.
In the following, let (a,b) denote the product of type a and b, and for x:a and y:b, let (x,y) denote a term of (a,b)

- Is there a function of type Int->Int?
Yes, there is. For example

f : Int->Int
f(n):=n

g : Int->Int
g(n):=n+1

h : Int->Int
h(n):=(n-3)^5

- Is there, for all 'a', a function of type 'a -> a'?
Yes, there is. Namely
f : a->a
f(x):=x

- Is there a function of type Int -> (Int,Int)?
Yes, for example
f : Int -> (Int,Int)
f(n):=(2n,n^3)

- Is there, for all 'a' and all 'b', a function of type (a,b)->(b,a)?
Yes, the swap function
f : (a,b)->(b,a)
f(x,y):=(y,x)

- Is there, for all 'a', a function of type a->(a,Int)?
Yes, for example
f : a->(a,Int)
f(x):=(x,7)
or
g : a->(a,Int)
g(x):=(x,-8)

- Is there, for all 'a' and all 'b', a function of type (a,a)->(b,a)?
No! For generic b, you don't know any term so you can't code a function of this type.

- Is there, for all 'a' and all 'b', a function of type (a->b) -> Lists_of_a's -> Lists_of_b's?
Yes, that's what's called a functor for lists. If u is a function variable of type a->b, and if a_list_of_a's is a list of a's, then define
f(u)(a_list_of_a's)
to evaluate to the list of b's obtained by applying u to all elements of the list of a's

E.g. if a and b are both the type Int, and if v(n)=n+10, then
f(v)([-4,6,50])
evaluates to
[6,16,60]

- Is there, for all 'a', a function of type a->Int?
Yes, for example
f : a->Int
f(x):=5
(the constant function that always maps to 5)

- Is there, for all 'a', a function of type Int->a?
No, you don't know anything about a.
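The inhabited types from the list above, written out as real Haskell (the names are my own):

```haskell
-- The swap function: (a,b) -> (b,a)
swapPair :: (a, b) -> (b, a)
swapPair (x, y) = (y, x)

-- a -> (a, Int): pair the input with a known Int term
pairWithSeven :: a -> (a, Int)
pairWithSeven x = (x, 7)

-- a -> Int: the constant function that always maps to 5
constFive :: a -> Int
constFive _ = 5

-- (a -> b) -> lists of a's -> lists of b's: the list functor
mapList :: (a -> b) -> [a] -> [b]
mapList _ []       = []
mapList u (x : xs) = u x : mapList u xs
```

The uninhabited types (like Int -> a, or (a,a) -> (b,a)) simply cannot be written this way: there is no term of the generic type b to put on the right-hand side.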

cont.

The secret to when a generic function type has a term is given by the Curry-Howard isomorphism.

Let A, B, C, ... be logical propositions.

The proposition
[math]A\to A[/math]
says "A implies A".
It's true.
The trivial proof is:
Given a reason to believe in A, it follows that we have a reason to believe A.
It's represented by

f(x):=x

The proposition
[math](A\land B)\to (B\land A)[/math]
says that if "A and B" is true, then "B and A" is also true. Pretty much part of the definition of "and".
The proof is the swap function.

Okay, let's make something slightly less trivial:

The proposition
[math](A\to (B\to C))\to ((A\land B)\to C)[/math]
says: "IF A being true implies that B implies C, THEN "A and B" being true implies that C is true."
The proof again juggles reasons to believe in the propositions:
Assume we have a reason to believe that [math](A\to (B\to C))[/math] holds. Then given a reason to believe "A and B", we know that "A" holds and also that "B" holds individually. As we've assumed [math](A\to (B\to C))[/math] holds, we know that [math]B\to C[/math] holds, and finally that [math]C[/math] holds. In total, it follows that under the assumption, C is already implied by "A and B".

The formal proof is as follows:
Let f be a function variable of type a -> (b -> c), so that f(x) is in b->c and f(x)(y) is in c.
Then
g: (a->(b->c)) -> ((a,b) -> c)
g(f)((x,y)):=f(x)(y)

Under the Curry-Howard isomorphism, a generic type having a term corresponds to the corresponding proposition having a proof.

You can't find a term for the type
a
or
Int -> a
You can try, but it's impossible.
The "reason" is that if you could find a term for a, this would mean you could prove any proposition - which is silly.

Trying to generalize what you're saying here using terminology I know that may not appropriately describe what I'm trying to say but should express the idea...

It seems like the "types that don't work" are those that are not bounded by your input argument. So in other words it doesn't allow for "free variables" or types that aren't bounded by some argument in your input parameter.

For example:
f : a->(a,Int)

"works" because type 'a' appears as an argument in the input.

f : (a,b)->(b,a)

"works" because type 'a' and type 'b' are arguments that appear in your input thus "bounded" (using my phrasing)

Is there, for all 'a' and all 'b', a function of type (a,a)->(b,a)?

doesn't "work" because type 'b' is not "bounded" in your input argument so you don't know what "type" b represents.

Is this the right way to look at it? I'm trying to describe it best way I can.

Yes, that's what the lambda calculus does: arguments to functions become literally bound, by lambdas.

Introducing the function
f(n):=(n+1)^2
requires you to give it a name.
The corresponding lambda term is
[math] \lambda n.\, (n+1)^2 [/math]
and see the Haskell implementation in the next pic related.

See also
en.wikipedia.org/wiki/Brouwer–Heyting–Kolmogorov_interpretation

Thank you. I'm going to take the time to read through some of these links.

Also note that the theorems

(A → (B → C)) → ((A ∧ B) → C)
((A ∧ B) → C) → (A → (B → C))

or

(a->b->c) -> ((a,b)->c)
((a,b)->c) -> (a->b->c)
having a term

corresponds to the bijection of the sets

[math] C^{A\times B} [/math] with [math] \left(C^{B}\right)^A [/math]

or, as an equation of numbers,

[math] c^{a\cdot b} = (c^b)^a [/math]

In category theory, that's a bijection between

[math] Hom(A\times B,C) [/math] and [math] Hom(A,hom(B,C)) [/math]

where hom(B,C) is a representation of the hom set Hom(B,C) as an object within the category.

Just like the function space [math]X\to Y[/math] between two sets can be represented by a set [math]Y^X[/math].

I point this out because you (someone) asked what a cartesian closed category is, and the answer is that it's basically one where the above holds. And in fact, you'd define the internal hom in terms of the product via this equation (you'd say that the product functor is required to have a right adjoint).

The idea of the theory is to bring all those structural equalities into one framework.
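Both directions of that bijection, as Haskell terms (these just re-derive Prelude's curry and uncurry):

```haskell
-- Hom(A×B, C) -> Hom(A, hom(B, C)): a proof term of ((a,b)->c) -> (a->(b->c))
curry' :: ((a, b) -> c) -> (a -> (b -> c))
curry' g x y = g (x, y)

-- Hom(A, hom(B, C)) -> Hom(A×B, C): a proof term of (a->(b->c)) -> ((a,b)->c)
uncurry' :: (a -> (b -> c)) -> ((a, b) -> c)
uncurry' f (x, y) = f x y
```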

Of all the fuckin places too, target.