SICP Exercises

Chapter 01

Exercise 1.1

Below is a sequence of expressions. What is the result printed by the interpreter in response to each expression? Assume that the sequence is to be evaluated in the order in which it is presented.

10
(+ 5 3 4)
(- 9 1)
(/ 6 2)
(+ (* 2 4) (- 4 6))
(define a 3)
(define b (+ a 1))
(+ a b (* a b))
(= a b)
(if (and (> b a) (< b (* a b)))
    b
    a)
(cond ((= a 4) 6)
      ((= b 4) (+ 6 7 a))
      (else 25))
(+ 2 (if (> b a) b a))
(* (cond ((> a b) a)
         ((< a b) b)
         (else -1))
   (+ a 1))
10
12
8
3
6
19
false
4
16
6
16

Exercise 1.2

Translate the following expression into prefix form.

\[ \frac{5+4+(2-(3-(6+\frac{1}{3})))}{3(6-2)(2-7)} \]
(/ (+ 5 4 (- 2 (- 3 (+ 6 (/ 1 3)))))
   (* 3 (- 6 2) (- 2 7)))

Exercise 1.3

Define a procedure that takes three numbers as arguments and returns the sum of the squares of the two larger numbers.

(define (sum-square-2-largest a b c)
        (cond ((and (<= a b) (<= a c)) (+ (* b b) (* c c)))
              ((and (<= b a) (<= b c)) (+ (* a a) (* c c)))
              ((and (<= c a) (<= c b)) (+ (* a a) (* b b)))))

Exercise 1.4

Observe that our model of evaluation allows for combinations whose operators are compound expressions. Use this observation to describe the behavior of the following procedure:

(define (a-plus-abs-b a b)
  ((if (> b 0) + -) a b))
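No answer is recorded above. The key observation: the operator position of a combination is itself evaluated, so (if (> b 0) + -) first selects a procedure, and the combination then computes \(a + |b|\). A quick check:

```scheme
(define (a-plus-abs-b a b)
  ((if (> b 0) + -) a b))   ; the if selects + or - as the operator

; b > 0  =>  a + b;  b <= 0  =>  a - b  =  a + |b|
(display (a-plus-abs-b 5  3)) (newline)  ; 8
(display (a-plus-abs-b 5 -3)) (newline)  ; 8
```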

Exercise 1.5

Ben Bitdiddle has invented a test to determine whether the interpreter he is faced with is using applicative-order evaluation or normal-order evaluation. He defines the following two procedures:

(define (p) (p))

(define (test x y)
  (if (= x 0)
      0
      y))

Then he evaluates the expression

(test 0 (p))

What behavior will Ben observe with an interpreter that uses applicative-order evaluation? What behavior will he observe with an interpreter that uses normal-order evaluation? Explain your answer. (Assume that the evaluation rule for the special form if is the same whether the interpreter is using normal or applicative order: The predicate expression is evaluated first, and the result determines whether to evaluate the consequent or the alternative expression.)
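No answer is recorded above. Under applicative order the operand (p) is evaluated before test is applied, and (p) reduces to itself forever, so the evaluation never terminates; under normal order (p) is never evaluated, because x = 0 makes if return 0 first. The normal-order behavior can be simulated in an applicative interpreter by wrapping the dangerous operand in a thunk (test-delayed is a hypothetical name, not the book's code):

```scheme
(define (p) (p))                  ; reduces to itself forever if ever called

; normal order delays operands; a lambda wrapper simulates that delay
(define (test-delayed x y-thunk)
  (if (= x 0)
      0
      (y-thunk)))

(display (test-delayed 0 (lambda () (p)))) (newline)  ; 0 -- (p) is never forced
```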

Exercise 1.6

Alyssa P. Hacker doesn't see why if needs to be provided as a special form. "Why can't I just define it as an ordinary procedure in terms of cond?" she asks. Alyssa's friend Eva Lu Ator claims this can indeed be done, and she defines a new version of if:

(define (new-if predicate then-clause else-clause)
  (cond (predicate then-clause)
        (else else-clause)))

Eva demonstrates the program for Alyssa:

(new-if (= 2 3) 0 5)
5

(new-if (= 1 1) 0 5)
0

Delighted, Alyssa uses new-if to rewrite the square-root program:

(define (sqrt-iter guess x)
  (new-if (good-enough? guess x)
           guess
          (sqrt-iter (improve guess x)
                     x)))

What happens when Alyssa attempts to use this to compute square roots? Explain.

The computation of the square root will never terminate. In a normal-order interpreter the recursion never ends, as already seen in the previous exercise. But even in an applicative-order interpreter, using new-if makes the recursion non-terminating.

When the interpreter evaluates sqrt-iter, it reaches `(new-if (good-enough? ...) guess (sqrt-iter ...))`. Because new-if is an ordinary procedure, applicative order evaluates all three operands — (good-enough? ...), guess, and (sqrt-iter ...) — before applying it. Evaluating the recursive (sqrt-iter ...) operand reaches another new-if, and so on: the recursion never bottoms out.

The difference between the built-in if and new-if is that if is a special form with its own evaluation rule: it evaluates the predicate first, and then evaluates only one of the then-clause and the else-clause, depending on the result. new-if, being an ordinary procedure, evaluates both clauses, which is fatal when one of them is a recursive call.

Exercise 1.7

The good-enough? test used in computing square roots will not be very effective for finding the square roots of very small numbers. Also, in real computers, arithmetic operations are almost always performed with limited precision. This makes our test inadequate for very large numbers. Explain these statements, with examples showing how the test fails for small and large numbers. An alternative strategy for implementing good-enough? is to watch how guess changes from one iteration to the next and to stop when the change is a very small fraction of the guess. Design a square-root procedure that uses this kind of end test. Does this work better for small and large numbers?

Consider a large number first, say \(x=10^{13}\). Near the answer \(\sqrt{x}\approx 3162277.66\), adjacent floating-point values of (square guess) are spaced roughly 0.002 apart (one ulp at the magnitude of \(10^{13}\)), which is larger than the tolerance 0.001. So for most such \(x\), (abs (- (square guess) x)) can never drop below 0.001: good-enough? is never satisfied and the iteration loops forever.

The case of a very small number is better in that the evaluation terminates, but it returns a wrong value. This is caused by the definition of good-enough?: when \(x\) itself is smaller than the threshold, the test succeeds as soon as (square guess) is below 0.001, whether or not guess is anywhere near the true square root. For example, (sqrt 1e-10) should be \(10^{-5}\), but the iteration stops at about 0.031, since \(0.031^2\approx 0.001\).

Using the change between two successive guesses as the termination condition, good-enough? may look like:

(define TOL 0.00001)
(define (good-enough? old-guess guess)
        (if (= guess 0)
            true
            (< (abs (/ (- guess old-guess) guess)) TOL)))

And sqrt-iter should be modified accordingly:

(define (sqrt-iter guess x)
    (let ((new-guess (improve guess x)))
        (if (good-enough? guess new-guess)
            new-guess
            (sqrt-iter new-guess x))))

This version of good-enough? terminates on large numbers with good relative accuracy, and works well on small ones. What remains weak for large numbers is mostly inherent in Newton's method with a fixed starting point: there is little to be done beyond finding a better initial guess than simply taking 1.

More on Newton's Method

Exercise 1.8

Newton's method for cube roots is based on the fact that if \(y\) is an approximation to the cube root of \(x\), then a better approximation is given by the value

\[ y^*=\frac{x/y^2 + 2y}{3} \]

Use this formula to implement a cube-root procedure analogous to the square-root procedure. (In section 1.3.4 we will see how to implement Newton's method in general as an abstraction of these square-root and cube-root procedures.)

(define (cube-root x)
    (define TOL 0.00001)
    (define (cube-iter guess)
        (if (good-enough? guess)
            guess
            (cube-iter (guess-next guess))))
    (define (good-enough? guess)
        (< (abs (- (* guess guess guess) x )) TOL))
    (define (guess-next y)
        (/ (+ (/ x (* y y)) y y) 3))
    (cube-iter 1.0))

Exercise 1.9

Each of the following two procedures defines a method for adding two positive integers in terms of the procedures inc, which increments its argument by 1, and dec, which decrements its argument by 1.

(define (+ a b)
  (if (= a 0)
      b
      (inc (+ (dec a) b))))

(define (+ a b)
  (if (= a 0)
      b
      (+ (dec a) (inc b))))

Using the substitution model, illustrate the process generated by each procedure in evaluating (+ 4 5). Are these processes iterative or recursive?
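No answer is recorded above; as a sketch (plus-rec and plus-iter are my renamings of the two + definitions, with inc and dec spelled out so the block is self-contained):

```scheme
(define (inc n) (+ n 1))
(define (dec n) (- n 1))

; First version: deferred incs pile up -- a recursive process.
;   (+ 4 5)
;   (inc (+ 3 5))
;   (inc (inc (+ 2 5)))
;   (inc (inc (inc (+ 1 5))))
;   (inc (inc (inc (inc (+ 0 5)))))
;   (inc (inc (inc (inc 5))))  =>  9
(define (plus-rec a b)
  (if (= a 0) b (inc (plus-rec (dec a) b))))

; Second version: the state is fully captured in a and b -- an iterative process.
;   (+ 4 5) -> (+ 3 6) -> (+ 2 7) -> (+ 1 8) -> (+ 0 9) -> 9
(define (plus-iter a b)
  (if (= a 0) b (plus-iter (dec a) (inc b))))
```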

Exercise 1.10

The following procedure computes a mathematical function called Ackermann's function.

(define (A x y)
  (cond ((= y 0) 0)
        ((= x 0) (* 2 y))
        ((= y 1) 2)
        (else (A (- x 1)
                 (A x (- y 1))))))

What are the values of the following expressions?

(A 1 10)
(A 2 4)
(A 3 3)

Consider the following procedures, where A is the procedure defined above:

(define (f n) (A 0 n))
(define (g n) (A 1 n))
(define (h n) (A 2 n))
(define (k n) (* 5 n n))

Give concise mathematical definitions for the functions computed by the procedures f, g, and h for positive integer values of n. For example, (k n) computes \(5n^2\).
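The values and closed forms aren't recorded above; a sketch, with the values checked directly against the definition of A:

```scheme
(define (A x y)
  (cond ((= y 0) 0)
        ((= x 0) (* 2 y))
        ((= y 1) 2)
        (else (A (- x 1) (A x (- y 1))))))

(display (A 1 10)) (newline)   ; 1024
(display (A 2 4))  (newline)   ; 65536
(display (A 3 3))  (newline)   ; 65536

; For positive n:
;   (f n) = 2n,  (g n) = 2^n,  and (h n) = 2^2^...^2 (a tower of n 2s),
; since (h n) = 2^(h (n-1)) with (h 1) = 2.
```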

Exercise 1.11

A function f is defined by the rule that \(f(n) = n\) if \(n<3\) and \(f(n) = f(n - 1) + 2f(n - 2) + 3f(n - 3)\) if \(n \ge 3\). Write a procedure that computes \(f\) by means of a recursive process. Write a procedure that computes \(f\) by means of an iterative process.

(define (f-recursive n)
    (if (< n 3)
        n
        (+ (f-recursive (- n 1))
           (* (f-recursive (- n 2)) 2)
           (* (f-recursive (- n 3)) 3))))

(define (f-iterative n)
    (define (f-iter a b c i)        ; a = f(k), b = f(k-1), c = f(k-2)
        (if (= i 0)
            a
            (f-iter (+ a (* 2 b) (* 3 c)) a b (- i 1))))
    (if (< n 3)
        n
        (f-iter 2 1 0 (- n 2))))

Exercise 1.12

The following pattern of numbers is called Pascal's triangle.

\[ \begin{array}{cccccccccc} & & & & & & 1 & & & & & & \\ & & & & & 1 & & 1 & & & & & \\ & & & & 1 & & 2 & & 1 & & & & \\ & & & 1 & & 3 & & 3 & & 1 & & & \\ & & 1 & & 4 & & 6 & & 4 & & 1 & & \\ & 1 & & 5 & & 10 & & 10 & & 5 & & 1 & \end{array} \]

The numbers at the edge of the triangle are all 1, and each number inside the triangle is the sum of the two numbers above it. Write a procedure that computes elements of Pascal's triangle by means of a recursive process.

(define (pascal m n)
    (cond ((< m n) 0)
          ((= m n) 1)
          ((or (= n 0) (= m 0))  1)
          (else (+ (pascal (- m 1) n) (pascal (- m 1) (- n 1))))))

Simple mathematical fact: Pascal's triangle is the table of values of \(C_m^n=\frac{m!}{(m-n)!\,n!}\)

Exercise 1.13

Prove that \(Fib(n)\) is the closest integer to \(\varphi^n/\sqrt{5}\), where \(\varphi=(1+\sqrt{5})/2\). Hint: Let \(\psi=(1-\sqrt{5})/2\). Use induction and the definition of the Fibonacci numbers (see section 1.2.2) to prove that \(Fib(n) = (\varphi^n - \psi^n)/\sqrt{5}\).

Proof:

The recursive definition of the Fibonacci numbers, \(Fib(n)=Fib(n-1)+Fib(n-2)\), has the characteristic equation \(\lambda^2=\lambda+1\), whose roots are the given \(\varphi, \psi\). Hence,

\[ Fib(n)=c_1\varphi^n+c_2\psi^n \]

with the constraints:

\[ \begin{alignat}{3} Fib(1) &=& c_1\varphi &+& c_2\psi &=& 1 \\ Fib(2) &=& c_1\varphi^2 &+& c_2\psi^2 &=& 1 \end{alignat} \]

Solving these gives \(c_1=-c_2=1/\sqrt{5}\), which means

\[ Fib(n)=(\varphi^n - \psi^n)/\sqrt{5} \]

Finally, since \(|\psi| = (\sqrt 5 - 1)/2 < 1\), we have \(|\psi^n/\sqrt 5| \le |\psi|/\sqrt 5 < 1/2\) for all \(n \ge 1\), so \(Fib(n)\) differs from \(\varphi^n/\sqrt 5\) by less than \(1/2\): \(Fib(n)\) is the closest integer to \(\varphi^n/\sqrt{5}\).

Exercise 1.14

Draw the tree illustrating the process generated by the count-change procedure of section 1.2.2 in making change for 11 cents. What are the orders of growth of the space and number of steps used by this process as the amount to be changed increases?

(cc 11 5) = 4
    +-- (cc -39 5) <-- 0
    +-- (cc 11 4) = 4
        +-- (cc -14 4) <-- 0
        +-- (cc 11 3) = 4
            +-- (cc 1 3) = 1
            |   +-- (cc -9 3) <-- 0
            |   +-- (cc 1 2) = 1
            |       +-- (cc -4 2) <-- 0
            |       +-- (cc 1 1) = 1
            |           +-- (cc 1 0) <-- 0
            |           +-- (cc 0 1) <-- 1
            +-- (cc 11 2) = 3
                +-- (cc 6 2) = 2
                |   +-- (cc 1 2) = 1
                |   |   +-- (cc -4 2) <-- 0
                |   |   +-- (cc 1 1) = 1
                |   |       +-- ... +-- (cc 0 1) <-- 1
                |   +-- (cc 6 1) = 1
                |       +-- ... +-- (cc 0 1) <-- 1
                +-- (cc 11 1) = 1
                    +-- ... +-- (cc 0 1) <-- 1

Since this recursion essentially enumerates every possible way of composing the amount out of the given coins, the number of steps grows roughly as \(\prod_i \frac{n}{c_i} = O(n^5)\) for the five coin denominations \(c_i\), while the space — the maximum depth of the tree — grows as \(O(n)\).

Exercise 1.15

The sine of an angle (specified in radians) can be computed by making use of the approximation \(\sin x \approx x\) if x is sufficiently small, and the trigonometric identity

\[ \sin x=3\sin\frac{x}{3} - 4\sin^3\frac{x}{3} \]

to reduce the size of the argument of sin. (For purposes of this exercise an angle is considered "sufficiently small" if its magnitude is not greater than 0.1 radians.) These ideas are incorporated in the following procedures

(define (cube x) (* x x x))
(define (p x) (- (* 3 x) (* 4 (cube x))))
(define (sine angle)
   (if (not (> (abs angle) 0.1))
       angle
       (p (sine (/ angle 3.0)))))
  1. How many times is the procedure p applied when (sine 12.15) is evaluated?
  2. What is the order of growth in space and number of steps (as a function of a) used by the process generated by the sine procedure when (sine a) is evaluated?
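The two answers aren't recorded above; a counting sketch (the counter is my addition, the rest reproduces the exercise's procedures). Since the angle is divided by 3 until it falls below 0.1, both the space (deferred applications of p) and the number of steps grow as \(O(\log a)\):

```scheme
(define (cube x) (* x x x))
(define count 0)                       ; counts applications of p
(define (p x)
  (set! count (+ count 1))
  (- (* 3 x) (* 4 (cube x))))
(define (sine angle)
  (if (not (> (abs angle) 0.1))
      angle
      (p (sine (/ angle 3.0)))))

(sine 12.15)
(display count) (newline)   ; 5, since 12.15/3^5 = 0.05 <= 0.1 but 12.15/3^4 > 0.1
```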

Exercise 1.16

Design a procedure that evolves an iterative exponentiation process that uses successive squaring and uses a logarithmic number of steps, as does fast-expt.
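No solution is recorded above; a sketch using the standard invariant-quantity trick, keeping \(a \cdot b^n\) constant across iterations:

```scheme
(define (square x) (* x x))

(define (fast-expt-iter b n)
  ; invariant: a * b^n is unchanged from one iteration to the next
  (define (iter a b n)
    (cond ((= n 0) a)
          ((even? n) (iter a (square b) (/ n 2)))
          (else (iter (* a b) b (- n 1)))))
  (iter 1 b n))
```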

Exercise 1.17

The exponentiation algorithms in this section are based on performing exponentiation by means of repeated multiplication. In a similar way, one can perform integer multiplication by means of repeated addition. The following multiplication procedure (in which it is assumed that our language can only add, not multiply) is analogous to the expt procedure:

(define (* a b)
  (if (= b 0)
      0
      (+ a (* a (- b 1)))))

This algorithm takes a number of steps that is linear in b. Now suppose we include, together with addition, operations double, which doubles an integer, and halve, which divides an (even) integer by 2. Using these, design a multiplication procedure analogous to fast-expt that uses a logarithmic number of steps.

(define (double x) (+ x x))
(define (halve x) (/ x 2))

(define (fast-mult a b)
    (cond ((= b 0)   0)
          ((even? b) (double (fast-mult a (halve b))))
          (else      (+ a (fast-mult a (- b 1))))))

Exercise 1.18

Using the results of exercises 1.16 and 1.17, devise a procedure that generates an iterative process for multiplying two integers in terms of adding, doubling, and halving and uses a logarithmic number of steps.

(define (fast-mult-iterative a b)
    ; invariant: s + a*b is unchanged from one iteration to the next
    (define (mult-iter a b s)
        (cond ((= b 0) s)
              ((even? b) (mult-iter (+ a a) (/ b 2) s))
              (else (mult-iter a (- b 1) (+ s a)))))
    (mult-iter a b 0))

Exercise 1.19

There is a clever algorithm for computing the Fibonacci numbers in a logarithmic number of steps. Recall the transformation of the state variables a and b in the fib-iter process of section 1.2.2: \(a \leftarrow a+b\) and \(b \leftarrow a\). Call this transformation \(T\), and observe that applying \(T\) over and over again n times, starting with 1 and 0, produces the pair Fib(n + 1) and Fib(n). In other words, the Fibonacci numbers are produced by applying \(T^n\), the nth power of the transformation \(T\), starting with the pair \((1,0)\). Now consider \(T\) to be the special case of \(p=0\) and \(q=1\) in a family of transformations \(T_{pq}\), where \(T_{pq}\) transforms the pair \((a,b)\) according to \(a \leftarrow bq+aq+ap\) and \(b \leftarrow bp+aq\). Show that if we apply such a transformation \(T_{pq}\) twice, the effect is the same as using a single transformation \(T_{p'q'}\) of the same form, and compute \(p'\) and \(q'\) in terms of \(p\) and \(q\). This gives us an explicit way to square these transformations, and thus we can compute \(T^n\) using successive squaring, as in the fast-expt procedure. Put this all together to complete the following procedure, which runs in a logarithmic number of steps:

Applying \(T_{pq}\) twice to \((a,b)\) and collecting terms gives \(b \leftarrow b(p^2+q^2) + a(q^2+2pq)\), which has the same form with \(p'=p^2+q^2\) and \(q'=q^2+2pq\). Filling in the template:

(define (fib n)
  (fib-iter 1 0 0 1 n))
(define (fib-iter a b p q count)
  (cond ((= count 0) b)
        ((even? count)
         (fib-iter a
                   b
                   (+ (* p p) (* q q))      ; p' = p^2 + q^2
                   (+ (* q q) (* 2 p q))    ; q' = q^2 + 2pq
                   (/ count 2)))
        (else (fib-iter (+ (* b q) (* a q) (* a p))
                        (+ (* b p) (* a q))
                        p
                        q
                        (- count 1)))))

Exercise 1.20

The process that a procedure generates is of course dependent on the rules used by the interpreter. As an example, consider the iterative gcd procedure given above. Suppose we were to interpret this procedure using normal-order evaluation, as discussed in section 1.1.5. (The normal-order-evaluation rule for if is described in exercise 1.5.) Using the substitution method (for normal order), illustrate the process generated in evaluating (gcd 206 40) and indicate the remainder operations that are actually performed. How many remainder operations are actually performed in the normal-order evaluation of (gcd 206 40)? In the applicative-order evaluation?

; normal-order evaluation expands like this, with the operands of gcd
; left unevaluated until they are needed:
(gcd 206 40)
(if (= 40 0) 206 (gcd 40 (remainder 206 40)))
(if (= (remainder 206 40) 0)                  ; 1 remainder
    40
    (gcd (remainder 206 40) (remainder 40 (remainder 206 40))))
(if (= (remainder 40 (remainder 206 40)) 0)   ; 2 remainders
    (remainder 206 40)
    (if ... ...))                             ; you get the idea
; evaluating the successive predicates performs 1 + 2 + 4 + 7 = 14
; remainder operations, and evaluating the final result (the a of the
; last call) performs 4 more: 18 remainder operations in total.

; applicative-order evaluation
(gcd 206 40)
(gcd 40 6)
(gcd 6 4)
(gcd 4 2)
(gcd 2 0)
2
; each call performs remainder at most once, hence 4 times in total.

Exercise 1.21

Use the smallest-divisor procedure to find the smallest divisor of each of the following numbers: 199, 1999, 19999.
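No answer is recorded above; a sketch using the book's smallest-divisor, reproduced here so the block is self-contained:

```scheme
(define (smallest-divisor n) (find-divisor n 2))
(define (find-divisor n test-divisor)
  (cond ((> (* test-divisor test-divisor) n) n)
        ((= 0 (remainder n test-divisor)) test-divisor)
        (else (find-divisor n (+ test-divisor 1)))))

(display (smallest-divisor 199))   (newline)  ; 199  (prime)
(display (smallest-divisor 1999))  (newline)  ; 1999 (prime)
(display (smallest-divisor 19999)) (newline)  ; 7    (19999 = 7 * 2857)
```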

Exercise 1.22

Most Lisp implementations include a primitive called runtime that returns an integer that specifies the amount of time the system has been running (measured, for example, in microseconds). The following timed-prime-test procedure, when called with an integer n, prints n and checks to see if n is prime. If n is prime, the procedure prints three asterisks followed by the amount of time used in performing the test.

(define (timed-prime-test n)
  (newline)
  (display n)
  (start-prime-test n (runtime)))
(define (start-prime-test n start-time)
  (if (prime? n)
      (report-prime (- (runtime) start-time))))
(define (report-prime elapsed-time)
  (display " *** ")
  (display elapsed-time))

Using this procedure, write a procedure search-for-primes that checks the primality of consecutive odd integers in a specified range. Use your procedure to find the three smallest primes larger than 1000; larger than 10,000; larger than 100,000; larger than 1,000,000. Note the time needed to test each prime. Since the testing algorithm has order of growth of \(O(\sqrt{n})\), you should expect that testing for primes around 10,000 should take about 10 times as long as testing for primes around 1000. Do your timing data bear this out? How well do the data for 100,000 and 1,000,000 support the \(\sqrt{n}\) prediction? Is your result compatible with the notion that programs on your machine run in time proportional to the number of steps required for the computation?

The search-for-prime procedure:

(define (search-for-primes s)
    (if (prime? s)
        s
        (search-for-primes (+ 1 s))))

Since, on an i7-2760QM CPU @ 2.40GHz, runtime shows no measurable difference between a test around \(10^5\) and one around \(10^{10}\) (both 0.00 s), I used cpulimit to cap the CPU usage of the process at 1%. Here are the test results:

n        prime tested        elapsed time
10^10    10000000019         0.1340 s
10^12    1000000000039       1.27 s
10^14    100000000000031     12.14 s

Basically, the running time grows roughly as \(O(\sqrt{n})\): each hundredfold increase in \(n\) multiplies the elapsed time by about 10.

Exercise 1.23

The smallest-divisor procedure shown at the start of this section does lots of needless testing: After it checks to see if the number is divisible by 2 there is no point in checking to see if it is divisible by any larger even numbers. This suggests that the values used for test-divisor should not be 2, 3, 4, 5, 6, ..., but rather 2, 3, 5, 7, 9, .... To implement this change, define a procedure next that returns 3 if its input is equal to 2 and otherwise returns its input plus 2. Modify the smallest-divisor procedure to use (next test-divisor) instead of (+ test-divisor 1). With timed-prime-test incorporating this modified version of smallest-divisor, run the test for each of the 12 primes found in exercise 1.22. Since this modification halves the number of test steps, you should expect it to run about twice as fast. Is this expectation confirmed? If not, what is the observed ratio of the speeds of the two algorithms, and how do you explain the fact that it is different from 2?

; new next for test-divisor
(define (next test-divisor)
    (if (= 2 test-divisor) 3 (+ test-divisor 2)))

Again limiting the CPU usage to 1%, here are the test results:

prime tested        elapsed time
10000000019         0.07 s
1000000000039       0.67 s
100000000000031     6.60 s

This almost halves the runtime.

Exercise 1.24

Modify the timed-prime-test procedure of exercise 1.22 to use fast-prime? (the Fermat method), and test each of the 12 primes you found in that exercise. Since the Fermat test has \(O(\log n)\) growth, how would you expect the time to test primes near 1,000,000 to compare with the time needed to test primes near 1000? Do your data bear this out? Can you explain any discrepancy you find?

(define (timed-fast-prime-test n)
  (newline)
  (display n)
  (start-prime-test n (runtime)))
(define (start-prime-test n start-time)
  (if (fast-prime? n)
      (report-prime (- (runtime) start-time))))
(define (report-prime elapsed-time)
  (display " *** ")
  (display elapsed-time))

Exercise 1.25

Alyssa P. Hacker complains that we went to a lot of extra work in writing expmod. After all, she says, since we already know how to compute exponentials, we could have simply written

(define (expmod base exp m)
  (remainder (fast-expt base exp) m))

Is she correct? Would this procedure serve as well for our fast prime tester? Explain.

This version of expmod is indeed faster than the original for small base and exp, since it evaluates remainder only once. But with larger base and/or exp it is much slower and may even give a wrong answer: (fast-expt base exp) is a huge number, so the computation either requires arbitrary-precision (bignum) arithmetic on very large numbers, which is slow, or simply overflows in a fixed-width implementation, which produces a wrong result.

The original version of expmod takes the remainder at each step of its recursion, which keeps every intermediate result smaller than m and prevents large numbers from ever appearing.

Exercise 1.26

Louis Reasoner is having great difficulty doing exercise 1.24. His fast-prime? test seems to run more slowly than his prime? test. Louis calls his friend Eva Lu Ator over to help. When they examine Louis's code, they find that he has rewritten the expmod procedure to use an explicit multiplication, rather than calling square:

(define (expmod base exp m)
  (cond ((= exp 0) 1)
        ((even? exp)
         (remainder (* (expmod base (/ exp 2) m)
                       (expmod base (/ exp 2) m))
                    m))
        (else
         (remainder (* base (expmod base (- exp 1) m))
                    m))))

"I don't see what difference that could make," says Louis. "I do." says Eva. "By writing the procedure like that, you have transformed the \(O(\log n)\) process into a \(O(n)\) process." Explain.

The original expmod uses square to avoid evaluating the recursive call twice, and the modified version loses exactly this. With an even exponent, the modified expmod calls itself twice, so the tree of calls has about exp leaves: the process does essentially the same work as multiplying base by itself exp times, plus the overhead of all the recursive calls, which makes prime? even slower. That turns the \(O(\log n)\) process into an \(O(n)\) one.

Exercise 1.27

Demonstrate that the Carmichael numbers listed in footnote 47 really do fool the Fermat test. That is, write a procedure that takes an integer \(n\) and tests whether \(a^n\) is congruent to \(a\) modulo \(n\) for every \(a<n\), and try your procedure on the given Carmichael numbers.

(define (cong n)
    (define (cong-iter a)
        (if (= n a) #t
            (and (= a (expmod a n n)) (cong-iter (+ a 1)))))
    (cong-iter 1))
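A self-contained check (hypothetical names; expmod reproduced from the book) that the first Carmichael number, \(561 = 3\cdot 11\cdot 17\), passes this exhaustive Fermat test although it is composite:

```scheme
(define (expmod base e m)
  (cond ((= e 0) 1)
        ((even? e)
         (remainder (let ((x (expmod base (/ e 2) m))) (* x x)) m))
        (else (remainder (* base (expmod base (- e 1) m)) m))))

(define (fermat-fooled? n)          ; #t when a^n = a (mod n) for every a < n
  (define (loop a)
    (if (= a n)
        #t
        (and (= a (expmod a n n)) (loop (+ a 1)))))
  (loop 1))

(display (fermat-fooled? 561)) (newline)   ; #t, yet 561 = 3 * 11 * 17
(display (fermat-fooled? 15))  (newline)   ; #f for an ordinary composite
```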

Exercise 1.28

One variant of the Fermat test that cannot be fooled is called the Miller-Rabin test (Miller 1976; Rabin 1980). This starts from an alternate form of Fermat's Little Theorem, which states that if \(n\) is a prime number and \(a\) is any positive integer less than \(n\), then \(a\) raised to the \((n - 1)\)st power is congruent to 1 modulo \(n\). To test the primality of a number \(n\) by the Miller-Rabin test, we pick a random number \(a < n\) and raise \(a\) to the \((n - 1)\)st power modulo \(n\) using the expmod procedure. However, whenever we perform the squaring step in expmod, we check to see if we have discovered a "nontrivial square root of 1 modulo \(n\)," that is, a number not equal to 1 or \(n - 1\) whose square is equal to 1 modulo \(n\). It is possible to prove that if such a nontrivial square root of 1 exists, then \(n\) is not prime. It is also possible to prove that if \(n\) is an odd number that is not prime, then, for at least half the numbers \(a<n\), computing \(a^{n-1}\) in this way will reveal a nontrivial square root of 1 modulo \(n\). (This is why the Miller-Rabin test cannot be fooled.) Modify the expmod procedure to signal if it discovers a nontrivial square root of 1, and use this to implement the Miller-Rabin test with a procedure analogous to fermat-test. Check your procedure by testing various known primes and non-primes. Hint: One convenient way to make expmod signal is to have it return 0.

(define (expmod-M-R-test base e m)
    ; returns 0 to signal a nontrivial square root of 1 modulo m
    (define (square-check x)
        (if (and (not (= x 1)) (not (= x (- m 1)))
                 (= 1 (remainder (* x x) m)))
            0
            (remainder (* x x) m)))
    (cond ((= e 0) 1)
          ((even? e) (square-check (expmod-M-R-test base (/ e 2) m)))
          (else (remainder (* base (expmod-M-R-test base (- e 1) m)) m))))

(define (M-R-prime-test n a)
    (= 1 (expmod-M-R-test a (- n 1) n)))

Exercise 1.29

Simpson's Rule is a more accurate method of numerical integration than the method illustrated above. Using Simpson's Rule, the integral of a function \(f\) between \(a\) and \(b\) is approximated as

\[ \frac h3 [y_0 + 4y_1 + 2y_2 + 4y_3 + 2y_4 + \cdots + 2y_{n-2} + 4y_{n-1} + y_n] \]

where \(h = (b - a)/n\), for some even integer \(n\), and \(y_k = f(a + kh)\). (Increasing \(n\) increases the accuracy of the approximation.) Define a procedure that takes as arguments \(f\), \(a\), \(b\), and \(n\) and returns the value of the integral, computed using Simpson's Rule. Use your procedure to integrate cube between 0 and 1 (with \(n = 100\) and \(n = 1000\)), and compare the results to those of the integral procedure shown above.

(define (integral-simpson f a b n)
    (define h (/ (- b a) 1.0 n))
    (define (y i) (f (+ a (* i h))))
    (define (iter i sum)
        (cond ((= i n)  (+ sum (y n)))
              ((= i 0)  (iter 1 (+ sum (y 0))))
              ((odd? i) (iter (+ i 1) (+ sum (* 4 (y i)))))
              (else     (iter (+ i 1) (+ sum (* 2 (y i)))))))
    (/ (* h (iter 0 0.0)) 3.0))

Test results for cube with integral-simpson and integral are shown in the table below:

n       integral-simpson    integral
100     .25                 .24998750000000042
1000    .25                 .249999875000001

(Simpson's Rule is exact for polynomials of degree up to three, so for cube it returns .25 for any even n, up to rounding.)

Simpson's Rule has error \(\frac{\left|b-a\right|^5}{180n^4}M_4\), where \(M_i=\sup\limits_{x\in [a,b]} \left|f^{(i)}(x)\right|\), while the error of integral, which uses the midpoint rule, is \(\frac{\left|b-a\right|^3}{24n^2}M_2\)

Exercise 1.30

The sum procedure above generates a linear recursion. The procedure can be rewritten so that the sum is performed iteratively. Show how to do this by filling in the missing expressions in the following definition:

(define (sum term a next b)
  (define (iter a result)
    (if <??>
        <??>
        (iter <??> <??>)))
  (iter <??> <??>))
(define (sum term a next b)
  (define (iter a result)
    (if (> a b)
        result
        (iter (next a) (+ result (term a)))))
  (iter a 0.0))

Exercise 1.31

a. The sum procedure is only the simplest of a vast number of similar abstractions that can be captured as higher-order procedures. Write an analogous procedure called product that returns the product of the values of a function at points over a given range. Show how to define factorial in terms of product. Also use product to compute approximations to \(\pi\) using the formula: \(\frac{\pi}{4} = \frac{2\cdot4\cdot4\cdot6\cdot6\cdot8\cdots}{3\cdot3\cdot5\cdot5\cdot7\cdot7\cdots}\)

(define (product term a next b)
    (define (iter a result)
        (cond ((> a b) result)
              ((= 0 result) 0)
              (else (iter (next a) (* result (term a))))))
    (iter a 1.0))

(define (pi-wallis n)
    (define (incr a) (+ a 1))
    (define (term k)              ; k-th factor: (2k)(2k+2) / (2k+1)^2
        (/ (* 2.0 k (+ (* 2 k) 2))
           (* (+ (* 2 k) 1) (+ (* 2 k) 1))))
    (* 4 (product term 1 incr n)))
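The exercise also asks for factorial in terms of product; a sketch, with a self-contained recursive product so the block stands alone:

```scheme
(define (product term a next b)
  (if (> a b)
      1
      (* (term a) (product term (next a) next b))))

(define (factorial n)
  (define (identity x) x)
  (define (incr x) (+ x 1))
  (product identity 1 incr n))

(display (factorial 5)) (newline)   ; 120
```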

Exercise 1.32

a. Show that sum and product (exercise 1.31) are both special cases of a still more general notion called accumulate that combines a collection of terms, using some general accumulation function:

(accumulate combiner null-value term a next b)

accumulate takes as arguments the same term and range specifications as sum and product, together with a combiner procedure (of two arguments) that specifies how the current term is to be combined with the accumulation of the preceding terms and a null-value that specifies what base value to use when the terms run out. Write accumulate and show how sum and product can both be defined as simple calls to accumulate.

(define (accumulate combiner null-value term a next b)
    (define (iter a r)
        (if (> a b)
            r
            (iter (next a) (combiner r (term a)))))
    (iter a null-value))
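The exercise also asks to show sum and product as calls to accumulate; a sketch, with a self-contained iterative accumulate so the block stands alone:

```scheme
(define (accumulate combiner null-value term a next b)
  (define (iter a r)
    (if (> a b)
        r
        (iter (next a) (combiner r (term a)))))
  (iter a null-value))

(define (sum term a next b)     (accumulate + 0 term a next b))
(define (product term a next b) (accumulate * 1 term a next b))

(define (identity x) x)
(define (incr x) (+ x 1))
(display (sum identity 1 incr 10))    (newline)  ; 55
(display (product identity 1 incr 5)) (newline)  ; 120
```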

Exercise 1.33

You can obtain an even more general version of accumulate (exercise 1.32) by introducing the notion of a filter on the terms to be combined. That is, combine only those terms derived from values in the range that satisfy a specified condition. The resulting filtered-accumulate abstraction takes the same arguments as accumulate, together with an additional predicate of one argument that specifies the filter. Write filtered-accumulate as a procedure. Show how to express the following using filtered-accumulate:

a. The sum of the squares of the prime numbers in the interval a to b (assuming that you have a prime? predicate already written)

b. The product of all the positive integers less than n that are relatively prime to n (i.e., all positive integers \(i < n\) such that \(\rm{GCD}(i,n) = 1\)).

(define (filtered-accumulate combiner null-value filter term a next b)
    (define (iter a r)
        (cond ((> a b) r)
              ((filter a) (iter (next a) (combiner (term a) r)))
              (else (iter (next a) r))))
    (iter a null-value))

(define (sum-of-prime-squares a b)
    (define (square x) (* x x))
    (define (incr a) (+ 1 a))
    (filtered-accumulate + 0 prime? square a incr b))

(define (prod-of-relatively-prime n)
    (define (id a) a)
    (define (incr a) (+ 1 a))
    (define (rp-n a) (= 1 (gcd n a)))
    (filtered-accumulate * 1 rp-n id 1 incr n))

Exercise 1.34

Suppose we define the procedure

(define (f g) (g 2))

Then we have

(f square)
4

(f (lambda (z) (* z (+ z 1))))
6

What happens if we (perversely) ask the interpreter to evaluate the combination (f f)? Explain.

The evaluation of (f f) would go like this:

(f f) -> (f 2) -> (2 2)

And (2 2) raises an error, since the number 2 is not a procedure and cannot be applied.

Exercise 1.35

Show that the golden ratio \(\phi\) (section 1.2.2) is a fixed point of the transformation \(x \mapsto 1 + \frac 1x\), and use this fact to compute \(\phi\) by means of the fixed-point procedure.

From the definition of \(\phi = \lim_{n\to\infty}\frac{f_{n+1}}{f_n}\), where \(f_n\) is the n-th term of Fibonacci Sequence, since \(\phi > 0\),

\[ 1+\frac 1\phi =1 + 1 / \lim_{n\to\infty}\frac{f_{n+1}}{f_n}=\lim_{n\to\infty}\frac{f_n+f_{n+1}}{f_{n+1}} =\lim_{n\to\infty}\frac{f_{n+2}}{f_{n+1}} =\phi \]

Therefore, \(\phi\) is a fixed point of the transformation \(x\mapsto 1+\frac 1x\).
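The actual computation isn't shown above; a sketch with a book-style fixed-point procedure (the tolerance value is an assumption):

```scheme
(define tolerance 0.00001)
(define (fixed-point f first-guess)
  (define (close-enough? v1 v2) (< (abs (- v1 v2)) tolerance))
  (define (try guess)
    (let ((next (f guess)))
      (if (close-enough? guess next)
          next
          (try next))))
  (try first-guess))

; phi = 1.6180...
(display (fixed-point (lambda (x) (+ 1 (/ 1 x))) 1.0)) (newline)
```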

Exercise 1.36

Modify fixed-point so that it prints the sequence of approximations it generates, using the newline and display primitives shown in exercise 1.22. Then find a solution to \(x^x = 1000\) by finding a fixed point of \(x\to log(1000)/log(x)\). (Use Scheme's primitive log procedure, which computes natural logarithms.) Compare the number of steps this takes with and without average damping. (Note that you cannot start fixed-point with a guess of 1, as this would cause division by \(log(1) = 0\).)

The modified fixed-point, printing each guess:

(define TOL 0.00001)
(define (fix-point-display f first-guess)
    (define (close-enough? x next) (> TOL (abs (- x next))))
    (define (report x) (display "Guessing ") (display x) (newline))
    (define (try x)
        (report x)
        (let ((next (f x)))
            (if (close-enough? x next)
                next
                (try next))))
    (try first-guess))

With first-guess set to 2.0, it takes 34 guesses to reach the root of \(x^x=1000\), while with average damping it takes only 9 steps to reach the same root.
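
The two searches can be compared like this, assuming the book's average-damp procedure from section 1.3.4:

(fix-point-display (lambda (x) (/ (log 1000) (log x))) 2.0)
(fix-point-display (average-damp (lambda (x) (/ (log 1000) (log x)))) 2.0)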

Exercise 1.37

a. An infinite continued fraction is an expression of the form

\[ f=\cfrac{N_1}{D_1+\cfrac{N_2}{D_2+\cfrac{N_3}{D_3+\cdots}}} \]

As an example, one can show that the infinite continued fraction expansion with the \(N_i\) and the \(D_i\) all equal to 1 produces \(1/\phi\), where \(\phi\) is the golden ratio (described in section 1.2.2). One way to approximate an infinite continued fraction is to truncate the expansion after a given number of terms. Such a truncation -- a so-called k-term finite continued fraction -- has the form

\[ f_k=\cfrac{N_1}{D_1+\cfrac{N_2}{\ddots+\cfrac{N_k}{D_k}}} \]

Suppose that n and d are procedures of one argument (the term index i) that return the \(N_i\) and \(D_i\) of the terms of the continued fraction. Define a procedure cont-frac such that evaluating (cont-frac n d k) computes the value of the k-term finite continued fraction. Check your procedure by approximating \(1/\phi\) using

(cont-frac (lambda (i) 1.0)
           (lambda (i) 1.0)
           k)

for successive values of k. How large must you make k in order to get an approximation that is accurate to 4 decimal places?

b. If your cont-frac procedure generates a recursive process, write one that generates an iterative process. If it generates an iterative process, write one that generates a recursive process.

The iterative version:

(define (cont-frac n d k)
    (define (iter i ret)
        (cond ((= i 0) ret)
              (else (iter (- i 1) (/ (n i) (+ (d i) ret))))))
    (iter (- k 1) (/ (n k) (d k))))

k needs to be at least 12 to get an approximation that is accurate to 4 decimal places.
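
For comparison, a recursive version that counts up from 1 and produces the same values:

(define (cont-frac-rec n d k)
    (define (rec i)
        (if (= i k)
            (/ (n i) (d i))
            (/ (n i) (+ (d i) (rec (+ i 1))))))
    (rec 1))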

Exercise 1.38

In 1737, the Swiss mathematician Leonhard Euler published a memoir De Fractionibus Continuis, which included a continued fraction expansion for \(e-2\), where \(e\) is the base of the natural logarithms. In this fraction, the \(N_i\) are all 1, and the \(D_i\) are successively 1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8, .... Write a program that uses your cont-frac procedure from exercise 1.37 to approximate \(e\), based on Euler's expansion.

(define (cont-frac-e n)
    (+ 2
       (cont-frac (lambda (i) 1.0)
                  (lambda (i) (if (= (remainder i 3) 2) (/ (* 2.0 (+ i 1)) 3) 1.0))
                  n)))

Exercise 1.39

A continued fraction representation of the tangent function was published in 1770 by the German mathematician J.H. Lambert:

\[ \tan x=\cfrac{x}{1-\cfrac{x^2}{3-\cfrac{x^2}{5-\cdots}}} \]

where \(x\) is in radians. Define a procedure (tan-cf x k) that computes an approximation to the tangent function based on Lambert's formula. k specifies the number of terms to compute, as in exercise 1.37.

(define (tan-cf x k)
    ; cont-frac sums its terms, so the x^2 numerators are negated
    ; to account for the subtractions in Lambert's formula
    (define (square x) (* x x))
    (cont-frac (lambda (n) (if (= n 1) x (- (square x))))
               (lambda (n) (- (* 2 n) 1))
               k))

Exercise 1.40

Define a procedure cubic that can be used together with the newtons-method procedure in expressions of the form

(newtons-method (cubic a b c) 1)

to approximate zeros of the cubic \(x^3 + ax^2 + bx + c\).

(define (cubic a b c)
    (lambda (x) 
        (+ (cube x) (* a (square x)) (* b x) c)))
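
A quick sanity check, assuming the book's newtons-method procedure from section 1.3.4 (the cubic \(x^3 - 8\) has the real root 2):

(newtons-method (cubic 0 0 -8) 1.0)
; converges to approximately 2.0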

Exercise 1.41

Define a procedure double that takes a procedure of one argument as argument and returns a procedure that applies the original procedure twice. For example, if inc is a procedure that adds 1 to its argument, then (double inc) should be a procedure that adds 2. What value is returned by (((double (double double)) inc) 5)

(define (double f) (lambda (x) (f (f x))))

And (((double (double double)) inc) 5) returns 21: (double double) applies its argument 4 times, so (double (double double)) applies it \(4^2 = 16\) times, and \(5 + 16 = 21\).

Exercise 1.42

Let \(f\) and \(g\) be two one-argument functions. The composition \(f\) after \(g\) is defined to be the function \(x\mapsto f(g(x))\). Define a procedure compose that implements composition. For example, if inc is a procedure that adds 1 to its argument, ((compose square inc) 6) should return 49.

(define (compose f g) (lambda (x) (f (g x))))

Exercise 1.43

If \(f\) is a numerical function and \(n\) is a positive integer, then we can form the n-th repeated application of \(f\), which is defined to be the function whose value at \(x\) is \(f(f(...(f(x))...))\). For example, if \(f\) is the function \(x\mapsto x+1\), then the nth repeated application of \(f\) is the function \(x\mapsto x+n\). If \(f\) is the operation of squaring a number, then the n-th repeated application of \(f\) is the function that raises its argument to the \(2n\)-th power. Write a procedure that takes as inputs a procedure that computes \(f\) and a positive integer \(n\) and returns the procedure that computes the nth repeated application of \(f\). Your procedure should be able to be used as follows: ((repeated square 2) 5), returns 625

Hint: You may find it convenient to use compose from exercise 1.42.

(define (repeated f n)
    (define (iter t rf)
        (if (= t n) rf
            (iter (+ t 1) (compose f rf))))
    (iter 0 (lambda (x) x)))

Exercise 1.44

The idea of smoothing a function is an important concept in signal processing. If \(f\) is a function and \(\rm{d}x\) is some small number, then the smoothed version of \(f\) is the function whose value at a point \(x\) is the average of \(f(x - dx)\), \(f(x)\), and \(f(x + dx)\). Write a procedure smooth that takes as input a procedure that computes \(f\) and returns a procedure that computes the smoothed \(f\). It is sometimes valuable to repeatedly smooth a function (that is, smooth the smoothed function, and so on) to obtained the \(n\)-fold smoothed function. Show how to generate the \(n\)-fold smoothed function of any given function using smooth and repeated from exercise 1.43.

(define dx 0.0001)
(define (smooth f)
    (lambda (x)
        (/ (+ (f (- x dx)) (f x) (f (+ x dx))) 3)))
; the n-fold smoothed function applies smooth itself n times to f
(define (n-smooth f n)
    ((repeated smooth n) f))

Exercise 1.45

We saw in section 1.3.3 that attempting to compute square roots by naively finding a fixed point of \(y\mapsto x/y\) does not converge, and that this can be fixed by average damping. The same method works for finding cube roots as fixed points of the average-damped \(y\mapsto x/y^2\). Unfortunately, the process does not work for fourth roots -- a single average damp is not enough to make a fixed-point search for \(y\mapsto x/y^3\) converge. On the other hand, if we average damp twice (i.e., use the average damp of the average damp of \(y\mapsto x/y^3\)) the fixed-point search does converge. Do some experiments to determine how many average damps are required to compute \(n\)-th roots as a fixed-point search based upon repeated average damping of \(y\mapsto x/y^{n-1}\). Use this to implement a simple procedure for computing \(n\)-th roots using fixed-point, average-damp, and the repeated procedure of exercise 1.43. Assume that any arithmetic operations you need are available as primitives.

Experiments show that \(\lfloor \log_2 n \rfloor\) average damps are enough to make the fixed-point search for the \(n\)-th root converge.

(define (n-th-root n x)
    ; (damps n) computes floor(log2 n)
    (define (damps k) (if (< k 2) 0 (+ 1 (damps (quotient k 2)))))
    (fixed-point ((repeated average-damp (damps n))
                  (lambda (y) (/ x (expt y (- n 1)))))
                 1.0))

Exercise 1.46

Several of the numerical methods described in this chapter are instances of an extremely general computational strategy known as iterative improvement. Iterative improvement says that, to compute something, we start with an initial guess for the answer, test if the guess is good enough, and otherwise improve the guess and continue the process using the improved guess as the new guess. Write a procedure iterative-improve that takes two procedures as arguments: a method for telling whether a guess is good enough and a method for improving a guess. iterative-improve should return as its value a procedure that takes a guess as argument and keeps improving the guess until it is good enough. Rewrite the sqrt procedure of section 1.1.7 and the fixed-point procedure of section 1.3.3 in terms of iterative-improve.

(define (iterative-improve good-enough? improve)
    (define (iter guess)
        (if (good-enough? guess)
            guess
            (iter (improve guess))))
    iter)

; rewrite sqrt with iterative-improve
(define TOL 0.0001)
(define (sqrt x)
    ((iterative-improve (lambda (g) (> TOL (abs (- x (square g)))))
                        (lambda (g) (/ (+ g (/ x g)) 2)))
     1.0))

; rewrite fixed-point
(define (fixed-point f first-guess)
    ((iterative-improve (lambda (y) (> TOL (abs (- y (f y)))))
                        f)
     first-guess))